DOD’s primary military medical mission is to maintain the health of 1.6 million active duty service personnel and be prepared to deliver health care during wartime. Also, as an employer, DOD offers health services to 6.6 million additional military-related beneficiaries, including active duty members’ dependents and military retirees and their dependents. Most care is provided in 115 hospitals and 471 clinics—called military treatment facilities—operated by the Army, Navy, and Air Force worldwide. This direct delivery health system is supplemented by DOD-funded care provided in civilian facilities. In fiscal year 1997, DOD spent about $12 billion for direct care and about $3.5 billion for civilian facility care. In the late 1980s, in response to increasing health care costs and uneven access to care, DOD initiated, with congressional authority, a series of demonstrations to evaluate alternative health care delivery approaches. On the basis of this experience, DOD designed TRICARE as its managed health care program. TRICARE is intended to ensure a high-quality, consistent health care benefit; preserve choice of health care providers for beneficiaries; improve access to care; and contain health care costs. TRICARE is designed to give beneficiaries a choice among three approaches to health care: TRICARE Prime, an HMO-like option; TRICARE Extra, which is similar to a preferred provider option; and TRICARE Standard, a fee-for-service-type option. The TRICARE program uses regional managed care support contracts to augment its MTFs. The contractors’ responsibilities include developing civilian provider networks, performing utilization management functions, processing claims, and providing such support functions as beneficiary education and enrollment. The 11 TRICARE regions in the United States are covered by seven managed care support contracts, and health care delivery has commenced under five of the contracts (see fig. 1).
The Office of the Assistant Secretary of Defense (Health Affairs) (hereafter referred to as Health Affairs) sets TRICARE policy and has overall responsibility for the program. The managed care support contractors are overseen by the TRICARE Support Office (TSO), a part of Health Affairs. The Army, Navy, and Air Force Surgeons General have authority over the MTFs in their respective services. To coordinate MTF and contractor services, each region is headed by a “lead agent,” which is led by a designated MTF commander and supported by a joint-service staff. The lead agent responds to direction from Health Affairs, but the services retain authority and control over their medical facilities and personnel. Therefore, lead agents seek to affect operations by working cooperatively with the MTFs in their region and the regional managed care support contractor. DOD conducts beneficiary satisfaction surveys—a common private sector health care practice—to measure TRICARE’s performance and reports the results throughout the MHS. Health Affairs currently conducts two such ongoing surveys: an annual systemwide survey of all eligible beneficiaries and a monthly survey of patients’ perceptions of outpatient visits at MTFs. Both surveys are based on widely used private sector survey instruments. Health Affairs’ TRICARE Marketing Office also conducted a survey of TRICARE Prime enrollees’ satisfaction in 1996. Health Affairs officials told us that a systemwide survey targeted to MTF inpatient care is currently being planned, and a survey targeted to civilian TRICARE network care is under discussion. DOD policy requires most other beneficiary surveys—whether proposed by the services, MTFs, or managed care support contractors—to first be approved by Health Affairs. The annual surveys have indicated generally high overall satisfaction levels, with mixed results for satisfaction with particular aspects of military health care.
The MTF outpatient surveys have shown satisfaction levels higher than civilian HMOs’, and the TRICARE Prime enrollee survey showed satisfaction levels somewhat lower than those of the private sector. However, officials also told us that it is too soon to use DOD’s survey results as a measure of TRICARE’s overall success. Detailed descriptions of the surveys are provided in appendix II. Public Law 102-484 requires DOD to conduct an annual beneficiary survey. The survey’s purpose is to provide a comprehensive look at how beneficiaries view their health care—including their health status, the availability of health services, and related matters. The questions and scales used in the annual survey were based on private sector surveys that had been extensively tested for reliability and validity. DOD uses the survey responses to represent all eligible beneficiaries’ views and reports results for each MTF catchment area. DOD’s 1996 annual survey results show that active duty family members’ satisfaction generally increased when compared with 1994-95 results, while satisfaction decreased for retirees and their family members. But retirees’ satisfaction generally remained higher than that of active duty family members in both surveys. Moreover, active duty family members’ satisfaction was slightly higher in regions in which TRICARE had been implemented than in the other regions. In the 1994-95 survey, retirees and their family members in TRICARE regions reported higher satisfaction than their counterparts in the other regions, but in 1996 the two results were about the same, as shown in figure 2. While overall satisfaction levels were fairly high, satisfaction with certain aspects of military health care was somewhat lower, according to the 1996 annual survey (see fig. 3). DOD survey officials told us it was too soon to use these annual survey results to assess TRICARE because the program is new and not yet implemented nationwide. 
Also, they said two surveys constitute an insufficient basis from which to identify trends, and several more annual surveys are needed of the fully implemented program before the results can be used as an overall system performance measure. Nonetheless, the lead official for DOD’s survey efforts told us of uses already being made of the annual survey’s results. For example, the 1994-95 results showed that beneficiaries were more satisfied with civilian care than with military care, which led Health Affairs and the service Surgeon General offices to design a survey targeting MTF outpatients’ perceptions of the care they received. (This survey will be discussed further below.) Also, in implementing its new Enrollment Based Capitation financing approach, DOD is using the annual survey’s health status measures and results to adjust the various MTF enrollee populations for their projected health care needs. DOD is risk-adjusting the enrollee populations on the basis of such demographic factors as age, sex, beneficiary category, and military service, which correlate with differing health care service need levels. Health Affairs also conducts a monthly MTF survey of patients’ perceptions of outpatient visits. The survey provides detailed information on specific visits to individual clinics at all MTFs in the 50 states. Health Affairs officials told us that because the 1994-95 annual survey results showed that beneficiaries were more satisfied with civilian care than with military care, this survey was designed to more closely examine MTF care. The MTF outpatient survey was also based on survey questions developed, tested, and used by the private sector, which has facilitated comparisons of MTF and civilian care satisfaction levels. Health Affairs provides detailed survey results reports to MTFs and summary reports to lead agents and service commands. DOD provided us with April, May, and June 1997 MTF outpatient survey results for each service and region. 
The results measure satisfaction on a 5-point scale in three areas: (1) access to care for a single visit, (2) quality of care during that visit, and (3) staff interaction with the survey respondent during the visit. The reports also include private sector survey results that show how civilian HMO users rate their satisfaction in the same areas. Figure 4 shows results for the entire MHS, each service, and the civilian managed care industry. Satisfaction among the three services’ MTFs is similar, and averages for all three are somewhat higher than national civilian benchmarks. Results by region are also consistent across the MHS, and all of the region averages exceed civilian HMO benchmarks. See appendix II for each region’s results and comparative civilian HMO scores in corresponding geographic areas. In 1996, to help direct TRICARE marketing and beneficiary education efforts, Health Affairs’ TRICARE Marketing Office conducted a telephone survey of beneficiaries enrolled in TRICARE Prime. The survey addressed enrollees’ understanding of the Prime program, satisfaction with program aspects, perceptions about access and quality changes after Prime’s implementation, and intentions regarding reenrolling in TRICARE Prime. Health Affairs compared the survey results with civilian managed care programs’ satisfaction levels. DOD’s survey report describes high overall satisfaction levels, with about two-thirds of Prime enrollees either satisfied or very satisfied with TRICARE, and slightly higher ratings from non-active duty TRICARE Prime enrollees. Only 7 percent of respondents said they were unlikely to reenroll in TRICARE Prime, while 88 percent said they were likely or very likely to do so. DOD reported, however, that overall satisfaction levels with TRICARE Prime trailed the civilian sector average by about 16 percentage points. 
The report notes, though, that the results may be skewed by response format differences between DOD’s questionnaire and the civilian instrument. Table 1 shows the survey results for overall satisfaction. DOD does not currently conduct systemwide surveys targeted to MTF inpatient or network user satisfaction. However, Health Affairs officials told us that a working group of representatives from the Army, Navy, and Air Force Surgeons’ General Offices is planning to develop a means of surveying beneficiaries about their MTF inpatient care. The group has begun by reviewing inpatient surveys currently used by MTFs and the civilian health care industry. Also, DOD recently eliminated a contract requirement that each managed care support contractor conduct its own annual beneficiary survey. Health Affairs officials told us they concluded that contractor-conducted surveys might lack the appearance of independence and were somewhat at odds with Health Affairs’ interest in standardizing surveys and reducing the survey burden on beneficiaries. Officials of Health Affairs, the services, and managed care support contractors are now discussing how best to obtain beneficiary views on network care using such a targeted survey. DOD documents, analyzes for trends, and reports on TRICARE beneficiaries’ complaints and compliments in differing ways throughout the MHS. All MHS levels, from DOD headquarters offices to TSO to MTFs and managed care support contractors, receive beneficiary-initiated feedback through such means as phone calls, letters, and personal visits. Like the private sector, DOD officials told us they use this information to identify problems and gauge performance of various MTF services. We obtained many examples of beneficiary-initiated complaints and other comments covering a host of issues. However, because beneficiary comments were not consistently documented, the examples we obtained cannot be viewed as representative of all TRICARE beneficiary-initiated feedback. 
Nevertheless, the examples do illustrate the types of issues military health care beneficiaries choose to raise. Detailed descriptions of feedback-related processes are provided in appendix III. Because neither DOD nor the services require MTFs to follow a standard procedure for tracking and reporting beneficiary comments, MTFs are free to establish their own feedback systems. As a result, the MTFs we visited have differing ways of obtaining, documenting, and analyzing beneficiary-initiated feedback. The MTFs also have different ways of reporting their feedback to MTF management and others within the facility. We also found, with few exceptions, that most reporting of feedback to entities outside MTFs is not done systematically. Lead agents also capture information on beneficiary-initiated concerns in varying ways. Each of the three lead agents we visited has systems through which its MTFs and the regional managed care support contractor report certain TRICARE-related issues to the lead agent, including issues emanating from beneficiary comments. All three lead agents also track in some way the beneficiary feedback-related issues that they learn of. Lead agent officials told us that they consider the complaints they receive to be a valuable source of information about possible problems in their regions. None of the offices provide formal feedback-related issues reports to Health Affairs or the Surgeons General, although all have a variety of informal ways of reporting issues to them. Health Affairs, the Surgeons General, and TSO also receive beneficiary feedback and have their own procedures for handling it. These offices maintain tracking systems for the beneficiary feedback they receive, but these systems primarily track who is responsible for handling the case and response timeliness, not the specific categories the beneficiary comments fall into.
Staff in these offices told us that they use the complaints they receive as indicators of possible TRICARE problems. Representatives of both of the managed care support contractors we contacted told us that they extensively track complaints and use them to identify system problems, and that their TRICARE tracking systems mirror the systems they use for their commercial health plans. While the managed care support contracts require periodic reports that include beneficiary feedback volume and response timeliness, DOD does not require the contractors to report their complaint tracking results to the government. Yet, managed care support contractor officials told us that they consider systematically tracked beneficiary feedback and rigorous analysis of the root causes of members’ complaints to be hallmarks of a customer service-oriented managed care plan. Officials at the various MHS organizations we visited told us how their complaint tracking procedures have led to problem identification and elimination. For example, one MTF’s deputy commander told us that he saw an increase in “staff attitude” complaints from patients at his facility. In response, he required all facility staff to take customer service training. In another case, lead agent officials told us how their tracking of complaints indicated that TRICARE Prime enrollees were being required to drive more than an hour for an MTF’s specialty care, though this exceeded the TRICARE requirement. The lead agent staff found that driving time to the MTF routinely exceeded 1 hour because of heavy traffic in parts of the MTF’s catchment area. As a result, the staff arranged for beneficiaries in those areas to go instead to closer network providers. Further, one contractor learned through complaints that civilian providers were referring beneficiaries to collection agencies because of unpaid bills. 
The contractor identified a number of problems caused by beneficiary and provider mistakes, which led to improved beneficiary and provider education efforts. This investigation also identified a DOD policy that was causing claims to be inappropriately denied. When a beneficiary needs medical care that cannot be provided at an MTF, the facility can complete a “nonavailability statement” certifying that the facility does not have the required resources to provide the care needed and authorizing the beneficiary to receive the care from a civilian provider. The contractor’s investigation found that when the computer record erroneously showed that a nonavailability statement had not been issued, DOD’s policy was to not accept a paper copy of such a statement. The contractor called this problem to the attention of DOD officials, and the policy was changed. DOD officials at MTFs and other offices, contractor officials, and a beneficiary organization’s representatives provided us with more than 2,600 examples of military health care beneficiary complaints and compliments. The comments covered a wide range of areas, including health care and administrative service quality, cost issues, and access to care. Because of the sample comments’ many forms, it is not possible to generalize across the system or to draw conclusions about comment frequency, the full range of categories that complaints or other comments may fall into, the number of comments in any particular category, how types of comments vary over time, or how complaints were resolved. Nonetheless, the following sample comments illustrate the types of concerns and favorable comments that DOD health care beneficiaries have expressed. Examples of complaints about MTF quality of care or services included the following: An MTF doctor unfamiliar with how to prescribe a drug gave a patient incorrect instructions on how often to take the medicine. 
The patient’s mother caught the mistake and confirmed it by calling the MTF pharmacy. The daughter of a retired military member who was admitted to an MTF for cancer treatment complained that her father was not well cared for. In particular, she complained that his clothes were soiled but no one had cleaned him. Upon inquiry, MTF staff told family members where they could get supplies to clean him themselves. The daughter also complained that she had found his intravenous bag empty and blood in the tubing, and that the staff had acted as if this were “no big deal.” Sample complaints about the quality of care or services provided by managed care support contractors follow: A patient with a previously abnormal mammogram was told by her surgeon that a 6-month follow-up mammogram was necessary. She complained that although she discussed the need for follow-up with her network primary care manager (PCM), the PCM delayed making a referral. The patient later switched PCMs and got the referral, although the test was set for 10 rather than the prescribed 6 months after the first test. A mother complained that the scale her network pediatrician used to weigh her newborn daughter was faulty. This led to an inadequate assessment of the infant’s weight and, subsequently, the need to hospitalize the child for severe dehydration. Complaints about MTF access to care included the following: A patient drove for 3 hours to a 1:00 p.m. MTF appointment for a diagnostic procedure. Upon arriving, he was told his appointment was scheduled for 3:00 p.m. but he would probably not be seen until 4:00 p.m. The patient had not eaten anything for 36 hours—as the procedure required—and now had to wait another 3 hours. He said that his requests for an explanation were not met and that the clinic staff were not attentive to his complaint.
A managed care support contractor’s letter to a lead agent described two incidents in which patients complained to the contractor about inappropriate MTF emergency care delays. In the first case, a woman with a serious medical problem called an MTF emergency room but was told to call the managed care support contractor’s health care information line. The information line nurse, however, told her to go immediately to the emergency room. In the second case, an active duty member who had gone directly to an MTF emergency room was turned away because he had not first called the health care information line. When he called, the nurse said he should return to the emergency room for treatment. Following are complaints about access to care in contractors’ networks: When enrolling in TRICARE Prime, a beneficiary chose a gynecologist as her PCM only to find that the doctor, misidentified in the network listing, was a pediatrician. She reported that, as a result, she spent an entire day trying to arrange an appointment with the wrong doctor. After several phone calls and letters, she received a new TRICARE card that still listed the pediatrician as her PCM. A beneficiary tried in vain to find a TRICARE network provider in her area to treat her swollen knee. On her first call to the contractor’s toll-free number, she was given four doctors’ numbers; two of the numbers had been disconnected, one belonged to a doctor not accepting TRICARE Standard patients, and one was for a hospital emergency room. The patient tried the toll-free number again and got two more numbers, but neither doctor was working that day (Friday). On her third try, she was given six more doctors’ names, but only two came with phone numbers. She was told to look up the other four in the phone book, but none were listed. Of the two phone numbers she received, one was invalid and the other proved to be that of a pediatrician. 
Thus, after 2-1/2 hours of unsuccessful attempts to find a doctor, she called an MTF she previously had not been able to get through to and was given an appointment that same day. Examples of complaints related to TRICARE costs and other financial issues follow: A TRICARE Prime enrollee referred by her MTF to a civilian specialist complained that the doctor told her the reimbursement from the managed care support contractor was “not sufficient to perform the surgery [or cover] the cost of supplies.” A TRICARE Prime enrollee referred by his civilian PCM to a civilian specialist began to receive bills for the care. The managed care support contractor told the enrollee that the civilian doctor was using an incorrect identification number and that the doctor should resubmit the claim. The enrollee then received a second bill and was told that the visit was being treated as a point-of-service claim (which would require the patient to pay a large part of the bill), even though his PCM had properly referred him. He was later told to disregard the second bill. Complaints concerning both access to care and quality of administrative services included the following: A father was to be contacted within 5 days by an MTF radiology clinic with an appointment time for his child’s procedure. When he was not called, he went to the clinic and was told that “things happen.” He found this response and the lack of an apology to be “rude and uncaring.” Subsequently, when he and the child arrived for the appointment, it had to be cancelled because the child had eaten too recently, although they had not been told of the need to fast before the procedure. We also obtained the following examples of favorable comments about both the direct care system and contractor functions: One MTF kept a log of all patients’ comments. 
The list included compliments about the friendliness, compassion, professionalism, and technical skill of specific staff members, as well as general compliments about, for example, the speed of access to care or the clinic staff in general. A beneficiary had 6 months of claims processing problems that she described as “a nightmare.” She wrote to the managed care support contractor thanking a specific contractor staff member for resolving her problem. In a letter to a managed care support contractor, an Air Force chief master sergeant complimented staff at the local contractor office. He wrote: “Their enthusiasm and sincerity is definitely the right attitude needed to administer a program that has had the military ‘rank and file’ feeling a little uncomfortable.” DOD’s efforts to track beneficiary feedback resemble those of the private sector, but opportunities for improvement exist. Private health care managers make extensive use of customer feedback from surveys and rigorous customer complaint tracking and reporting. While DOD’s current survey efforts and emphasis on addressing beneficiary complaints at the local level are not unlike private practices, additional targeted surveys and more consistent complaint tracking and reporting would better inform DOD managers about beneficiaries’ experiences and more closely reflect private sector approaches to managing such information. Enhancing its current feedback efforts would also help DOD achieve its goal of bringing about a more outcomes-oriented TRICARE health system. Yet, given that the MHS differs in key ways from private sector health care systems, DOD would need to consider several basic cost and implementation issues to improve its beneficiary feedback. Customer surveys are a common private sector health care feature. Health plan officials told us they survey plan members to gauge overall satisfaction and conduct targeted user surveys to measure performance in particular areas. 
One large managed care plan conducts an overall member survey, a survey of members who have recently received health care services under the plan, and surveys targeted toward patients’ perceptions of their doctors. Health care providers also use customer satisfaction surveys. Officials at a hospital system that we contacted told us that every patient is asked to fill out a survey after receiving care in one of the system’s facilities. Survey results are also reported extensively throughout the private organizations we contacted. Officials told us that the results are used to identify problem areas, measure overall performance, and compare the performance of different parts of the organization. Officials at one managed care organization told us they report survey results to both senior managers and staff throughout the organization. These officials also provide special reports on results in particular areas when departments request them. The hospital system we contacted reports all patient comments, including patient questionnaire responses, to the head of hospital operations on a daily basis, while managers across the organization receive quarterly reports. In another case, several employers that came together to purchase health care as a group identified extensive beneficiary surveying as a key measure of their system’s performance. For example, the group reports customer satisfaction information from surveys to inform beneficiaries when they are choosing providers. The group also contracts for targeted surveys of particular covered populations, including surveys focused on the health status of children and seniors. Surveys are also central to the accreditation of managed care organizations. Both NCQA and JCAHO have accreditation programs that require managed care plans that are seeking accreditation to conduct member surveys. 
Health care purchasers, regulators, and consumers use the results of the accreditation process to assess all aspects of a plan’s delivery systems: physicians, hospitals, other providers, and administrative services. A survey is also a requirement of the latest version of the Health Plan Employer Data and Information Set (HEDIS). HEDIS is a standardized set of measures of health care plans’ performance. HEDIS is designed to provide purchasers and consumers with the information they need to reliably compare managed care plans’ performance. To become part of the HEDIS database, health plans must use NCQA’s Member Satisfaction Survey and be prepared to report the full set of survey results. NCQA makes consolidated results available to consumers for use in selecting among health plans. Private health care managers also extensively track customer complaints and use them to make system improvements. A large HMO’s member services director, for example, told us that members’ complaints and other comments, whether received in person, over the phone, or in writing, are tracked by computer. The purpose is to resolve members’ problems, identify root causes, and eliminate system flaws. Patient feedback tracking system reports are generated monthly and sent to staff throughout the system, including the Quality Assurance/Quality Improvement Committee. The hospital system we examined also uses a computer system to track all complaints, whether received in person, over the phone, in writing, or in response to a patient satisfaction survey. Complaints are sent to the senior staff member of the hospital area that the complaint concerns. All patients’ complaints are reported daily to the system’s hospital operations’ vice president, and every quarter to system managers. A senior official told us that complaints are useful for identifying both one-time and systemwide problems.
He explained, for example, how patients had complained about giving the same information to different people during the admitting process, which led to the elimination of this redundancy. Representatives of one California hospital reported that analyzing patient complaints has become the hospital’s least expensive, most accurate method for understanding patients’ perspectives on what needs improvement at the hospital. When facility staff realized that individual complaints had been addressed in the past, but with little documentation or tracking, they designed a comprehensive complaint process that included procedures for capturing all complaints, responding to complaints quickly, measuring complaint severity, analyzing trends to uncover root causes of customer dissatisfaction, and identifying and implementing system changes to prevent future recurrences. The officials also reported that questionnaire surveys are not appropriate for capturing dissatisfied patients’ spontaneous complaints. Employers who purchase managed care coverage for their employees also see the value of tracking customer complaints. For example, the HMO Performance Standards set by one large employer state that its selected plan “shall track and report to [the employer] the number and types of plan aggregate written and verbal complaints received by the HMO.” The standards require an annual report that lists complaints by categories “including but not limited to access, clinical services, providers, pharmacy, mental health/substance abuse, claims, and reception services.” To obtain accreditation by NCQA and JCAHO as a managed care organization, managed care plans must obtain and use member feedback. Plans are required to track, report, and use customer complaints to identify and address one-time and systemic problems.
NCQA standards require that customer feedback analysis include aggregating results; noting trends in results over time; and identifying reasons for the results, such as the causes of dissatisfaction in particular areas. The standards also discuss how managed care organizations should use feedback analysis results to prioritize improvement areas on the basis of their significance to members. NCQA officials told us that no one system is prescribed for managing member complaints. Rather, NCQA surveyors look at a sample of complaints, determine if a system for handling them exists, and decide if the plan is following its own system. Similarly, JCAHO network accreditation standards require health plans seeking accreditation to have customer complaint receipt and management systems. The extensive use of customer feedback is not unique to private sector health care; it is found throughout the private sector. A report of the Vice President’s National Performance Review describes extensive customer complaint and survey use by “best-in-business” companies and the applicability of these practices to government. It also refers to Executive Order 12862, which directs federal agencies to perform customer surveys, make complaint systems easily accessible, provide the means to address customer complaints, and measure customer service against the best-in-business. The report also describes customer feedback strategies used by best-in-business companies, including facilitating customer complaints through the extensive use of centralized customer help lines, 1-800 numbers, point-of-service complaint or comment cards, and easy-to-use customer appeal processes; encouraging quick responses to customer complaints; using computers to centrally track complaints at the headquarters level; reporting tracking results widely, including to top management; and using the results to identify dissatisfaction trends and root causes to target core processes that need improvement.
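The tracking, aggregating, and trend-spotting steps described above can be illustrated with a minimal sketch. The complaint records, categories, and dates below are hypothetical and are not drawn from any DOD, NCQA, or contractor system; the sketch only shows the kind of categorize-then-aggregate-then-trend analysis the standards call for.

```python
from collections import Counter
from datetime import date

# Hypothetical complaint log: (date received, category).
# Categories echo the kinds listed in the employer's standards
# (access, clinical services, pharmacy, claims, and so on).
complaints = [
    (date(1997, 4, 3), "access"),
    (date(1997, 4, 9), "claims"),
    (date(1997, 5, 2), "access"),
    (date(1997, 5, 15), "access"),
    (date(1997, 5, 20), "claims"),
    (date(1997, 6, 1), "access"),
    (date(1997, 6, 7), "access"),
    (date(1997, 6, 11), "access"),
    (date(1997, 6, 11), "pharmacy"),
]

def monthly_counts(records):
    """Aggregate complaint volume by (year, month) and category."""
    counts = Counter()
    for d, category in records:
        counts[(d.year, d.month), category] += 1
    return counts

def rising_categories(records):
    """Flag categories whose volume rose in every month on record."""
    counts = monthly_counts(records)
    months = sorted({month for month, _ in counts})
    categories = {category for _, category in counts}
    rising = []
    for category in categories:
        series = [counts.get((month, category), 0) for month in months]
        if all(a < b for a, b in zip(series, series[1:])):
            rising.append(category)
    return sorted(rising)

print(rising_categories(complaints))  # -> ['access']
```

A rising category would then be the starting point for the root-cause analysis the report describes, such as tracing "access" complaints back to a faulty provider directory or an overloaded appointment line.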
Some DOD efforts to track and use beneficiary feedback compare favorably with private sector efforts. For example, DOD’s beneficiary surveys are similar to private health plan and hospital surveys. Also, MTFs and other DOD offices use complaints to help identify problems, as is done in the private sector. But, in our view, DOD could make its current efforts more complete and systematic—and thus more effective. DOD’s current beneficiary surveys provide a view of beneficiaries’ satisfaction with their care generally and their MTF outpatient care specifically. However, adding targeted surveys of beneficiaries’ satisfaction with MTF inpatient care and TRICARE civilian network care would enhance the usefulness of DOD’s survey data. By doing so, DOD decisionmakers would have a more complete picture of TRICARE’s customer satisfaction. DOD could also obtain more detailed information about beneficiary- initiated complaints and other comments if it standardized the way it tracks and reports this feedback across the system. Currently, no systemwide approach to tracking and reporting exists. As a result, a serious problem that is surfaced by a complaint in one region or at one MTF, for example, can remain unnoticed in other locales if no one there complains. Moreover, with a consistent approach to tracking and reporting feedback, MHS and contractor personnel could put the complaints they receive into a systemwide perspective, even if they were tracking complaints locally. Further, with standardized tracking and reporting, personnel throughout the MHS could identify trends beyond those at their own location. They would also know the overall complaint volume by type and would probably find that the problems they were seeing had already surfaced and been addressed elsewhere, potentially saving time and resources otherwise spent on reinventing the solutions. 
With regular access to systematically tracked and reported complaint data, senior DOD officials could analyze complaint activity across the system, spot trends, and identify possible problems using data currently unavailable to them. Consistent complaint data would also equip senior officials with another tool for evaluating individual MTF performance and making cross-system comparisons. Standardizing feedback tracking and reporting would also enable DOD to better judge TRICARE’s contractor performance. DOD officials are now working to make future TRICARE contracts less prescriptive in nature and more outcomes based. Past contracts have offered bidding contractors little or no opportunity to use their best commercial practices to introduce innovation or reduce costs to accomplish DOD’s goals. For the new contracts, DOD proposes to set forth its overall objectives, such as maintaining customer satisfaction, and provide a mandatory requirements list. Deciding on an approach to satisfy the objectives and other requirements will be left to the bidders. In addition, DOD currently plans to use its annual survey and monthly MTF outpatient survey results as program success measures. By adding the other two surveys, DOD decisionmakers could focus more closely on MTF inpatient and civilian network performance and use the level of consequent beneficiary satisfaction as a key performance indicator. DOD officials could be confident that beneficiary complaints were being systematically categorized and reported so that such data could be used as a measure of the performance of managed care support contractors, MTFs, and TRICARE overall. DOD’s multifaceted MHS role, DOD’s relationship with its managed care support contractors, and the unique chains of authority involved in the roles of the three services in delivering military health care differ from the structure of private sector health care. 
These differences mean that DOD’s feedback tracking and reporting is more involved than the private sector’s and that civilian standards for this activity are not necessarily easily applicable to the MHS, though the principles driving them apply to all managed care environments, including TRICARE. Typically, private employers purchase health care coverage for their employees (or individuals purchase it directly) from health plans, which contract with doctors and hospitals to provide covered beneficiaries’ care. DOD operates differently. As the beneficiaries’ employer, it both administers TRICARE and directly provides much of the MHS’ health care through the hundreds of hospitals and outpatient clinics that it operates. Because of DOD’s merged responsibilities, which are usually held by separate entities in the private sector, the checks and balances that exist in civilian business relationships do not exist. For example, a civilian employer that receives numerous complaints about a hospital in the health plan’s network can insist that the plan either drop the hospital or lose the employer’s business. But, should an MTF receive such complaints, DOD’s options would be more limited. Differences among civilian health care purchasers, plans, and providers are, for the most part, clear cut. In DOD, however, TRICARE is a single health plan operated by two separate entities—the direct care system (MTFs) and the managed care support contractors—each responsible for managing program parts and providing, or arranging for, health care services. Also, the contractors’ role overlaps that of the direct care system, with some patients getting their care directly from DOD, others using the contractor networks, others using non-network civilian providers, and still others using some combination of the sources. Both DOD’s hospitals and DOD’s contractors send patients to each other for some care, but neither has real financial or other authority to control what the other does. 
Because of the shared care administration and delivery responsibilities, beneficiary-reported problems can appear to each party to be the other’s responsibility. The role of the three services also distinguishes military from civilian health care. While Health Affairs is responsible for running TRICARE, the MTFs are under the authority of the Army, Navy, and Air Force Surgeons General. And the regional lead agents, which also respond to direction from Health Affairs, cannot direct the activity of the MTFs in their regions but, instead, must rely on the MTFs’ cooperation to implement such new programs as regionwide complaint tracking and reporting. Moreover, neither Health Affairs nor the services can make changes in areas beyond their authority, including changes needed to address problems that surface through beneficiary feedback. Currently, NCQA requires that a managed care plan seeking accreditation have a single entity that is responsible for the entire plan. An NCQA official told us that because TRICARE uses various sources of care and various entities are responsible for seeing that care is properly delivered, TRICARE has no single accountable entity to examine. Instead, multiple accountability lines exist and, with them, the potential for beneficiary-raised issues to go unaddressed by any responsible organization. Notwithstanding the beneficiary feedback implications, the accountable entity issue could take on greater importance should DOD seek managed care plan accreditation for TRICARE in the future, as DOD officials have told us it may. Within its health care system’s unique context, DOD would need to explore several basic issues to improve beneficiary feedback. The cost of adding surveys and developing a single approach to handling beneficiary complaints would need to be weighed against the benefits sought. 
Also, DOD would need to decide how reporting the results of complaint tracking should work to ensure that information flowed to the appropriate organization and levels. Regarding a single complaint tracking system, DOD, private sector, and managed care support contractor representatives told us that care should be taken to ensure that such a system not become overly cumbersome or bureaucratic. Managed care support contractor representatives told us such a system should be collaboratively developed with them, flexible and adaptable to decisionmakers’ changing needs, and not overly prescriptive. They also pointed out that contract-prescribed items are difficult to change because of the time-consuming contract change order process and asked, therefore, that their contracts not prescribe how they should develop such a system. Also, they told us that such a system could be composed of tracking systems that were regional in scope and designed to encourage strong DOD/contractor partnerships. DOD would also, in our view, need to weigh potential training and other costs of adapting existing MTF and other DOD office beneficiary feedback recording systems. The costs of changing local systems would probably vary from place to place. Locations already capturing a great deal of beneficiary-initiated feedback data would probably find a standardized approach comparatively easier to adopt than those beginning the process for the first time. Also, DOD would need to consider how to report issues to address the MHS’ multiple lines of authority. Because the services control their respective MTFs, their chains of command would be prospective report recipients. In addition, reporting protocols could include the contractors and DOD contracting officers residing at lead agents and at TSO. Finally, because Health Affairs has overall TRICARE responsibility, it would also logically receive summary feedback, because such information is designed to point up systemwide problems. 
DOD is spending a great deal of money to improve its $15 billion-per-year health care program by implementing TRICARE. An investment of this magnitude heightens the importance of current, accurate, and complete information about how beneficiaries are reacting to and coping with the change. The beneficiary feedback currently available to DOD managers provides useful information about aspects of TRICARE’s performance and possible problem areas. If DOD were to make its current survey efforts more complete and to consistently record and aggregate complaint information across the system, DOD managers would have more valuable information with which to measure TRICARE’s success and identify and eliminate recurring, systemic problems. Enhanced feedback would also help DOD make the outcomes-based assessments it seeks for future TRICARE contracts. DOD could improve its beneficiary feedback information by conducting a civilian network care survey comparable to its monthly MTF outpatient visit survey, a possibility that is now under discussion. Also, while DOD does not currently have an MTF inpatient care survey, we support DOD’s plans to develop and conduct such a survey. DOD could also benefit by working with the TRICARE contractors to begin restructuring its complaint tracking and reporting systems to more closely parallel private sector managed care practices by consistently recording and aggregating complaint data across the DOD health care system. 
To position DOD to obtain and make better use of beneficiary feedback, both now and in the future, the Secretary of Defense should direct the Assistant Secretary of Defense (Health Affairs) to

- follow through in weighing the costs and benefits associated with civilian network and MTF inpatient care surveys that are comparable to DOD’s current monthly MTF outpatient survey and, as appropriate, implement these surveys and
- collaborate with the TRICARE contractors to identify options for, and weigh the costs and benefits of, achieving consistency in recording beneficiary complaints, analyzing trends, and reporting beneficiary complaints and, as appropriate, implement the most practical, financially prudent approach.

In its written comments on a draft of this report, DOD agreed with our recommendations regarding MTF inpatient and civilian network care surveys and a consistent beneficiary complaint tracking and reporting process. DOD added that the Army, Navy, and Air Force are now in various stages of reviewing their TRICARE customer relations approaches and assessing their beneficiary complaint processes. DOD also suggested that beneficiary complaint tracking is currently done at the lead agent level. However, at the lead agents visited, we found that beneficiary feedback systems varied markedly, as did the amounts and types of complaint data routinely captured. Also, in line with our suggestion that Health Affairs would be a logical recipient of beneficiary feedback data designed to point up systemwide problems, DOD stated it is exploring a centralized process for tracking beneficiary complaints at the Health Affairs level. DOD also suggested technical report changes, which we incorporated as appropriate. The full text of DOD’s comments is included as appendix IV.

We are sending copies of this report to the Secretary of Defense and will make copies available to others upon request.
Please contact me at (202) 512-7101 or Dan Brier, Assistant Director, at (202) 512-6803 if you or your staff have any questions concerning this report. Other GAO staff who made contributions to this report are David Lewis, Evaluator-in-Charge; Linda Lootens, Senior Evaluator; and Paul Wright, Evaluator.

To identify Department of Defense (DOD) efforts to solicit beneficiary feedback through surveys, we interviewed officials of Health Affairs. We also obtained and reviewed documentation, including survey instruments, relating to Health Affairs surveys that included elements of TRICARE beneficiary satisfaction, as well as documents related to other Health Affairs surveys. Through discussion with Health Affairs officials, we determined that three DOD surveys fell within the scope of this review: the Health Care Survey of DOD Beneficiaries (1994-95 and 1996) (the annual survey), the Customer Satisfaction Survey (April/May/June 1997) (the Military Treatment Facility [MTF] outpatient survey), and the TRICARE Prime Enrollee Satisfaction Study (1996). We obtained DOD reports of these three surveys’ results but did not independently assess the survey instruments’ statistical validity or reliability. In this regard, the DOD official responsible for the Health Affairs survey efforts told us that DOD uses experienced contractors to design and conduct its surveys and that survey questions are based on standard questions extensively pretested for validity and reliability by the private sector and widely used in private sector surveys. Further, he believes DOD’s rigorous methods for sampling survey populations and weighting survey responses on the basis of numerous proven variables result in statistically valid survey data. DOD survey yield rates are similar to the average 50-percent yield rate for private sector surveys.
The annual survey yield rate has been about 60 to 65 percent, and the MTF outpatient survey yield rate has been about 45 percent; both rates have been increasing over time.

We interviewed and obtained documents from DOD officials and contractor representatives across the Military Health System (MHS) regarding policies and procedures for documenting, determining trends in, and reporting beneficiary-initiated complaints and compliments. At the DOD headquarters level, we met with Health Affairs officials to discuss tracking beneficiary feedback within Health Affairs. We also reviewed TRICARE Support Office (TSO) requirements for how managed care support contractors are to track and report feedback from beneficiaries and interviewed TSO officials about how they use the beneficiary comments that they receive. In addition, we interviewed representatives of the Army, Navy, and Air Force Surgeon General and Inspector General offices about how their organizations receive and handle beneficiary feedback. We also discussed with all of these officials the means by which they exchange information on feedback-related issues with other MHS locations.

The locations we visited included

- Wilford Hall Medical Center, Lackland Air Force Base, Texas;
- 12th Medical Group Clinic, Randolph Air Force Base, Texas; and
- 61st Medical Squadron Clinic, Los Angeles Air Force Base, California.

We interviewed lead agent and MTF officials at these locations about how they track and report beneficiary comments and obtained documents related to these feedback tracking processes, including comment database formats and summary reports, comment tracking log sheets, complaint/comment forms, and procedures governing beneficiary feedback tracking and reporting. We also interviewed representatives of two managed care support contractors—Foundation Health Federal Services and Humana Military Healthcare Services—in their headquarters and regional offices and at local contractor offices located in or near the MTFs we visited.
We discussed contractors’ feedback tracking and reporting processes, both as they fulfilled DOD requirements and as they met the contractors’ own internal purposes. We also obtained documentation of the contractors’ beneficiary tracking and reporting systems. Although DOD’s contracts require the managed care support contractors to have mechanisms in place for beneficiaries to appeal managed care decisions, we did not examine the appeals process as part of this review. We collected over 2,600 examples of beneficiary-initiated complaints and compliments from lead agents, MTFs, and managed care support contractor officials in the three TRICARE regions we visited as well as from the National Military Family Association, a beneficiary group. For this report, we judgmentally selected example comments to identify the types of issues that beneficiaries raised. However, because of the variability of DOD’s recording of beneficiary comments, we could not determine the range, magnitude, or frequency of beneficiary comments, and we did not review the validity of complaints or how complaints were resolved by the military or contractor organizations that received them. Because the methods by which beneficiary-initiated comments were documented varied, the set of example complaints and compliments we obtained is not representative of beneficiary comments from either the locations we visited or the MHS as a whole. In some cases, the documentation we reviewed provided only what the beneficiary said; in other cases, particularly in the case of complaints, the documentation also included information about how the complaint was handled. In other cases, the documentation consisted only of brief database entries made by staff of the organization that handled the complaint. We were also told that some complaints and compliments were not recorded in any way. We did not assess the validity of the beneficiary concerns. 
However, we noted that in some cases the complaint files included information indicating that the MTF found the complaint to be invalid. For example, a patient who wanted to see a specialist not in the contractor’s network disenrolled from TRICARE Prime in order to avoid paying the substantial cost required of TRICARE Prime enrollees for out-of-network care. But the patient received care from the specialist before the effective date of disenrollment, so the patient was billed the high fee. The patient complained about the bill, but the documentation indicated that the mistake was the patient’s, not the MTF’s or the contractor’s. Another patient complained about being denied care when she could not get an ultrasound test early in her pregnancy. However, her doctor told the MTF staff researching the complaint that the test she wanted was not medically necessary. Although we did not review whether or how DOD resolved the beneficiary concerns in the example complaints we obtained, we noted that in some cases the available complaint documentation explained DOD or contractor efforts to research and resolve the complaints. There were cases, for example, in which documentation indicated that MTF or contractor staff called the appointment telephone line to test the quality of the service it provided after a beneficiary complained about being left on hold or being given an appointment date weeks or months in the future. According to the case files, the appointment line employees were typically able to set up acceptable appointments for the beneficiaries immediately. Documentation also showed that complaints about inattentive staff in MTF inpatient settings apparently led to special training on the importance of being responsive to patient requests. 
To compare DOD approaches to beneficiary feedback with those of the private sector, we interviewed representatives from two health care industry accreditation bodies—the Joint Commission on the Accreditation of Healthcare Organizations (JCAHO) and the National Committee for Quality Assurance (NCQA)—and obtained and reviewed copies of their accreditation standards regarding customer surveys and handling customer comments. We also reviewed the Vice President’s National Performance Review report describing the use of customer complaints by successful companies throughout the private sector and the applicability of such practices to government agencies. Further, we discussed customer surveys and comment tracking with representatives of two private sector health care providers—Kaiser Permanente, a large commercial health maintenance organization, and Inova Health System, a Northern Virginia hospital chain—and obtained documents describing the methods these companies use to track, categorize, and report comments from their customers. Private sector health care accreditation organizations require plans to have procedures for handling appeals of health care decisions, though we did not examine these appeals processes or compare them with those in place under TRICARE.

The Health Care Survey of DOD Beneficiaries (referred to in this report as the annual survey) has six sections: Use and source of care. This section asks beneficiaries 22 questions about annual visits, nights spent in a hospital, care sources, and insurance coverage. Familiarity with benefits. This section contains 13 questions about whether beneficiaries have a source of information for various aspects of their health care benefit. Health status. This section contains 36 questions, widely used and validated in the private sector, that measure distinct aspects of physical and emotional health. Access to care.
This section contains 25 questions that look at how easily beneficiaries enter the health care system (process measures) and whether they receive necessary care (outcome measures). Satisfaction with care. This section contains 54 questions about overall satisfaction with care received at military and civilian facilities, and satisfaction with specific aspects of the care. Demographic information. This section asks about age, education, gender, ethnicity and race, beneficiary group, and length of time in residence as well as other factors important to explaining health-related behaviors and opinions.

The annual survey was designed by a working group composed of survey experts from Health Affairs, each of the three services, and a representative from the Defense Manpower Data Center. The questions and scales used in the annual survey were developed on the basis of a review of private sector surveys that had been extensively tested for reliability and validity. The survey is mailed to a random sample of beneficiaries selected from catchment areas in the United States and overseas, as well as from noncatchment areas. The 1996 annual survey was mailed to a sample population of 156,838 adult beneficiaries eligible for MHS health care. The survey sample was composed of the following beneficiary types: active duty, active duty family members, retirees under age 65, retirees aged 65 or older, retiree family members under age 65, and retiree family members aged 65 or older. Beneficiaries were included in the sample regardless of whether they were users of military health care—either MTF care or DOD-funded civilian care. Health Affairs has conducted the annual survey three times, at about 16- to 18-month intervals. The first survey was conducted in late 1994 and early 1995. Because it was conducted just before TRICARE started, it established a baseline against which changes in beneficiaries’ ratings of their health care could be tracked following TRICARE’s implementation.
Questions on TRICARE Prime were added to the 1996 and the 1997 survey instruments to (1) gauge how beneficiaries perceive the program and (2) compare responses of beneficiaries enrolled in TRICARE Prime and those who are not. Health Affairs sends out several reports of the annual survey results. Each TRICARE region receives one report that contains that region’s results by catchment area and by beneficiary group. Health Affairs sends each regional report to the lead agent, who is then responsible for distributing the results to the MTFs in that region. According to DOD officials, it is important to get the information to the local level where local officials can use the information to make improvements. Also, Health Affairs sends to each service Surgeon General a summary-level report that includes results for each of that service’s MTFs. Health Affairs uses annual survey results as measures, along with a wide variety of other measures, in its MHS Performance Report Cards and in its Annual Quality Management Reports. The report cards, which provide MTF commanders with data on their facility’s health care delivery performance, measure five areas: access, quality, utilization, health behaviors, and health status. Annual survey results that appear in the report card include three measures of beneficiary satisfaction: access to appointments, access to system resources, and quality. According to DOD officials, the report card is one way to convert certain annual survey results to a catchment area score. Annual Quality Management Reports are assessments of quality across the system and also use the annual survey results. DOD’s summary of its 1994-95 and 1996 annual survey results is broken out by different beneficiary types. One set of results consists of responses from active duty family members and a second, retirees and their family members. 
DOD officials told us that the summary they provided us does not include active duty personnel responses because the summary’s focus was on beneficiaries with a choice in where they obtain health care services, a choice that active duty personnel do not have. The summary data that DOD provided also distinguish between regions with TRICARE and those without. Regions with TRICARE are defined as those that had had TRICARE in place for a sufficiently long period at the time of the 1996 survey. The Customer Satisfaction Survey (referred to in this report as the MTF outpatient survey) measures patient satisfaction with the effectiveness and efficiency of a recent, specified MTF outpatient visit. According to Health Affairs officials, this survey is intended to provide MTF Commanders and headquarters levels with quick, frequent, civilian-benchmarked feedback on the satisfaction of beneficiaries with their visits to MTF outpatient clinics. The survey asks about the patients’ satisfaction with their experience both in obtaining the appointment and during the appointment. According to DOD officials, this systemwide survey will replace most of the ad hoc surveys currently being done locally at MTFs. DOD officials said that a mail survey of this type is more reliable than surveys handed out to patients in the MTF clinics. DOD contracted with a health services research organization to design and conduct the MTF outpatient survey. DOD’s contractor mails out surveys each month to patients who received outpatient care at clinics that have more than 200 outpatient visits per month. Over the course of each year, the survey will be mailed to 200 patients at each of about 2,100 clinics. The actual number of surveys mailed for April 1997 appointments was 52,642. Each month, MTFs forward patient appointment data to the contractor, who prepares a random sample of names and mails questionnaires directly to the patients, 30 to 50 days after the appointment. 
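The monthly sampling process just described can be sketched in code. The 200-visit eligibility threshold and the roughly 200 surveys per clinic per year come from the text; the function name, data layout, and even monthly pacing are assumptions made for illustration, not a description of the contractor's actual procedure.

```python
import random

MIN_MONTHLY_VISITS = 200      # clinics below this volume are not surveyed (per the text)
SAMPLE_PER_CLINIC_YEAR = 200  # ~200 surveyed patients per clinic per year (per the text)

def monthly_sample(appointments_by_clinic, rng=random):
    """Draw a random monthly survey sample for each eligible clinic.

    appointments_by_clinic: {clinic_id: [patient_id, ...]} for one month,
    as forwarded by the MTFs. Returns {clinic_id: [sampled patient_id, ...]}.
    """
    per_month = SAMPLE_PER_CLINIC_YEAR // 12  # assume even monthly pacing (~16)
    sample = {}
    for clinic, patients in appointments_by_clinic.items():
        if len(patients) < MIN_MONTHLY_VISITS:
            continue  # clinic too small to be included this month
        k = min(per_month, len(patients))
        sample[clinic] = rng.sample(patients, k)  # without replacement
    return sample
```

Questionnaires would then be mailed to the sampled patients 30 to 50 days after their appointments, as the text describes.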
The questionnaire is customized to the date, doctor, and clinic of the appointment; asks 17 multiple choice questions about the visit; and allows for written comments. The contractor sends these written comments directly to the MTF Commander, without analysis by the contractor. Patients mail the completed questionnaires directly to the contractor, who produces reports of that month’s results as well as each clinic’s average results for the past 3 months. Health Affairs distributes a number of different reports of the results of the monthly outpatient surveys. The contractor reports survey results at both MTF and individual clinic levels to MTFs on a monthly basis. These reports provide a “rolling” picture of the past 3 months’ data. The clinic-level report compares each clinic with itself during the previous reporting period as well as with other clinics within the MTF, peer clinics at other MTFs, and civilian HMOs. The MTF-level report compares each MTF with itself during the previous reporting period as well as with other MTFs within the same service, MHS-wide averages, and civilian HMOs. The contractor also prepares quarterly summary—“roll-up”—reports for lead agents, Surgeons General, other service command entities, and Health Affairs within 45 to 60 days of the end of each quarter. All of these reports are standardized and one page long; they report on customer satisfaction with access, quality, and staff interaction. Figures II.1, II.2, and II.3 show each region’s results and comparison scores for civilian HMOs in the same geographic areas. April/May/June 1997 was the first 3-month period for which survey results were available. During this period, the Central, Heartland, Mid-Atlantic, Northeast, and Pacific-Alaska regions did not yet have TRICARE. 
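The "rolling" three-month reports described above amount to a moving average over each clinic's monthly scores. A minimal sketch follows; the function name and the 0-100 score scale are assumptions for illustration, not DOD's actual scoring method.

```python
def rolling_3_month(scores):
    """Average of the most recent three monthly satisfaction scores.

    scores: list of monthly mean scores, oldest first (assumed 0-100 scale).
    If fewer than three months exist, averages whatever is available.
    """
    window = scores[-3:]
    return sum(window) / len(window)

# Example: one clinic's monthly satisfaction-with-access scores
april, may, june = 78.0, 81.0, 84.0
print(rolling_3_month([april, may, june]))  # 81.0
```

Each new month the window slides forward, so the reported figure smooths month-to-month noise while still reflecting recent performance, which is what makes it useful for the clinic-versus-clinic and MTF-versus-benchmark comparisons described above.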
Figure II.1: Monthly MTF Outpatient Visit Survey Results for Satisfaction With Access Compared With Civilian HMO Benchmarks

“Satisfaction with access” focuses on individuals’ satisfaction with referral for specialty care, access to medical care, office wait time, time to return phone calls, ease of making phone appointments, and appointment wait time.

“Satisfaction with quality” focuses on individuals’ satisfaction with overall quality of care received, how well care met needs, thoroughness of treatment, how much the individual was helped, and explanations of procedures and tests.

MTF Visit Satisfaction With Staff Interaction

“Satisfaction with staff interaction” focuses on individuals’ satisfaction with personal interest in the patient, advice on ways to avoid illness/stay healthy, amount of time with doctor and staff, attention to what patients said, and friendliness and courtesy of staff.

Health Affairs’ TRICARE Marketing Office commissioned a telephone survey of TRICARE Prime enrollees who were enrolled in the program on September 30, 1996. The survey consisted of 7,728 interviews conducted between October 18 and December 8, 1996, and covered five TRICARE regions: Golden Gate, Northwest, Pacific, Southern California, and Southwest. The survey addressed a number of issues related to enrollees’ understanding of TRICARE Prime, satisfaction, and reenrollment intentions. TRICARE Prime-specific questions from this survey have been incorporated into the ongoing annual surveys.

Health Affairs also conducts other surveys to solicit beneficiary feedback on various topics unrelated to satisfaction with health care: The DOD Survey of Health Related Behaviors Among Military Personnel is carried out about every 3 years to collect worldwide data from active duty personnel on drug and alcohol abuse and other health-related behaviors.
The Health Enrollment Assessment Review, a questionnaire completed by patients as they enroll in TRICARE Prime, is used to identify high-volume care users and their chronic conditions, assess the need for preventive services, and motivate behavioral change. The MHS User Survey is conducted twice each year to collect data on the health care sources of DOD’s U.S. beneficiaries for use in developing capitation budgets. DOD has also used focus groups to obtain beneficiary feedback on TRICARE’s success. From October to December 1995, DOD hosted a series of focus groups in the Southwest and Northwest regions to test beneficiaries’ knowledge of TRICARE at the time it was introduced in these regions and, thus, the success of its beneficiary education and marketing efforts. DOD officials told us the results of these focus groups helped establish a baseline of beneficiary perceptions of and attitudes toward the program to help in designing future TRICARE marketing efforts. In November 1996, Health Affairs issued a policy designed to standardize surveys across the MHS, ensure that all survey information is generalizable, allow comparisons with civilian plans, and minimize the time and paperwork burden on beneficiaries. In instituting this policy, Health Affairs intended to avoid surveys that produce invalid results and results that cannot be compared across MHS or with those of civilian health care plan surveys. According to the policy, entities under MHS authority—MTFs, offices of service Surgeons General, and managed care support contractors—must obtain approval from Health Affairs before conducting their own surveys. According to Health Affairs officials, however, MTFs and other entities can continue to gather information from beneficiaries as long as they use open-ended questions and do not attempt to generalize the results. 
In fact, Health Affairs officials told us that a feedback or complaint system that allows people to describe their concerns in their own words is a useful tool for MTFs to use to identify particular areas of concern to beneficiaries.

Beneficiaries make complaints and give compliments directly to many offices throughout the MHS, using several different methods. Beneficiaries contact Health Affairs, TSO, the Surgeons General, Inspectors General, lead agents, and MTFs. The managed care support contractors receive such feedback in their headquarters offices, regional offices, and local contractor offices. Beneficiaries also express concerns to associations representing beneficiaries’ interests.

Beneficiaries communicate their concerns in a variety of ways. For example, beneficiaries communicate orally through phone calls and in person, as well as in writing through letters, electronic mail messages, faxes, and comment forms filled out at MTF clinics. One special category of letters received within MHS is priority correspondence—letters regarding beneficiary concerns referred from the White House, the Congress, the Secretary of Defense, or the three service Secretaries. DOD requires managed care support contractors to have a toll-free phone line for beneficiaries, and much of the feedback that the contractors receive comes in over these lines.

Officials throughout DOD told us that they consider it important that complaints be resolved at as low a level as possible. They said that people who register dissatisfaction should not be “given the runaround” in the process of trying to find someone to listen to and deal with their complaint. This emphasis is consistent with the National Performance Review report on the importance of empowering front-line employees to provide “on-the-spot, just-in-time resolution of problems.”

Each MTF we visited had procedures in place enabling beneficiaries to comment directly to MTF staff while at the facility.
Much of this feedback is in the form of oral comments made directly to MTF staff members or through comment cards or forms beneficiaries fill out. MTF officials told us that they also receive comments through phone calls, letters, and electronic mail.

The MTFs differed in their approaches to handling beneficiary comments. Some MTFs had designated personnel throughout the facility who served as patient representatives or patient advocates. These staff were tasked with receiving beneficiary comments about their own clinic or department. MTFs with patient representatives at this level also had a senior patient representative whose job was to be available to any beneficiaries with comments, whether concerning a particular facility area, the whole facility, or military health care in general. Other facilities did not have formally designated patient representatives at clinics or departments but, instead, had a single patient representative office where beneficiaries could go to make comments.

Procedures for documenting beneficiary feedback also varied among the MTFs visited. For example, some MTFs entered everything the patient representatives received into a central patient feedback database, and some also kept hard copy documentation of the comments that came in. Another MTF had a system that required oral comments to be documented in writing. Staff kept hard copies of both those comments and the ones that came in through comment cards but did not enter the comments into a database. Another MTF did little or no documentation of oral or written beneficiary feedback. The head patient representative at that facility said that he did not have enough time to both handle patient concerns and prepare documentation, so he opted to spend time with patients instead of doing paperwork.

Also, wide differences existed in how much the MTFs analyzed beneficiary feedback for trends.
For example, some used the categorized patient feedback in their central database to prepare regular feedback trend reports. They analyzed how the number of complaints per type changed over time and which hospital areas were generating more complaints. Other MTFs did little or no formal trend analysis of beneficiary comments, although staff members at these facilities told us that they relied on their experience with feedback at the facility over time to notice trends.

We also found variation in how the feedback tracking results were reported to MTF management or to others in the facility. For example, some MTFs distributed formal feedback reports on trends to senior MTF management, as well as reports about department-level feedback to supervisory staff in various areas of the facility. At another facility, however, internal reporting of patient feedback consisted of oral input from the head patient representative to a senior management committee, with no supporting documentation.

MTF patient representatives told us that these systems constitute the formal structure that is in place to receive feedback, but that other avenues exist. For example, they said that beneficiaries can speak to staff members throughout the MTF if they have concerns and that many do. People can speak with their doctor or other staff members in the various clinics, or they can go to different parts of the MTF’s administrative structure, such as the managed care office or the MTF commander’s office.

Even at MTFs with extensive feedback documentation and trend analysis systems, staff members noted that some of the feedback that comes in to staff other than patient representatives does not make it into the MTFs’ systems. For example, one officer in an MTF command section told us that he hears beneficiary complaints and handles them but does not typically report what he hears to the central MTF patient representative office that maintains a database of patient complaints.
MTF officials told us that they do not systematically report most beneficiary feedback to Health Affairs or the service Surgeons General. Officials at MTFs and other MHS offices told us that MTF staff are expected to resolve problems that arise, whether identified through beneficiary complaints or not. Health Affairs and the service Surgeons General expect to be brought in only to handle issues that the MTF cannot. While such issues do get referred to the higher levels, the officials told us that information about problems solved locally normally does not. The exception was MTFs’ regular reporting of contractor-related issues to lead agents.

The Southern California Region illustrates such reporting. MTFs in that region are part of a lead agent-led program to systematically report certain types of beneficiary comments to the lead agent. Lead agent officials told us that MTFs in the region have been asked to send the lead agent the beneficiary complaints made to the MTF concerning the managed care support contractor. For example, if a beneficiary tells the patient representative about an enrollment card problem or a problem getting contractor network care, the MTF will send a copy of the complaint to the lead agent, where it will be centrally tracked, and will also notify the managed care support contractor of the problem. The regional managed care support contractor has been asked to do the same for MTF-related complaints made to it. Lead agent officials told us that they hope to expand this project to include all regional complaints in the future.

Further, MTFs have systems in place for documenting and reporting clinical health care quality issues, some of which come to light through patient complaints. To maintain JCAHO accreditation, MTFs must have systems in place to track clinical care quality issues.
MTF officials told us that such complaints, along with other clinical quality issues identified at the facility, are documented and analyzed for trends and become the subject of detailed review by special committees as well as by MTF risk management and legal office staff.

Some beneficiary concerns come directly to the lead agents through letters or phone calls, others come through oral or written reports from regional MTF staffs and the contractors, and still others are referred to the lead agent by other offices. The three lead agents we visited had issue-tracking systems that tracked, among other things, concerns that came to light through complaints from beneficiaries.

The Southeast Region lead agent maintained a central log of complaints that came directly to the lead agent as well as complaints forwarded to the lead agent by other DOD offices (including priority correspondence complaints); complaints about MTFs forwarded by the region’s managed care support contractor; and certain complaints received by MTFs in the region. Southeast Region officials told us that beneficiary complaints about managed care support contractor functions were frequently the subject of discussion during regular telephone meetings between contracting officer’s technical representatives (COTR) at the region’s MTFs and lead agent staff. Lead agent officials in the Southeast Region also told us that they used the system to track issues to ensure they were being properly addressed and resolved, but that they did not organize the issues by category or analyze for trends over time.

In the Southern California Region, the lead agent had implemented a system specifically to track complaints. The system tracked complaints (1) received by MTFs in the region if they concerned the managed care support contractor, (2) received by the managed care support contractor if they concerned an MTF, and (3) received by the lead agent directly.
The lead agent staff tracked and analyzed for trends the complaints in this system by category of complaint. Lead agent staff told us that they want to expand the system to include more types of complaints in the future.

The Southwest Region lead agent asked the COTRs at the MTFs to perform a number of contractor oversight functions and to report the results monthly to the lead agent. Some of the issues raised in the COTRs’ reports were related to beneficiary complaints. The lead agent staff then compiled the issues raised by the various monthly COTR reports into a single letter to the region’s managed care support contractor asking for issue-by-issue responses.

Lead agent officials in the three regions we visited told us that they use the complaints that they receive to identify and proactively deal with issues before they become worse, as well as to monitor overall TRICARE performance in their region. Lead agent staff said that when their beneficiary complaint tracking indicates a possible problem, they discuss the issue with the managed care support contractor, MTF staff members, or both to help identify the cause and discuss possible solutions. Lead agent staff also said that tracking complaints helps them identify the root causes of problems in ways that surveys cannot, although surveys can, on the other hand, indicate how well DOD is fixing the problems identified through complaints.

Lead agent officials told us they did not systematically report beneficiary feedback-related issues to Health Affairs or the Surgeons General. They said, however, that a number of regularly scheduled video, telephone, and face-to-face meetings take place with Health Affairs, service Surgeons General, and contractor staff and that at these meetings some issues discussed may have emanated from beneficiary comments.
But, whether a particular issue is discussed at these meetings is generally the result of a decision made by an individual that the issue warrants the other participants’ attention.

Some issues communicated to Health Affairs, the service Surgeons General, TSO, and the service Inspectors General come directly from beneficiaries through letters and phone calls. Others are referred through other means, such as priority correspondence, which is referred from congressional and other offices. Some of these complaints are from beneficiaries who have tried to get a problem handled at a lower level, such as an MTF, but were not satisfied. Others are from beneficiaries who simultaneously send their complaint letters to as many places as possible.

Health Affairs, service Surgeons General, and TSO officials told us their organizations have their own tracking systems for beneficiary concerns that come to the attention of their respective offices. Health Affairs officials told us they enter all beneficiary feedback they receive—both directly sent and referred—into a tracking system that notes the receipt date, which staffer was assigned to handle the concern, the response due date, and a short issue description. Officials told us that the system’s purpose is to track response timeliness and not to track or establish trends in issues by category. Reports from the Health Affairs tracking system show that the system’s issue descriptions are not specific enough for tracking or identifying trends in issues by category.

Service Surgeon General office officials described similar systems for tracking the timeliness of the offices’ responses to beneficiary feedback. Also, staff from the Navy told us that they had begun to track selected beneficiary concerns by type.

TSO also tracks beneficiary issues. According to officials there, a large number of phone calls and letters come into that office and are centrally tracked in a computer database.
However, officials also said that the categorization system they use puts issues only into broad categories—such as “claims” or “policy questions”—which limits the usefulness of the system for tracking issues by type.

Beneficiaries can also register their complaints with the Army or Navy Inspector General. These Inspectors General deal mostly with misconduct allegations but, on occasion, they receive health care service-related complaints. Officials at Inspector General offices told us they track beneficiary concerns by nature of issue but report health care issues only on an ad hoc basis. One official at an Inspector General office told us that his office reports an issue to the service Surgeon General only when it appears significant and representative of a systemic issue.

DOD requires contractors to document and report statistics on the nature and number of beneficiary contacts—including, but not limited to, beneficiary complaints—as well as on the contractors’ response times to beneficiary inquiries. For example, DOD requires monthly contractor reports to TSO on all phone calls received, local contractor offices’ walk-in traffic, and how long contractors take to respond to priority correspondence items. For walk-in activity and phone calls, DOD requires the reason for the person’s visit or call, but not identification of which calls or visits involved complaints from beneficiaries. That is, a reason category called “enrollment” would include calls or visits from beneficiaries who contacted the contractor service center to enroll in TRICARE Prime as well as those who expressed a complaint about some aspect of the enrollment process.

Contractors are also required to report quality of health care issues that they handle—and their actions in response to the issues—to their lead agents. These quality of care issues include both potential quality issues and issues that the contractor determines to have already become quality of care problems.
These issues may be reported by hospital staff; identified through review of quality of care indicators, such as incidents of post-operative infections; and raised by beneficiaries through complaints.

In addition to the reports required by managed care support contracts, contractors also gather feedback-related information for their own use. One managed care support contractor’s representatives told us that the contractor categorizes all the complaints it receives, whether over the phone, through the mail, or in person. The contractor also analyzes the data to identify trends and reports the results throughout the organization, including to senior management. Another managed care support contractor’s representative told us his organization similarly tracks complaints received from beneficiaries through calls to the contractor’s toll-free telephone number, as well as complaints raised with the contractor’s field staff when they determine the complaints to be serious enough to warrant entering into the tracking system. The representatives told us that their beneficiary feedback tracking systems are similar to the systems used by their parent companies’ civilian health care operations. The contractors do not systematically report the results of their internal tracking to DOD, although issues that the contractors discover through their own systems may be discussed in ad hoc letters to DOD.
Pursuant to a congressional request, GAO reviewed: (1) whether the Department of Defense (DOD) solicits feedback from beneficiaries of its managed health care program, TRICARE, and, if so, how this is done and what the data show; (2) what other means are available to beneficiaries to provide feedback and what such beneficiary-initiated feedback could reveal about TRICARE's success; and (3) how DOD's approaches to obtaining feedback compare with the private sector's and whether opportunities exist to improve DOD's beneficiary feedback tracking and reporting.

GAO noted that: (1) DOD obtains and uses TRICARE beneficiary feedback in several ways across the military health system (MHS); (2) DOD conducts a broad annual beneficiary questionnaire survey and a monthly survey of patients' perceptions of military treatment facilities (MTF) outpatient visits--both of which are based on private-sector models--to measure levels of satisfaction with TRICARE; (3) DOD reports the survey results throughout the MHS; (4) DOD does not conduct such surveys of MTF inpatient users or civilian network care users, though DOD officials told GAO that they are now planning to develop an MTF inpatient survey; (5) as TRICARE continues to be phased in across the MHS, DOD's annual surveys are indicating fairly high levels of overall beneficiary satisfaction with the program, but lower satisfaction levels with aspects of military care; (6) DOD also tracks and reports beneficiary-initiated feedback--complaints and other comments--in ways that vary throughout the MHS; (7) a wide
range exists in how much feedback information is tracked and in how the different levels of units that compose TRICARE--and other DOD offices--do the tracking; (8) beneficiary-initiated feedback reporting throughout the MHS varies as well; (9) because of the variability of DOD's recording of these data, reliably depicting the range, magnitude, or frequency of beneficiary feedback about TRICARE is not possible; (10) private health care managers rely extensively on beneficiary feedback; (11) surveys, which provide data about whole customer populations, and customer-initiated complaints, which show where specific problems have occurred, are used together as key tools to measure plan performance and identify systemic problems; (12) while no direct private-sector parallel to MHS exists, DOD's feedback efforts are somewhat similar to the private sector's, although adopting certain private practices might improve DOD's feedback systems; (13) more reliable beneficiary feedback data would also help DOD to make customer satisfaction an outcome measure in the next round of TRICARE contracts, which DOD is trying to base more on outcomes and less on process; and (14) to improve its beneficiary feedback approaches, DOD will need to consider a number of cost-benefit issues, the varying sophistication levels of beneficiary feedback management throughout MHS, and other matters.
Meth is relatively easy and cheap to make today by individuals with little knowledge of chemistry or laboratory skills or equipment. PSE, an ingredient used in OTC and prescription cold and allergy medications, is the key substance needed to make the dextrorotatory methamphetamine (d-meth) illicitly produced in most domestic meth labs today. The difference between a PSE molecule and a d-meth molecule is a single oxygen atom. Meth cooks make d-meth by using common household products to remove this oxygen atom, as shown in figure 1.

Meth cooks have used two primary processes, known as the Nazi/Birch and Red P methods, to make d-meth. In recent years, meth cooks have developed a variation of the Nazi/Birch method known as the One Pot or Shake and Bake method that produces meth in one step, where ingredients are mixed together in a container such as a 2-liter plastic bottle. Another process for making meth is known as the P-2-P method, which produces a less potent form of meth known as racemic or dl-meth that is half as potent as the d-meth made with PSE.

Initial federal efforts to address a growing meth lab and abuse problem primarily focused on increasing meth-trafficking penalties and regulating the bulk importation, exportation, and distribution of meth precursor chemicals such as PSE. In 2004, Oklahoma was the first state to pass a law to control the retail sale of PSE products by requiring customers to present photo IDs and pharmacists to keep the product behind the counter and log all sales. By November 2005, over 30 other states had passed laws related to the control of the retail sale of PSE products. In 2006, the CMEA was enacted, which included measures designed to control the availability of meth precursor chemicals such as PSE by regulating the retail sale of OTC products containing these chemicals.
The CMEA placed restrictions on the sale of these products, including (1) requiring these products to be kept behind the counter or in a locked cabinet where customers do not have direct access; (2) setting a daily sales limit of 3.6 grams and a monthly purchase limit of 9 grams per customer regardless of the number of transactions; and (3) requiring sellers to maintain a logbook, written or electronic, to record sales of these products and verify the identity of purchasers. The CMEA does not prohibit states from taking actions to establish stricter sales limits or further regulate the sale of PSE products.

Since the passage of the CMEA, some states have implemented electronic systems to track sales of products containing PSE. Through these systems, retailers report sales of PSE products to a centralized database that can be used to determine whether individuals are exceeding the purchase limitations of the CMEA or state laws. Reported information typically includes the date and grams purchased, as well as the name, address, and other identifying information of the purchaser. Most tracking systems are stop sale systems that would query the database, notify the retailer whether the pending sale would violate federal or state purchase limitations, and deny sales where limits have already been reached. As of December 2012, 19 states were using stop sale tracking systems. Seventeen of these states were using a system called the NPLEx that is endorsed and funded by PSE manufacturers through CHPA. Two states were using systems developed in-house or by another vendor.

Some states and localities have taken additional steps to regulate PSE sales. Oregon, Mississippi, and 63 Missouri cities or counties have passed laws or ordinances requiring individuals to obtain a prescription from a health care provider in order to purchase PSE products. While a prescription is required, an in-person encounter with a health care provider may not be necessary to obtain the prescription.
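The stop sale check used by tracking systems reduces to a simple rule: a pending sale is allowed only if it would keep the purchaser within both the CMEA daily limit (3.6 grams) and the monthly limit (9 grams). The sketch below illustrates that rule in Python; the function name, the 30-day rolling window, and the purchase-history format are illustrative assumptions for this report, not details of NPLEx or any actual state system.

```python
from datetime import date, timedelta

# CMEA retail limits, in grams of PSE (illustrative constants)
DAILY_LIMIT_G = 3.6
MONTHLY_LIMIT_G = 9.0  # applied here over a rolling 30-day window

def allow_sale(purchase_history, sale_date, sale_grams):
    """Return True if a pending sale stays within both CMEA limits.

    purchase_history: list of (date, grams) tuples for one purchaser,
    as a centralized tracking database might return them.
    """
    # Grams already purchased on the day of the pending sale
    day_total = sum(g for d, g in purchase_history if d == sale_date)
    # Grams purchased in the 30-day window ending on the sale date
    window_start = sale_date - timedelta(days=29)
    month_total = sum(g for d, g in purchase_history
                      if window_start <= d <= sale_date)
    return (day_total + sale_grams <= DAILY_LIMIT_G and
            month_total + sale_grams <= MONTHLY_LIMIT_G)
```

A real system would also apply any stricter state limits and log the attempted sale, but the core accept-or-deny decision reported back to the retailer follows this pattern.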
There is no set limit to how much PSE can be prescribed. Both Oregon and Mississippi require that prescriptions for PSE products be entered into the states’ prescription drug monitoring programs, which allow pharmacists and prescribers to electronically look up how much PSE product has been prescribed to a patient. Figure 2 shows the states with prescription-only laws and ordinances and electronic tracking systems, including the dates these systems were implemented.

According to DEA data on meth lab incidents, after peaking in 2004, the number of lab incidents nationwide declined through 2007 following the implementation of state and federal regulations on PSE product sales. As shown in figure 3, the number of lab incidents peaked in 2004, with states reporting over 24,000 lab incidents nationally. Beginning in 2005, however, the number of incidents declined sharply, reaching a low of about 7,000 incidents in 2007. While multiple factors, such as region-specific ones, may have contributed to this decline, federal, state, and local law enforcement officials attribute the primary cause to the restrictions on purchases of PSE products imposed at both the federal and state levels from 2004 through 2006. The impact of these restrictions was to reduce the accessibility of PSE for use in illicit meth labs, which in turn resulted in fewer labs during this period.

After reaching a low in 2007, the number of meth lab incidents reported nationally increased over the next few years, reaching more than 15,000 at the end of 2010—more than double the number of reported incidents for 2007. Federal, state, and local law enforcement officials attribute this rising trend primarily to two factors:

The emergence of a new technique for smaller-scale production.
A production method popularly called the One Pot method, which simplified the entire meth production process down to a single 2-liter plastic bottle and enhanced the ability of individuals to make their own meth, began to emerge in 2007. With this method, meth addicts are capable of manufacturing their own meth more quickly and with less PSE, fewer chemicals, and less equipment than required by traditional meth-manufacturing methods, although this method also produces less meth than the traditional manufacturing methods. According to DEA data, more than 87 percent (43,726) of the labs seized with a capacity reported from 2008 through 2011 were smaller capacity (less than 2 ounce) labs, and about 74 percent (39,049) used the Nazi/Birch manufacturing process, of which the One Pot method is a variation. Less than 0.5 percent (219) of the labs seized during this period were super labs (labs producing 10 pounds or more of meth per batch), less than 13 percent (6,473) used the Red P method, and only 0.05 percent (26) used the P-2-P method, which does not require PSE as a precursor chemical.

Use of a method for meth producers to circumvent PSE sales restrictions.

Another key factor to which federal, state, and local officials attribute the increase in meth labs in recent years is the use of a method known as smurfing to work around PSE sales restrictions. Smurfing—which is discussed in greater detail later in this report—essentially involves a coordinated effort by individuals or groups of individuals to purchase the maximum legally allowable per-person amount of PSE products and then aggregate their purchases for use in meth production or for sale to a meth producer. Federal, state, and local officials stated that, consequently, using this technique, meth producers have been able to obtain the PSE product they need to make meth despite the federal and state sales restrictions. This, in turn, has led to the proliferation of more labs.
Further examination of data trends at the regional level reveals that the number of meth lab incidents varies greatly among regions of the country. Specifically, while the number of meth lab incidents continues to be low in the Northeast, and declines in the number of meth lab incidents have been maintained in the West since PSE sales restrictions went into place, the South and Midwest regions have experienced significant increases overall in the number of incidents since 2007. Further, the South and Midwest have also had more lab incidents than the West and Northeast since 2003 (see fig. 4). In general, these trends are consistent across all categories of lab types and capacities, except for incidents involving the P-2-P labs and labs of larger capacities (10 pounds or greater), for which the West tended to report higher numbers of incidents overall. Figure 5 shows lab incidents by state for the last decade (see app. II for this information by state).

Meth labs can have a significant impact on a community’s health care system when labs catch on fire or explode, causing serious injuries and burns to meth cooks and other individuals that require costly medical treatment. Mixing chemicals in meth labs creates substantial risks of explosions, fires, chemical burns, and toxic fume inhalation. The burns and related injuries resulting from these events can be more serious than burns and injuries sustained through non-meth-lab-related causes.
For example, a 2008 study of meth and non-meth burn patients who received treatment in one hospital burn unit in Kalamazoo, Michigan, from 2001 through 2005 found that meth lab patients tended to have more frequent inhalation injuries, needed greater initial fluid resuscitation volume, required intubation more frequently, and were more likely to have complications than non-meth patients. The relatively small size of the new One Pot or Shake and Bake method can make it even more dangerous than larger meth labs, as drugmakers typically hold the One Pot container up close, increasing the risk for severe burns from the waist to the face.

According to the director of the Vanderbilt University Regional Burn Center in Tennessee, meth lab injuries can also be more severe than burns resulting from fires alone because patients often suffer thermal burns from the explosion, as well as chemical burns from exposure to caustic chemicals. He also noted that meth lab burn patients tend to be more difficult to treat because their addiction and overall poor physical health make it difficult for them to facilitate their own recovery, and because most attempt to hide the cause of their injury, which can hinder the administration of proper care.

The treatment for meth lab-related burns and injuries can be very expensive. According to one provider, treatment costs for two meth lab burn patients exceeded $2 million per patient. Although accurate estimates of the proportion of burn victims who received their burns from a meth lab are difficult to obtain, one estimate placed the percentage of meth lab burn patients at 25 to 35 percent of total burn patients. Of those patients identified as receiving their injuries from meth labs, many are found either to have no health insurance or to have publicly funded insurance such as Medicaid.
For example, the 2008 Kalamazoo study also found that significantly fewer meth burn patients had private insurance, while more were on Medicaid or had no insurance as compared with non-meth burn patients. As part of reporting a lab seizure to the DEA's NSS, law enforcement is required to report on the number of children affected by the lab, such as those living at the site as well as those that might have visited the site. Exposure to the toxic chemicals used in meth production can damage the brain, liver, kidney, spleen, and immunologic system, and result in birth defects. In addition to the physical dangers, children in environments where meth is being made are also reported to be at risk of abuse or neglect by their parents or other adults. Parents and caregivers who are meth dependent can become careless and often lose their capacity to care for their children, such as by ensuring their children's safety and providing essential food, dental and medical care, and appropriate sleeping conditions. Children living in households where meth labs are operated are also at increased risk of being physically and sexually abused by members of their own family or other individuals at the site. To protect the children discovered at meth lab sites from further harm and neglect, social service agencies remove the children from their homes and place them in foster care. Foster care is a social welfare service that serves the needs of abused and neglected children. Child welfare workers can remove a child if it is determined that remaining with the parents will jeopardize the child's welfare. Children are placed either with a surrogate foster family or in a residential treatment facility called a group home, with the intent to provide temporary housing in a safe and stable environment until reunification with the child's birth parents or legal guardians is possible. Reunification happens once the state is convinced that the harmful factors that triggered removal no longer exist.
Several states and jurisdictions have created special protocols and programs to address the needs of children exposed to clandestine meth labs. These protocols and programs typically involve medical screening of the children for toxicity and malnourishment, emergency and long-term foster care, and psychological treatment. Social service agencies may also seek to enroll meth-involved parents and their children in a family-based treatment program, where both the parents and children receive services. Family-based treatment programs offer treatment for adults with substance use disorders and support services for their dependent children in a supervised, safe environment that allows the family to remain together and prevents exposure to further harm. The costs to state human services agencies to provide services to these children can be significant depending on the number, age, and specific needs of the children. For example, from January 2006 through December 2011, the Missouri Department of Social Services substantiated 702 reports of children exposed to meth labs, involving a total of 1,279 children. Of those 1,279 children, 653 required placement in departmental custody. According to the department, the total cost of providing custodial care to children exposed to meth labs in Missouri since August 2005 was approximately $3.4 million. In one Missouri county, so many children have been removed from meth lab homes and placed in state custody that there are no longer any foster families available to care for them. Similarly, according to the Tennessee Department of Children's Services, 1,625 children were removed from meth lab homes from January 2007 through December 2011 and placed in foster care at a cost of approximately $70.1 million.
The raw materials and waste of meth labs pose environmental dangers because they are often disposed of indiscriminately by lab operators to avoid detection, and they can also cause residual contamination of exposed surfaces of buildings and vehicles where the meth was being made. According to DEA, for every pound of meth produced, 5 to 6 pounds of toxic waste are produced. Common practices by meth lab operators include dumping this waste into bathtubs, sinks, or toilets, or outside on surrounding grounds or along roads and creeks. Some may place the waste in household or commercial trash or store it on the property. In addition to dumped waste, toxic vapors from the chemicals used and the meth-making process can permeate walls and ceilings of a home or building or the interior of a vehicle, potentially exposing unsuspecting occupants. As a result, the labs can end up contaminating the interiors of dwellings and vehicles, as well as water sources and soil around the lab site, for years if not treated. Because of the dangerous chemicals used in making meth, cleaning up clandestine methamphetamine labs is a complex and costly undertaking. According to regulations promulgated for the Resource Conservation and Recovery Act by the Environmental Protection Agency, the generator of hazardous waste is the person who produced or first caused the waste to be subject to regulation. The act of seizing a meth lab causes any chemicals to become subject to regulation and thus makes the seizing law enforcement agency the "generator" of the waste. Accordingly, seizing a lab makes a law enforcement agency responsible for cleaning up the hazardous materials and the costs associated with the cleanup. The materials seized at a clandestine drug laboratory site become waste when law enforcement officials make the determination of what to keep as evidence. Those items not required as evidence are considered hazardous waste and must be disposed of safely and appropriately.
The task of removal and disposal of the hazardous waste is usually left to contractors who have specialized training and equipment to remove the waste from the lab site and transport it to an EPA-regulated disposal facility. Depending on the size of the lab, the cost for such a service to respond to an average lab incident can range from $2,500 to $10,000, or as much as $150,000 to clean up super labs, according to DOJ. To help state and local agencies with the expense of lab cleanup, DEA established a lab cleanup program under which DEA contracts with vendors and pays them to conduct the cleanup on behalf of the law enforcement agency seizing the lab. In fiscal year 1998, DEA began funding cleanups of clandestine drug labs that were seized by state and local law enforcement agencies, focusing on the removal and disposal of the chemicals, contaminated apparatus, and other equipment. State and local law enforcement agencies seeking to use this service contact DEA to coordinate the cleanup effort. According to DEA program officials, DEA has spent over $142 million on these cleanups nationwide since calendar year 2002. See figure 6. Given that labs can be placed in a wide range of locations, such as apartments, motel rooms, homes, or even cars, there is also the potential need for further remediation of these areas beyond the initial cleanup of hazardous waste if they are to be safely used or occupied again. Whereas cleanup involves the removal of large-scale contaminants, such as equipment and large quantities of chemicals, for the purpose of securing evidence for criminal investigations and reducing imminent hazards such as explosions or fires, remediation involves removing residual contaminants in carpeting or walls, for example, to eliminate the long-term hazards posed by residual chemicals.
Procedures for remediation of a property or structure usually involve activities such as the removal of contaminated items that cannot be cleaned, such as carpeting and wallboard; ventilation; chemical neutralization of residues; washing with appropriate cleaning agents; and encapsulation or sealing of contaminants, among other activities. Depending on the extent of the contamination, the cost to remediate a property can be substantial. Extremely contaminated structures may require demolition. However, unlike the funding that is available for initial lab cleanup from DEA, there are no federal funds available for remediation, leaving the owner of a contaminated property responsible for the costs of any remediation to be done. Because of their toxic nature, meth labs pose a serious physical danger to law enforcement officers who come across or respond to them, and therefore must be handled using special protective equipment and training that are costly to law enforcement agencies. The process of cooking meth, which can result in eye and respiratory irritations, explosions and fires, toxic waste products, and contaminated surroundings, can be dangerous not only to the meth cook but also to persons who respond to or come across a lab, such as law enforcement officers. Because of the physical dangers posed by the labs, the Occupational Safety and Health Administration has established requirements for persons, including law enforcement, entering a clandestine lab. These requirements include training on hazardous waste operations, annual physical exams to monitor the ongoing medical condition of individuals involved in handling meth lab sites, and guidelines for protective equipment to be used when working in a lab.
Consequently, whether the lab is raided by investigators or encountered by accident during the course of an investigation, first responders and police agencies are required to provide their personnel specialized training and equipment, such as hermetically sealed hazmat suits, to safely process a lab. Processing a lab typically also requires additional officers suited up outside the lab as a backup team in case something happens with the lab and they need to respond, and at least one other officer on-site to provide security while the lab is being processed for evidence and cleanup. According to one estimate provided by a law enforcement agency in Indiana, the cost to the agency of the officers' time as well as the protective equipment and processing supplies required to respond to a lab can exceed $2,000 per lab. Given these costs, law enforcement officials from all case study states agreed that responding to meth labs can be a significant financial burden on their agencies. For example, in fiscal year 2010, the Tennessee Meth Task Force spent $3.1 million providing equipment and training to law enforcement personnel and responding to meth lab incidents. Further, unlike large multinational drug-trafficking organizations, meth lab operators are usually lower income and producing meth for personal use; thus operators usually have little in the way of valuable assets or cash that law enforcement agencies can seize as a way of recouping the lab seizure response costs. Electronic tracking systems can help prevent individuals from purchasing more PSE product than allowed by law. By electronically automating and linking logbook information on PSE sales and monitoring sales in real time, stop sale electronic tracking systems can block individuals attempting to purchase more than the daily or monthly PSE limits allowed by federal or state laws.
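As an illustration only, the stop-sale check described above can be sketched in a few lines of code. The gram limits below mirror the federal CMEA caps (3.6 grams per day and 9 grams per 30-day period); an actual system such as NPLEx applies whichever federal or state limit is stricter, and the simple logbook data model here is purely hypothetical.

```python
from datetime import date, timedelta

# CMEA retail caps; state limits may be stricter (assumption for this sketch).
DAILY_LIMIT_G = 3.6
ROLLING_30_DAY_LIMIT_G = 9.0

def purchase_allowed(history, buyer_id, purchase_date, grams):
    """Return True if a PSE sale may proceed, False if it must be blocked.

    history: linked logbook entries as (buyer_id, date, grams) tuples.
    """
    # Total already purchased by this buyer on the day of sale.
    day_total = sum(g for b, d, g in history
                    if b == buyer_id and d == purchase_date)
    # Total purchased in the 30-day window ending on the day of sale.
    window_start = purchase_date - timedelta(days=29)
    month_total = sum(g for b, d, g in history
                      if b == buyer_id and window_start <= d <= purchase_date)
    if day_total + grams > DAILY_LIMIT_G:
        return False  # block: would exceed the daily cap
    if month_total + grams > ROLLING_30_DAY_LIMIT_G:
        return False  # block: would exceed the 30-day cap
    return True       # sale may proceed; the sale would then be logged

# A buyer who already bought 3.6 g today is blocked from buying more.
log = [("ID-123", date(2012, 7, 1), 3.6)]
print(purchase_allowed(log, "ID-123", date(2012, 7, 1), 2.4))  # False
print(purchase_allowed(log, "ID-123", date(2012, 7, 2), 2.4))  # True
```

Because the logbook is shared across participating states, the same check blocks an over-limit purchase attempted in any NPLEx state; as the report notes, the check is keyed to the identification presented, which is why smurfing and fake IDs can circumvent it.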
All sales in states using the NPLEx system are linked; thus the system can also be used to block individuals who attempt to purchase more than the allowable amount of PSE in any state using the NPLEx system. According to data provided by the vendor that provides the NPLEx software platform, in 2011, the system was used to block the sale of more than 480,000 boxes and 1,142,000 grams of PSE products in 11 states. Similarly, as of July 31, 2012, the system was used to block the sale of more than 576,000 boxes and 1,412,000 grams of PSE products in the 17 states using the system in 2012. See table 1. By automating the logbook requirement set forth by the CMEA, electronic tracking systems can make PSE sales information more accessible to law enforcement to help it investigate potential PSE diversion, find meth labs, and prosecute individuals for meth-related crimes. Law enforcement officials we spoke with in all four case study states that use electronic tracking systems reported using the systems for one or more of these purposes. For example, officers from a Tennessee narcotics task force told us how they use the NPLEx system to help identify the diversion of PSE for meth production. According to these officers, the NPLEx system provides them with both real-time and on-demand access to pharmacy logs via a website and includes automated tools that enable them to monitor suspicious buying patterns or specific individuals. In one particular case, the task force used NPLEx's monitoring tools to place a watch on a specific individual previously identified as being involved in illegal meth activity. When the individual subsequently purchased PSE, the task force received a notification e-mail of the purchase and upon further investigation was able to determine that the individual had sold the PSE to a Mississippi meth cook.
Some law enforcement officials in our four case study states reported that they do not actively use the electronic tracking systems for investigations but rather rely on other sources such as informants, meth hotlines, citizen complaints, and routine traffic stops to identify potential diversion and meth labs. Nevertheless, these officials acknowledged using these systems to obtain evidence needed to prosecute meth-related crimes after meth labs have been found. For example, a law enforcement official in Iowa noted that after officials have identified a suspected lab operator or smurfer, they can use the data in NPLEx to help build their case for prosecution or sentencing by using the records to estimate the amount of PSE that was potentially diverted for meth production. They can also determine from which retailers they need to obtain video evidence to confirm the identity of the individual making the purchase. Law enforcement officials in Indiana and Tennessee, two states that recently moved from lead-generating systems to the NPLEx stop sale system, reported some challenges with NPLEx as a diversion investigation tool. Prior to the implementation of NPLEx, law enforcement was able to use the lead-generating systems in place to identify individuals who exceeded purchase limits and then take enforcement action or obtain a search warrant based upon the criminal offense. However, according to these officials, given that NPLEx blocks individuals from exceeding purchase limits, individuals involved in diversion are no longer as readily identifiable as persons of interest, and it now takes longer and is more labor intensive to investigate potential PSE diversions because officials no longer have arrest warrants as a tool to get into a residence suspected of having a meth lab.
While electronic tracking systems such as NPLEx are designed to prevent individuals from purchasing more PSE than allowed by law, meth cooks have been able to limit the effectiveness of such systems as a means to reduce diversion through the practice of smurfing. Smurfing is a technique meth cooks use to obtain large quantities of PSE by recruiting individuals or groups of individuals to purchase the legally allowable amount of PSE products at multiple stores; the product is then aggregated for meth production. By spreading out PSE sales among individuals, smurfing circumvents the preventive blocking of stop sale tracking systems. Meth lab incidents in states that have implemented electronic tracking systems have not declined, in part because of smurfing. For example, meth lab incidents in the three states—Oklahoma, Kentucky, and Tennessee—that have been using electronic tracking systems for the longest period of time are at their highest levels since the implementation of state and federal PSE sales restrictions. While these states experienced initial declines in meth lab incidents immediately following the state and federal PSE sales restrictions put in place from 2004 through 2006, lab incidents have continued to rise since 2007, likely in part because of the emergence of smurfing and the use of the One Pot method for production (see table 2). Law enforcement officials from every region of the country report that the PSE used for meth production in their areas can be sourced to local and regional smurfing operations.
The methods, size, and sophistication of these operations can vary considerably—from meth users recruiting family members or friends to purchase PSE for their own individual labs to larger-scale operations where groups purchase and sell large quantities of PSE to brokers for substantial profits, who in turn often sell the PSE to Mexican drug-trafficking organizations operating super labs in California. Individuals recruited for smurfing have included the elderly, homeless, college students, the mentally handicapped, and inner city gang members, among others. The use of fake identification by smurfs is an area of growing concern for law enforcement. Smurfs can use several different false IDs to purchase PSE above the legal limit without being detected or blocked by a tracking system. For example, in 2012, through a routine traffic stop, state and local law enforcement officials in Tennessee identified a smurfing ring in which a group of at least eight individuals had used more than 70 false IDs over a 9-month period to obtain over 664 grams of PSE. All of the IDs had been used to purchase the maximum amount of PSE allowed, with only one transaction (2.4 grams of PSE) blocked by the electronic tracking system. Law enforcement officials from the four electronic tracking case study states emphasized that investigating smurfing rings can be very time and resource intensive because of the large number of persons involved and the potential use of fraudulent identifications. The use of fake IDs for smurfing can also affect the use of electronic tracking systems as tools to assist in the prosecution of meth-related crimes.
According to the National Methamphetamine & Pharmaceuticals Initiative (NMPI) advisory board, smurfers are increasingly utilizing fake identification and "corrupting" electronic tracking databases to the point where prosecutors prefer eyewitness accounts and investigation (law enforcement surveillance) of violations before filing charges or authorizing arrests or search warrants. This results in costly, manpower-intensive investigations. In summary, based on the experience of states that have implemented electronic tracking, while it has not reduced meth lab incidents overall, this approach has had general impacts, but also potential limitations, including the following:

- Under the current arrangement with CHPA, the operating expenses of NPLEx are paid for by PSE manufacturers and provided to the states at no cost.

- Automating the purchase logbooks required by the CMEA and making the logbook information available in an electronic format to law enforcement is reported to be a significant improvement over paper logs that have to be manually collected and reviewed. This record-keeping ability is reported to have also been useful in developing and prosecuting cases against individuals who have diverted PSE for meth production.

- Electronic tracking maintains the current availability of PSE as an OTC product under limits already in place through the CMEA and related state laws.

- The NPLEx system helps to block attempts by a consumer using a single identification to purchase PSE products in amounts that exceed the legal limit, and can prevent excessive purchases made at one or more locations.

- Although PSE manufacturers currently pay for the NPLEx system, depending on the circumstances, their financial support may not necessarily be sustained in the future.
- Although electronic tracking can be used to block sales of more than the legal amount to an individual using a given identification, through the practice of smurfing, individuals can undermine this feature and PSE sales limits by recruiting others to purchase on their behalf or by fraudulently using another identification to make PSE purchases.

- According to some law enforcement officials, the stop sale approach of the NPLEx system makes it more challenging to use the system as an investigative tool than a lead-generating system because it prevents individuals from exceeding purchase limits, which would otherwise make them more readily identifiable to law enforcement as persons of interest.

- The practice by smurfers of using fraudulent identification to purchase PSE products has been reported to diminish the ability of electronic tracking systems to assist in the prosecution of meth-related crimes. According to some law enforcement officials, the rising use of fraudulent identifications has also increased the need to gather eyewitness accounts or conduct visual surveillance to confirm the identities of the individuals, a development that in turn has been reported to lead to more time- and resource-intensive investigations.

The number of reported meth lab incidents in both Oregon and Mississippi declined following the adoption by those states of the prescription-only approach for PSE product sales (see fig. 7). In the case of Oregon, the number of reported meth lab incidents had already declined by nearly 63 percent by 2005 from their 2004 peak of over 600 labs. After the movement of PSE products to behind-the-counter status in Oregon in 2005 and implementation of the CMEA and state-imposed prescription-only approach in 2006, the number of reported meth lab incidents in Oregon continued to decline in subsequent years.
In Mississippi, after the adoption of the prescription-only approach in 2010, the number of reported meth lab incidents declined from their peak by 66 percent, to approximately 321 labs in 2011 (see fig. 7). The communities in Missouri that have adopted local prescription-only requirements have also experienced a decline in the number of meth labs. For example, while lab incidents statewide in Missouri increased nearly 7 percent from 2010 to 2011, the area in southeastern Missouri where most of the communities have adopted prescription-only ordinances saw lab incidents decrease by nearly half. Even as declines were observed in Oregon and Mississippi after implementing the prescription-only approach, declines were also observed in neighboring states that did not implement the approach, possibly because of other regional or reporting factors. For example, all states bordering Oregon also experienced significant declines in meth labs from 2005 through 2011, ranging from a 76 percent decline for California to a 94 percent decline for Washington state. In Mississippi's case, except for Tennessee, all bordering states also experienced declines in lab incidents from 2009 through 2011, ranging from a 54 percent decrease in Arkansas to a 57 percent decline in Louisiana. Consequently, there may be some other factors that contributed to the lab incident declines across all these states regardless of the approach chosen. One potential factor for the declines observed from 2010 through 2011 is the exhaustion of DEA funds to clean up labs. According to DEA officials, because the funds provide an incentive to state and local agencies to report meth lab incidents to DEA, the lack of funds from February 2011 to October 2011 may have resulted in fewer lab incidents being reported during this time period. Other potential factors within the states may have also contributed to declines in the number of lab incidents in neighboring states.
For example, Arkansas law enforcement officials reported that in 2011, a change in state law took effect that made it illegal to dispense PSE products without a prescription unless the person purchasing the product provided a driver's license or identification card issued by the state of Arkansas, or an identification card issued by the United States Department of Defense to active military personnel. In addition, Arkansas law requires that a pharmacist make a professional determination as to whether or not there is a legitimate medical and pharmaceutical need before dispensing a nonexempt PSE product without a valid prescription. As a result of these additional requirements, retailers such as Walmart decided to no longer sell PSE products OTC in Arkansas and instead require a prescription. According to state and local law enforcement officials in Oregon and Mississippi, the prescription-only approach contributed to the reduction of reported meth lab incidents within those states. For example, according to the executive director of the Oregon Criminal Justice Commission and the directors of the Mississippi Bureau of Narcotics and the Gulf Coast HIDTA, the decline in meth lab incidents in their states can be largely attributed to the implementation of the prescription-only approach. Although their perspectives cannot be generalized across the broader population of local law enforcement agencies, law enforcement officials of other agencies we met with in Oregon and Mississippi also credited the reduction in meth lab incidents to the implementation of the prescription-only approach. To determine the extent to which the declines in lab incidents in Oregon were due to the prescription-only approach rather than other variables, such as regional or reporting factors, we conducted statistical modeling analysis of lab incident data, the results of which indicate a strong association between the prescription-only approach and a decline in meth lab incidents.
Specifically, our analysis showed a statistically significant associated decrease in the number of lab incidents in Oregon following introduction of the law, with the lab incident rate falling by over 90 percent after adjusting for other factors. With the decline in meth lab incidents, officials in the prescription-only states reported observing related declines in the demand and utilization for law enforcement, child welfare, and environmental cleanup services that are needed to respond to meth labs:

Law enforcement: Local law enforcement officials in Oregon and Mississippi reported that the reduction in meth lab incidents has reduced the resource and workload demands on their departments to respond to and investigate meth labs. For example, one chief of a municipal police department in Oregon reported that the decline in meth labs has resulted in reduced costs to his department, largely in the form of reduced manpower, training, and equipment expenses, and noted that lab seizures are now so rare that his department no longer maintains a specialized team of responders to meth labs. Another chief of a municipal police department in Mississippi noted that since the adoption of the prescription-only approach, the amount of time and resources spent on meth-related investigations has declined by at least 10 percent.

Child welfare: Officials in both Oregon and Mississippi reported a reduction in the demand for child welfare services to assist children found in households where meth lab incidents occurred. For example, according to a coordinator in Oregon's Department of Human Services, the state has not removed a child from a household with an active lab since 2007. In Mississippi, the Methamphetamine Field Coordinator with the state Bureau of Narcotics, which tracks the number of drug-endangered children for the state, reported that the number of such children declined by 81 percent in the first year that the prescription-only approach was in effect.
Environmental cleanup: According to data from DEA and the Oregon Department of Environmental Quality, declines in costs to clean up labs in Oregon occurred prior to the implementation of the prescription-only approach, falling from almost $980,000 in 2002 to about $580,000 in 2005. Since 2006, costs for lab cleanup have continued to fall and were about $43,000 in 2011. Funding for cleanups in Mississippi showed more variation and fluctuation from year to year; however, between 2010, when the prescription-only approach was implemented, and 2011, cleanup costs dropped by more than half (from over $1 million to less than $400,000).

However, even as the prescription-only approach appears to have contributed to reducing the number of lab incidents in Oregon, the availability and trafficking of meth is still widespread and a serious threat in the state. According to a threat assessment by the Oregon HIDTA, while the number of reported meth lab incidents has declined, crystal meth continues to be highly available in the area as Mexican drug traffickers import the finished product from laboratories outside the state and from Mexico. Moreover, while the prescription-only approach appears to have contributed to a reduction in the number of meth labs in the states that have adopted it, the experience of these states to date has shown that the approach does not preclude individuals from traveling to neighboring states to purchase PSE products for use in meth labs. Consequently, even as the number of meth lab incidents has declined in prescription-only states, law enforcement reports that many lab incidents that still occur in these states are largely due to PSE product obtained from states without a prescription requirement for PSE.
For example, according to a threat assessment by NDIC, law enforcement officers interviewed in 2011 reported that the more stringent restrictions on pseudoephedrine sales in Mississippi have led many pseudoephedrine smurfing groups to target pharmacies in the neighboring states of Alabama, Louisiana, and Tennessee in order to continue operations. Officials of a sheriff’s office in a county located along the Gulf Coast in Mississippi stated that the department’s investigations have found that large numbers of individuals from Mississippi travel out of state to purchase PSE in an effort to circumvent the Mississippi prescription-only law. While some out-of-state purchases may be for licit uses, the officials stated that they believed a substantial proportion of the PSE brought back from other states was likely being diverted for the production of meth. According to law enforcement officials in Oregon, most of the incidents reported there in recent years involved either dumpsites or inactive “boxed labs” that had been used in previous years but have been dismantled and stored away for potential future use. According to the legal counsel for the Oregon Narcotics Enforcement Association, the association asked law enforcement to determine the source of PSE for lab incidents, in cases where that could be determined. In every case where a determination could be made, it was reported that the PSE was obtained from neighboring states, mostly Washington, but also Idaho, California, and Nevada. According to PSE purchase activity data from the NPLEx electronic tracking system and the vendor that provides its software platform, individuals using Oregon identifications have purchased PSE products in neighboring states. These data indicate that from October 15, 2011, through August 31, 2012, over 30,000 purchases were made by individuals using Oregon identifications. 
Similarly for Mississippi, reports by law enforcement of individuals traveling to neighboring non-prescription-only states to purchase PSE products are supported by PSE purchase activity data provided by the NPLEx electronic tracking system. Since the NPLEx system was implemented in these states, the PSE purchase activity data indicate that over 172,000 purchases have been made by individuals using Mississippi identifications. Arkansas and Alabama have enacted laws that restrict access to PSE products for out-of-state residents, including Mississippi residents who seek to obtain the products in those states (2011 Ark. Acts 588; see Ark. Code Ann. §§ 5-64-1103 to -1105). In essence, the impact of these laws is to extend the prescription-only requirement for Mississippi residents into Arkansas and Alabama. Officials from the Mississippi Bureau of Narcotics said these laws will help prevent PSE product from being obtained and diverted to Mississippi for use in meth labs. In addition to obtaining PSE products from non-prescription states, another potential source of PSE for meth labs in prescription-only states and localities is the illicit diversion of PSE obtained with a prescription. Similar to techniques used to divert other controlled prescription drugs such as pain relievers, diversion of prescribed PSE can occur through prescription forgery, illegal or improper prescribing by a physician, or "doctor shopping," where an individual goes to several doctors to obtain a prescription from each. Although these may provide potential sources of PSE for use in meth labs in prescription-only states, law enforcement officials in Oregon and Mississippi reported no known instances from their meth lab investigations of finding that a PSE product was obtained through one of those methods in order to make meth. Law enforcement officials in Missouri localities where the prescription-only requirement has been adopted reported a few instances of PSE obtained with a prescription being used to make meth.
According to investigators from a regional drug task force in a Missouri county, they have found PSE obtained by prescription in at least three meth lab incidents. Since the county adopted the prescription-only approach, they have observed more instances in which prescriptions for PSE are found at lab incidents. However, they found no evidence in these cases that the PSE had been prescribed illegally or obtained through prescription forgery or doctor shopping.

Judging from the experience of Mississippi, the volume of PSE products obtained by consumers declined after the adoption of the approach from the levels that existed when PSE was available OTC. Data on Mississippi OTC PSE product sales and the number of PSE prescriptions filled suggest that use of PSE products may have fallen by several hundred thousand units after implementation of the prescription-only approach. For example, annual unit sales of PSE dropped from almost 749,000 in 2009, before the prescription-only approach went into effect; to approximately 480,000 total units sold OTC or prescribed in 2010, when the approach was in effect for half the year; to approximately 191,000 units prescribed or sold during 2011, when the approach had been in place for the full year (see table 3). Comparable data are not available for Oregon on the sales of PSE products immediately before and after implementation of the prescription-only approach.

Given the more restrictive access to PSE products under the prescription-only approach, consumers can be expected to be affected to some extent. The size of this effect depends on variables such as how much the approach's requirements change the effective price of PSE and the availability of effective substitutes or alternative remedies for PSE.
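The decline described above can be restated as year-over-year and cumulative percentage drops. A minimal sketch using the Mississippi unit volumes cited from table 3:

```python
# Approximate annual Mississippi PSE unit volumes cited above:
# OTC sales in 2009; OTC plus prescribed units in 2010 and 2011.
units = {2009: 749_000, 2010: 480_000, 2011: 191_000}

def pct_decline(before, after):
    """Percentage decline from `before` to `after`."""
    return (before - after) / before * 100

drop_2010 = pct_decline(units[2009], units[2010])   # ~35.9%
drop_2011 = pct_decline(units[2010], units[2011])   # ~60.2%
drop_total = pct_decline(units[2009], units[2011])  # ~74.5%
print(f"{drop_2010:.1f}% {drop_2011:.1f}% {drop_total:.1f}%")
```

The cumulative figure shows a roughly three-quarters reduction in PSE volume over the two years following enactment.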
Under the prescription-only approach, the effective price of PSE includes the costs associated with obtaining a prescription, such as the time and travel required to visit a physician as well as any copays or out-of-pocket charges for the appointment itself. If an in-person visit is necessary, this effective price rises, and consumers obtaining PSE prescriptions at the higher effective price can be expected to be negatively affected to some extent. Some of these costs, such as the time and travel for an in-person appointment, can be mitigated to the extent that patients can obtain a PSE prescription through a telephone consultation with their physicians. While the effective price of PSE products is likely higher under the prescription-only approach, data on consumers' costs of obtaining these prescriptions are not available to make this comparison. Further complicating the determination is that the actual costs of a given consumer's time, travel, and insurance coverage vary from consumer to consumer depending on individual circumstances. For example, uninsured consumers will likely face higher effective costs to obtain PSE products under a prescription-only approach than those with insurance. Because of the uncertainty involving these variables, it is not possible to determine the magnitude of the change in the effective price of PSE for consumers.
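The idea of an effective price can be made concrete with a simple illustration. The function and all dollar figures below are hypothetical, chosen only to show how prescription-related costs raise the total cost of a purchase and how a telephone consultation mitigates part of it:

```python
def effective_price(retail, copay=0.0, visit_oop=0.0,
                    hours=0.0, hourly_value=0.0, travel=0.0):
    """Illustrative effective price of a PSE purchase: the retail price
    plus prescription-related costs (copay, out-of-pocket visit charge,
    the value of time spent, and travel costs)."""
    return retail + copay + visit_oop + hours * hourly_value + travel

# OTC purchase: only the retail price (hypothetical values throughout).
otc = effective_price(retail=8.00)

# Prescription-only, insured consumer, in-person visit.
insured = effective_price(retail=8.00, copay=20.00, hours=1.5,
                          hourly_value=15.00, travel=5.00)

# Prescription obtained by telephone consultation: time and travel mitigated.
phone = effective_price(retail=8.00, copay=20.00, hours=0.25,
                        hourly_value=15.00)

print(otc, insured, phone)  # 8.0 55.5 31.75
```

An uninsured consumer would substitute a full out-of-pocket visit charge for the copay, raising the effective price further, which is why the burden varies with individual circumstances.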
Despite the likely increase in the effective price of PSE under the prescription-only approach, state agencies and consumer groups report that consumers in Oregon and Mississippi have made few complaints about the approach since its implementation, although research or surveys on the issue have not been conducted. For example, according to the executive director of the Oregon Board of Pharmacy, the state agency that adopted the rule making PSE a controlled substance, the board received a small number of complaints from consumers when PSE was initially scheduled, but after a number of months the complaints stopped. Officials at the Mississippi Board of Pharmacy also noted that they have not received any complaints from consumers about the prescription requirement since it went into effect. Consumer and patient advocacy organizations such as the National Consumers League and the Asthma and Allergy Foundation of America, which surveyed consumers about access to PSE products in 2005 and 2010, respectively, report that neither organization has received feedback or complaints from consumers or patients in either state about the diminished access to PSE products under the prescription-only approach. Both organizations also noted that they have not conducted any additional research or surveys on the issue since those earlier surveys.

Another variable that determines the impact of the prescription-only approach on consumers is the availability of substitutes for PSE that consumers can use to offset any increase in the effective price of obtaining PSE by prescription.
To ensure that consumers still have access to an unrestricted oral OTC decongestant, manufacturers of cold and allergy medicines reformulated many products by substituting the ingredient phenylephrine (PE), an alternative oral decongestant that is also approved by FDA for use in OTC medicines and that cannot be used to make methamphetamine. However, according to sales data on PE products in Mississippi for the periods before and after implementation of the prescription-only approach, the changes in sales volume for PE products do not appear to show any direct substitution of PE for PSE by consumers. In fact, PE product sales volume decreased for the 52-week period ending in December 2011 (see table 4). The lack of a consumer shift from PSE products to PE products could be the result of several potential factors, but data are limited or unavailable to ascertain their impact. For example, it could reflect a consumer perception that, on average, PE is not an effective substitute for PSE. Similarly, it could indicate that consumers are choosing to forgo medicating their conditions or are using other medications or remedies to relieve their symptoms. Another potential contributing factor is the extent to which PSE sales were being diverted for meth use. Although available estimates of the extent of PSE diversion vary greatly, the drop in PSE sales without a corresponding increase in PE product sales could imply that some PSE sales were likely being diverted for meth production. According to officials of the market research firm that provided the PE sales data, another potential explanation for the lack of a distinct shift in demand for PE is that several PE products had to be recalled by the manufacturer because of manufacturing issues.
Industry has noted that PE has limitations as a direct substitute for PSE, and in 2007 FDA reexamined the effectiveness of PE at the approved dosing levels. At the request of citizen petitioners who claimed that the available scientific evidence did not demonstrate the effectiveness of PE at the approved 10-milligram dosage level, an FDA advisory committee reviewed the issue in December 2007, including two meta-analyses of studies provided by the citizen petitioners and CHPA. After reviewing this evidence, the committee concluded that, while additional studies would be useful to evaluate higher doses, the 10-milligram PE dose was effective. However, since the advisory committee's 2007 recommendation to study the effectiveness of PE at higher dosage levels, limited work appears to have been undertaken to do so. According to CHPA, while it agrees that the approved dosage levels of PE are effective, PE has known limitations that make it a less than viable substitute for PSE in some long-duration applications and for many consumers.

As would be expected under the more restrictive prescription-only approach, consumers of PSE products are negatively affected to some extent by its enactment, considering the variables that determine the change in the effective price of PSE products. However, because of uncertainties related to these variables, such as consumers' individual circumstances regarding insurance or the need for an in-person consultation with their physicians, the effectiveness of substitutes such as PE or other alternatives, and the extent to which PSE sales had been diverted for illicit purposes, the net effect on consumer welfare of a prescription-only policy cannot be quantified.
One concern expressed by industry about the prescription-only approach is that it is likely to increase the workload of health care providers and the overall health care system to some extent. Both the Oregon and Mississippi laws require individuals to obtain a prescription from a health care provider, which entails some type of visit or consultation and thus additional provider workload to process the prescriptions. In addition, individuals who do not already have an established relationship with a health care provider may require a more involved initial in-person visit to obtain a prescription, and pharmacies may experience increased workload because of new dispensing requirements. Assuming that health care providers charge prices that reflect the costs of providing these additional services, any increase in provider workload should be reflected in the office charge billed to the patient.

While the impact of the prescription-only approach on the health care system is generally unknown, the limited information available from health care providers in Oregon and Mississippi does not suggest a substantial increase in workload to provide and dispense prescriptions for PSE products. According to a 2011 study commissioned by CHPA on managing access to PSE, judging from Oregon's experience, the number of health care provider visits did not grow significantly, as consumers have noted obtaining a prescription via telephone or fax request. Officials from associations representing physicians in Oregon stated that their members have not reported any real impact on their practices, and feedback from members suggests that the benefits of fewer meth labs outweigh any inconvenience from requests for prescriptions.
Officials from the association representing Mississippi physicians similarly reported that, from the perspective of a limited sample of its members involved in family practice, emergency room care, and addiction treatment, no increase had been observed in the demand for appointments from patients seeking PSE products. In addition, representatives of the association representing pharmacists in Oregon stated that they have received few complaints about the prescription-only requirement. Further, reports from the experience of Oregon and Mississippi indicate that there has not been a significant increase in cost to the states' Medicaid programs. Officials in those states said there was no net change in their programs' policy with the statewide implementation of the prescription requirement because their programs already required participants to obtain a prescription for PSE products in order to have the medication covered under the states' Medicaid pharmacy benefit formulary.

In summary, on the basis of the experience of Oregon and Mississippi, the prescription-only approach has had the following impacts:

Its apparent effectiveness in reducing the availability of PSE for meth production has in turn helped to reduce, or maintain a decline in, the number of meth lab incidents in the states that have adopted the approach.

The reduction in meth lab incidents has led to a corresponding decline in communities' demand for child welfare, law enforcement, and environmental cleanup services to respond to the secondary impacts of the labs.

Although it is difficult to quantify because of the lack of data and the wide variation in consumers' individual circumstances, the approach has the potential to place additional burdens on consumers to some extent.
It has increased the potential for additional workload and costs for the health care system to provide prescriptions for PSE products, although the limited information and data available to date do not indicate that these have been substantial in the two states that have adopted the approach.

It has increased the possibility that consumers in a prescription-only state will attempt to bypass the requirement by purchasing PSE in a neighboring nonprescription state.

We provided a draft of this report to the Department of Justice and ONDCP for comment. Justice and ONDCP did not provide written comments on the draft, but both provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Attorney General, the Director of the Office of National Drug Control Policy, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Carol Cha at (202) 512-4456 or chac@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.

Our objectives were to identify (1) trends in domestic meth lab incidents over the last decade and the impact of domestic meth labs on the communities affected by them; (2) the impact of electronic tracking systems on domestic meth lab incidents and the limitations, if any, of using these systems; and (3) the impact of prescription-only laws on domestic meth lab incidents and any implications of this approach for consumers and the health care system.
To identify the trends in domestic meth lab incidents over the last decade, we obtained and analyzed data from the Drug Enforcement Administration's (DEA) National Seizure System (NSS) on nearly 149,000 lab seizure incidents that occurred in all states during the last 10 calendar years, 2002 through 2011. Using these data, we analyzed the number of incidents nationally, by region, by type of lab (i.e., P-2-P, Nazi/Birch and One Pot, or Red Phosphorus), and by lab capacity. To assess the reliability of these data, we discussed the sources of the data with agency officials knowledgeable about them to determine data consistency and reasonableness, and we compared them with other supporting data and documentation, where available, from the states selected as case studies for this review. Reporting of lab incidents to DEA by state and local law enforcement agencies is voluntary except when DEA provides the agencies funds for lab cleanup; because DEA's cleanup funds were exhausted less than halfway through fiscal year 2011, the number of lab incidents reported for 2011 could be biased downward compared with previous years. We discussed this issue and its potential implications with the DEA officials who manage the collection of the data, as well as the steps they have taken to address it. From these efforts and discussions, we determined that the data were sufficiently reliable for the purposes of this report. To identify key factors that influenced lab seizure incident trends over time, we obtained perspectives and information on meth lab incident trends and the factors influencing them from state and local officials we interviewed in the states selected as case studies.
This nonprobability sample of states was selected to reflect a mix of characteristics, such as the type of approach chosen for controlling the sale of pseudoephedrine (PSE) products (electronic tracking or prescription-only), the length of time the approach has been in use, and the number of meth labs seized relative to the state's population size. The case study states were Iowa (electronic tracking), Kentucky (electronic tracking), Mississippi (prescription-only), Missouri (electronic tracking), Oregon (prescription-only), and Tennessee (electronic tracking). While we cannot generalize findings or results from our sample of case study states to the national level, the information from these states provided perspective on meth lab trends and the states' experiences in implementing these approaches. We also reviewed drug threat assessments and reports by the National Drug Intelligence Center (NDIC) and information from officials with DEA and the Office of National Drug Control Policy (ONDCP). We reviewed the methodology of the assessments and reports and found them sufficiently reliable to provide perspectives on meth lab incident trends and factors influencing these trends. We obtained additional information and input regarding factors that contributed to meth lab incident trends from federal, state, and local officials participating in the May 2012 conference of the National Methamphetamine and Pharmaceutical Initiative (NMPI), a national initiative funded by ONDCP.

To determine the impact of domestic meth labs on the communities affected by them, we first reviewed a variety of reports and studies on meth labs and their impacts from sources such as the Department of Justice (DOJ), DEA, the RAND Corporation, media reports, and published academic research to identify the particular areas or ways that communities are directly affected by the presence of labs.
On the basis of this review, we identified the key ways communities are affected by meth labs. These include the provision of health care to meth lab burn victims, threats and dangers posed to the welfare of children, environmental damage, and increased demand and workload for law enforcement agencies. While meth labs can affect other areas as well, such as the treatment of health-related conditions stemming from meth abuse and the demand for addiction treatment, those impacts are caused by the abuse of both imported and domestically produced meth and are not unique to meth labs. Therefore, we did not include those areas in our review. To describe the impacts on health care providers of administering care to meth lab operators injured or burned by their labs, we reviewed and synthesized information from published academic research comparing the injuries and treatment of burn victims injured in meth lab incidents with those of burn patients injured by other causes, documentation from DOJ on meth labs, and media reports on the reported impacts of meth labs on hospital burn centers. We also interviewed the director of the burn center at the Vanderbilt University Hospital in Nashville, Tennessee, to get his perspective, as the center has treated a significant number of patients burned in meth lab incidents. To describe the impacts of meth labs on child welfare, we reviewed and synthesized information from DOJ on drug-endangered children, meth lab incident data from DEA on the number of children reported to be affected by the labs, and published academic research on the impact of meth abuse on the need for foster care. To describe environmental damage caused by meth labs, we reviewed and synthesized information from DOJ on the impact of meth labs, DEA's guidance for meth lab cleanup, and a report from the DOJ Inspector General on DEA's meth lab cleanup program.
For context, we also obtained information from DEA on its clandestine lab cleanup program and the funds expended on the program from 2002 through 2011 to assist state and local law enforcement agencies in cleaning up meth labs. In addition, we obtained and analyzed information from the case study states of Mississippi, Missouri, and Oregon on any funds state agencies spent on the cleanup of meth labs. To describe the impacts of meth labs on law enforcement agencies in communities, we reviewed and synthesized information from DEA's guidance for meth lab cleanup and documentation from DOJ on meth labs, as well as information from state and local law enforcement officials we interviewed in our case study states.

To determine the impact of electronic tracking systems on domestic meth lab incidents, we analyzed DEA NSS data on the number of meth lab incidents reported from 2002 through 2011 in the three states that have implemented electronic tracking the longest (Kentucky, Oklahoma, and Tennessee) to identify any trends in lab incidents before and after the implementation of electronic tracking within those states. To examine the volume of PSE sales activity that the national electronic tracking system monitors and blocks when necessary, we obtained and reviewed PSE purchase activity data (purchases, blocks, and exceedances) for 2011 and 2012 from Appriss, the software firm that developed and manages the software program MethCheck, which is used as the operational platform for the National Precursor Log Exchange (NPLEx), the interstate electronic tracking system paid for by manufacturers of PSE products. We chose this time period because those were the most recent years for which data from multiple states were available. To assess the reliability of these data, we discussed the data with Appriss officials. From these efforts and discussions, we determined that the data were sufficiently reliable for the purposes of this report.
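The block-and-exceedance screening that tracking systems such as NPLEx perform can be sketched as a rolling-window limit check. The gram thresholds below are the federal Combat Methamphetamine Epidemic Act sales limits (3.6 grams of PSE base per day, 9 grams per 30 days); the record format and the purchase history are hypothetical:

```python
from datetime import date, timedelta

DAILY_LIMIT_G = 3.6    # federal CMEA per-day PSE sales limit
MONTHLY_LIMIT_G = 9.0  # federal CMEA 30-day PSE purchase limit

def would_block(history, purchase_date, grams):
    """Return True if adding `grams` on `purchase_date` would exceed
    either limit, given `history` as a list of (date, grams) pairs."""
    day_total = sum(g for d, g in history if d == purchase_date) + grams
    window_start = purchase_date - timedelta(days=29)
    month_total = sum(g for d, g in history
                      if window_start <= d <= purchase_date) + grams
    return day_total > DAILY_LIMIT_G or month_total > MONTHLY_LIMIT_G

history = [(date(2012, 5, 1), 3.6), (date(2012, 5, 10), 3.6)]
print(would_block(history, date(2012, 5, 20), 2.4))  # 9.6 g in 30 days -> True
print(would_block(history, date(2012, 6, 15), 2.4))  # prior buys outside window -> False
```

A real system evaluates each transaction against these limits at the point of sale, across all participating retailers, which is what generates the block and exceedance counts described above.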
To understand how electronic tracking works in practice and the limitations of this approach, we obtained information from officials with Appriss as well as officials with state and local law enforcement and the High Intensity Drug Trafficking Areas (HIDTA) program in our electronic tracking case study states of Iowa, Kentucky, Missouri, and Tennessee. For these state and local law enforcement officials, we used a snowball sampling methodology in which we initially contacted key law enforcement officials involved in addressing the meth lab problem in those states, who then identified other officials in those states for us to meet with. From these state and local law enforcement officials, we obtained information and their perspectives on the use of electronic tracking, its impact on the meth lab problem within their jurisdictions, and any potential advantages or limitations of the approach identified through their investigations and experience with the system to date. Although their perspectives cannot be generalized across the broader population of state and local law enforcement agencies in electronic tracking states, they provided insights into the use and impact of the approach in practice and its limitations.

To determine the impact of prescription-only laws on domestic meth lab incidents and any implications of adopting this approach for consumers and the health care system, we analyzed DEA NSS data on the number of meth lab incidents reported from 2002 through 2011 in the prescription-only states of Mississippi and Oregon and their border states (Alabama, Arkansas, California, Idaho, Louisiana, Nevada, Tennessee, and Washington) to identify any trends in lab incidents before and after the implementation of the prescription-only approach.
To determine the impact of the prescription-only approach on meth lab incidents in Oregon, we conducted a statistical modeling analysis of the lab incident data that controlled for other factors such as region of the country, ethnic composition of the state population, the proportion of the state population that is male, distance from the Mexican border, and the state drug arrest rate, among others. For more details on the methodology used for this analysis, see appendix III. To determine the impact of the prescription-only approach in the Missouri counties and localities that have adopted it, we also obtained and analyzed information from local officials in Missouri on how meth lab incidents have changed since the adoption of the approach within their jurisdictions. To obtain the perspective of state and local officials on the impact of the implementation of the prescription-only approach in their states and localities, we used a snowball sampling methodology in which we initially contacted key law enforcement officials involved in addressing the meth lab problem, or associations representing law enforcement, in Mississippi, Missouri, and Oregon, who then identified other officials within their states for us to meet with. We interviewed these officials to obtain their perspectives on the impact of the prescription-only approach on the meth lab problem as well as the perceived impacts, where possible, on other areas such as the demand for law enforcement, child welfare, environmental cleanup, and the trafficking of meth within their states. Although their perspectives on these impacts cannot be generalized across the broader population of state and local law enforcement agencies in prescription-only states, they provided insights into the impact of the approach in practice.
To determine the extent to which individuals in prescription-only states have been traveling to neighboring states to obtain PSE products without a prescription or have diverted PSE products obtained with a prescription, we interviewed and obtained information from local law enforcement officials in Mississippi, Missouri, and Oregon on what they have found in their investigations into meth labs and PSE smurfing. We also obtained and reviewed NPLEx data from Appriss on PSE purchases made in Washington state with identifications issued by Oregon from October 15, 2011, to the most recent full month available (August 2012). We chose the starting date of October 15, 2011, because that was the date Washington state implemented the NPLEx system statewide. To gauge the extent of PSE sales made to individuals using Oregon-issued identifications in the other states neighboring Oregon (California, Idaho, and Nevada), which had not implemented NPLEx but had retailers that used the NPLEx MethCheck software program, we obtained and reviewed the MethCheck log data on PSE purchase activity for those states for the same October 15, 2011, to August 2012 time period. For Mississippi, we obtained and reviewed NPLEx data on PSE purchase activity for purchases made with Mississippi identifications in the NPLEx states neighboring Mississippi (Alabama, Louisiana, and Tennessee) from the time those states joined NPLEx to July 2012. To determine the impact of the prescription-only approach on consumers in Mississippi, we obtained data from IMS Health Inc. through DEA on the volume of PSE sales for three 52-week periods ending in December 2009, 2010, and 2011 and analyzed the data for any changes in volume over time, comparing the 2010 and 2011 periods, when the prescription-only approach was in effect, with the 2009 period, when it was not.
To assess the reliability of the data, we reviewed documentation and information from IMS Health officials knowledgeable about the data to determine data consistency and reasonableness. From these reviews, we determined that the data were sufficiently reliable for the purposes of this report. Because data were not available for the period before Oregon implemented the prescription-only approach in 2006, we were not able to do a similar analysis for Oregon. To examine the number of prescriptions filled in Mississippi for PSE medications, we obtained and reviewed data provided by the Mississippi Board of Pharmacy's Prescription Drug Monitoring Program. To assess the reliability of the data, we discussed the data with officials who manage the program. From these efforts and discussions, we determined that the data were sufficiently reliable for the purposes of this report. To obtain additional information on the reported and estimated impacts of the prescription-only approach on consumers and the health care system, we reviewed a report on the potential impacts of the approach prepared for the Consumer Healthcare Products Association (CHPA). To help gauge the potential impact on consumers, we asked the state boards of pharmacy and the state associations representing pharmacists in Mississippi and Oregon, such as the Oregon State Pharmacists Association, about the extent to which consumers may have complained about the prescription-only approach. We also asked the National Consumers League and the Asthma and Allergy Foundation of America whether they had received feedback or complaints from consumers on the impact of the prescription-only approach. We chose these organizations because they had previously surveyed consumers about access to PSE products.
To understand the prescription-only approach's impact on the workload demands for physicians, we obtained the perspectives of state associations representing physicians practicing in Oregon and Mississippi, such as the Oregon Medical Association and the Mississippi State Medical Association, on the extent to which their members have reported an increase in demand for appointments for PSE prescriptions and any corresponding increase in their workload. While their perspectives cannot be generalized to the larger population of physicians in these states, they provided insights into the impact of the approach on their members' practices. To determine the impact of the prescription-only approach on the Medicaid programs in Mississippi and Oregon, we obtained perspectives and information from Medicaid program officials in those states on what, if any, changes the approach required of their prescription formularies and any resulting changes in program costs.

We conducted this performance audit from February 2012 to January 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

We evaluated the impact of the prescription-only pseudoephedrine requirement on domestic production of methamphetamine separately using state-level data. We chose Oregon, which implemented its prescription-only pseudoephedrine requirement in 2006. To evaluate the impact of the policy, we performed multivariate regression analyses using generalized estimating equations (GEE) to compare the trends in lab seizures reported to DEA between 2002 and 2010.
We compared the case study state with a selected group of control states using a method that improves upon the commonly used Difference-in-Differences (DD) estimation method. We estimated robust standard errors for the DD coefficients by modeling the covariance structure in the GEEs. In addition to estimating a DD model, we alternatively estimated the intervention effect by comparing the case state with a single synthetic control, using the synthetic control methods for comparative case studies following Abadie and colleagues and Nonnemaker and colleagues. These models are described in detail below.

All data were annual state-level characteristics from 2001 through 2011 taken from multiple sources. Each observation in the data represented a state for a given year between 2002 and 2011. Some factors were lagged 1 year to account for a deterrent effect and to impute data missing for a later year. Eleven states, in addition to the case state of Oregon, were excluded from the final analyses as potential controls because they had implemented policies early in the postintervention period or because they were missing data on a key covariate; they include Arkansas, California, Hawaii, Kentucky, Louisiana, Iowa, Illinois, Tennessee, Oklahoma, Mississippi, and Florida. Variables included in the analysis were similar to those controlled for in other studies on the impact of precursors.

Outcome variables: We modeled two outcome variables: the total lab seizure rate per 100,000 population and the small toxic lab (STL) seizure rate per 100,000 population. Small toxic labs are defined as labs with a capacity of 1 pound or less. Data on methamphetamine seizure incidents from the National Seizure System maintained by the Drug Enforcement Administration's El Paso Intelligence Center (DEA EPIC) were aggregated to get the number of methamphetamine lab seizures per state per year. The rates were computed using the Census annual population estimate as the denominator, multiplied by 100,000, and are expressed as the rate per 100,000 people. The rates were transformed by taking the log base 10 to approximate the normal distribution required for a linear model.

Other factors were controlled for in this model. The control variables included the following:

Client rate: The rate of substance abuse clients reported annually to the Substance Abuse and Mental Health Services Administration through the National Survey of Substance Abuse Treatment Services (N-SSATS) per 100,000 people. This factor is lagged 1 year to account for the possibility that the number of substance abuse clients has more of an impact on the future number of labs seized than on the current number. Lagging these data also allows us to make up for unavailable data in 2011. The client rate is not available for 2002; the 2001 value is used to impute it.

Region: Regional factors are expected to affect the methamphetamine problem and domestic production. Because we cannot identify or control for all of the potential regional factors that influence lab seizures, we include a set of dummy variables indicating the census division to approximate their influence. Divisions include the following: 1 = New England; 2 = Middle Atlantic; 3 = East North Central; 4 = West North Central; 5 = South Atlantic; 6 = East South Central; 7 = West South Central; 8 = Mountain; 9 = Pacific (referent category).

Demographics: Some demographic groups are more likely to use methamphetamine than others. We controlled for the demographic composition of the state population to account for potential demand for the drug. The percentages of the population that are non-Hispanic white, male, Hispanic, and under age 18 were computed annually for each state from Census intercensal population estimates.

Distance to Mexico: The approximate number of miles between the state and the nearest Mexican border city was taken from Cunningham et al. (2010).
The number of miles was included as a set of categories, with the farthest distance (1,800 miles) as the reference category. This variable attempts to account for the effect of the supply of imported methamphetamine on domestic production. Funding: The Community Oriented Policing Services (COPS) funding amount from DEA was adjusted to 2012 dollars using the Consumer Price Index and divided by 1,000 to adjust the scale of the dollar amounts. This variable controlled for law enforcement activity specific to methamphetamine lab cleanups. It also helped to adjust for a possible downward bias in the 2011 reporting because of a discontinuation of COPS funding for a portion of that year. Police: The presence of police was measured as the annual number of employed law enforcement officers as a percentage of the total population. Police data came from the Uniform Crime Report (UCR) Law Enforcement Officers Killed and Assaulted (LEOKA) data set. This factor was lagged 1 year to account for the possibility that the presence of police has a deterrent effect on the future number of labs seized. Lagging these data also allowed us to make up for unavailable data in 2011. Arrests: The drug arrest rate was measured as the number of drug arrests (UCR offense code 18) per 100,000 population. The data come from the Uniform Crime Reporting Program Data: Arrests by Age, Sex, and Race, Summarized Yearly. Data for Florida were not reported in this data set. This factor was lagged 1 year to account for the possibility that the number of drug arrests has a deterrent effect on the future number of labs seized. Lagging these data also allowed us to make up for unavailable data in 2011. While recent analyses of methamphetamine precursor laws have used relatively similar parsimonious models, our model may still be underspecified. For example, we did not control for alternative drug use.
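The data preparation described above (rate construction, log transformation, and 1-year lags) can be sketched as follows. This is an illustrative sketch in Python with made-up values and hypothetical column names; the analyses in this report were actually run in SAS and Stata, and the real data come from DEA EPIC, the Census, N-SSATS, and UCR.

```python
import numpy as np
import pandas as pd

# Illustrative two-state panel; all values are invented for the sketch.
panel = pd.DataFrame({
    "state":        ["OR", "OR", "OR", "WA", "WA", "WA"],
    "year":         [2009, 2010, 2011, 2009, 2010, 2011],
    "lab_seizures": [12, 10, 9, 31, 28, 30],
    "population":   [3.8e6, 3.83e6, 3.87e6, 6.67e6, 6.74e6, 6.82e6],
    "client_rate":  [410.0, 425.0, np.nan, 388.0, 392.0, np.nan],
})

# Outcome: lab seizures per 100,000 population, then log base 10
# to approximate the normality required for a linear model.
panel["seizure_rate"] = panel["lab_seizures"] / panel["population"] * 100_000
panel["log_rate"] = np.log10(panel["seizure_rate"])

# Lag covariates 1 year within each state: year t carries the t-1 value,
# which models a delayed (deterrent) effect and fills the missing 2011 data.
panel = panel.sort_values(["state", "year"])
panel["client_rate_lag1"] = panel.groupby("state")["client_rate"].shift(1)
```

The same groupby-and-shift pattern applies to each lagged covariate (client rate, police, arrests).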
The DD model is a regression model that compares, over time, the outcomes for a unit of analysis that has been exposed to a treatment or intervention (referred to as a case) with the outcomes of at least one unit that has not been exposed (referred to as a control). The case is exposed to the intervention at some point after the first period; the control is never exposed during the course of the study. The impact of the intervention is represented by the difference in differences. Here there are two sets of differences. The first differences are, for the case and the control respectively, the differences between average outcomes in the postintervention and preintervention periods. The second difference subtracts the control's difference between the two periods from the case's difference. It can be written as equation 1. EQ. 1: DD = (y-bar Case,post - y-bar Case,pre) - (y-bar Control,post - y-bar Control,pre). For a DD model, the data consist of one observation for each geographic unit, represented by subscript i, in each unit of time, represented by subscript j. In our analysis, each observation represents a state in each year from 2000 through 2010. Since the interventions were implemented in 2006, the preintervention period spans 2000 through 2006 and the postintervention period spans 2007 through 2010. A dummy variable indicating the postintervention period is specified; therefore, our DD model takes the form: EQ. 2: Yij = β0 + β1(Post-Intervention Dummy) + β2(Oregon) + β3(Post-Intervention Dummy*Oregon) + β4(Time) + β5(Xij) + εij, where Yij is the outcome for state i at period j, Xij is the set of control variables described above, εij is the error term, and β3, the coefficient on the interaction between the case study state and the postintervention period, is the DD estimate of the intervention effect. DD estimation has some known limitations described in the academic literature. Besley and Case (2000) describe the endogeneity of interventions, i.e., the fact that policies are made in response to the same conditions that lead to the outcome.
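With made-up cell means (not this report's estimates), EQ. 1 reduces to a single subtraction of pre/post differences:

```python
# Mean log seizure rates in each of the four cells (illustrative values).
pre_case, post_case = 0.90, 0.20   # case state, before and after 2006
pre_ctrl, post_ctrl = 0.85, 0.70   # control states, same periods

# EQ. 1: subtract the control's pre/post change from the case's change.
dd = (post_case - pre_case) - (post_ctrl - pre_ctrl)   # about -0.55
```

The regression form in EQ. 2 recovers the same quantity as the coefficient on the interaction term, while allowing covariates to be held constant.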
Heckman (2000) and Bertrand and colleagues (2004) showed that, because of serial correlation in the outcomes over time, difference-in-differences models tended to underestimate the standard error of the intervention coefficient and therefore overestimate the test statistic, leading to spurious findings of statistically significant differences between the case and control units. Abadie and colleagues (2010) argue that the selection of control units is made on the basis of subjective measures of affinity between case and control units and that there is uncertainty in the control units' ability to produce the counterfactual outcome trend that the case would have experienced had the intervention not taken place. This is an additional source of uncertainty beyond that measured by the standard error. We attempted to address these limitations in the analysis. To account for autocorrelation, we implemented this model using Generalized Estimating Equations (GEE) in SAS Proc Genmod with a repeated statement specifying the compound symmetry covariance structure to account for the autocorrelation across time periods within each state. The covariance structure was determined by examining the working correlation matrix estimated when specifying an unstructured covariance structure and by comparing the quasi-likelihood information criterion (QIC) statistics for models specifying five different covariance structures: independence, compound symmetry, first-order autocorrelation, unstructured, and 1-dependent. The unstructured covariance structure allows the correlations to differ for each pair of time points without any specific pattern. The unstructured working correlation matrix indicated high, constant correlation over time. Since the correlations seemed constant, a compound symmetry structure is more appropriate.
The QIC values for the GEE models were similar with the independence, compound symmetry, and autocorrelation structures specified, but the QIC was usually lowest for the independence structure, with the autoregressive structure next; lower QIC values indicate better model fit. Independence of the measures across time is not a logical assumption given the nature of the data, and the correlation matrix estimated under the unstructured covariance structure does not show the declining correlations over time implied by the autoregressive structure. Taken together with the similarity of the QIC values, these considerations support our choice of a compound symmetry covariance structure. We validated our model findings using the synthetic control method. The synthetic control method introduced by Abadie and colleagues (2010) is a modification of the DD method that creates a data-driven synthetic control representing the counterfactual of the case in the absence of the intervention. The synthetic control method has two advantages. It allows for transparency and objectivity in the selection of controls. It also safeguards against extrapolation of the counterfactual by constructing the synthetic control to match the case closely in the preintervention period. We implemented the synthetic control method in Stata using the synth ado program. The program uses the set of control states to create a synthetic version of the case study state by weighting the control states. The treated and synthetic control states are matched on the outcome and any combination of covariates in the preintervention period so that the mean squared error of the prediction variables is minimized. The model then interpolates the trajectory of the synthetic state over the postintervention period, assuming that the intervention was not implemented. In preliminary analyses, we tested the robustness of the model by matching the case and synthetic control states on the outcomes alone and on the outcomes plus all covariates controlled in the GEE models.
All results presented here are based on a model matching on the outcome and most covariates controlled in the GEE models. The synthetic control method does not generate a simple test statistic to determine whether the difference between the case study and synthetic control states is statistically significant. To test whether results like Oregon's are likely to be found by chance, we reran the model assigning each of Oregon's neighboring states that met our criteria for inclusion as controls (Washington, Idaho, and Nevada) as the case study state in turn and allowed the model to generate a synthetic control, to compare what would have happened relative to the experience in each of those states. If the results were found to be similar to Oregon's, then we could not dismiss the possibility that our findings for Oregon were due to chance. The prescription-only requirement had significant impacts on lab seizure rates compared with a selected group of controls. Contrary to the findings in Cunningham et al. (2012) and Strauberg and Sharma (2012), our analysis found that the lab seizure rate in Oregon fell by more than 90 percent after the prescription-only requirement was implemented, after adjusting for other factors. While 90 percent seems very high, the estimate should be considered in the context that the rate had been declining and was relatively low before the policy was implemented. The impact of the prescription-only requirement was validated when the case study state was compared with an empirically generated synthetic control. The synthetic control method confirmed the direction of the impact in Oregon. Our placebo analysis that assigned Oregon's neighboring states as case study states showed that the reductions seen in Oregon were not projected in those states, giving some indication that the Oregon reduction was not found by chance. We cannot determine the extent of the impact using the synthetic control method because of the poor fit of the model in the period prior to the policy's implementation.
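The weight-construction step at the core of the synthetic control method can be illustrated as a constrained least-squares problem: choose nonnegative control weights summing to 1 so that the weighted controls track the case's preintervention path. This is a simplified sketch with illustrative numbers (the Stata synth program also weights predictors by their predictive importance):

```python
import numpy as np
from scipy.optimize import minimize

# Preintervention outcome paths (illustrative): x1 is the case state,
# and each column of x0 is one of three candidate control states.
x1 = np.array([1.2, 1.1, 1.0, 0.9, 0.8])
x0 = np.array([[1.3, 1.0, 0.7],
               [1.2, 0.9, 0.7],
               [1.1, 0.8, 0.6],
               [1.0, 0.7, 0.5],
               [0.9, 0.6, 0.4]])

def mse(w):
    # Squared distance between the case path and the weighted control paths.
    return float(np.sum((x1 - x0 @ w) ** 2))

n = x0.shape[1]
fit = minimize(mse, np.full(n, 1 / n), method="SLSQP",
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
w = fit.x                  # nonnegative weights summing to 1
synthetic = x0 @ w         # the synthetic control's preintervention path
```

The placebo exercise described above amounts to repeating this fit with each neighboring state treated as the case and comparing the resulting postintervention gaps.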
Our analysis differs from the two recent studies cited above in methodology, including the analytical approach and model specification, and in the date on which the incident data were pulled. The key finding from the GEE model is the coefficient on the interaction between the case study state and the postintervention period indicators. Since the outcome data were transformed to improve the model fit, we back-transformed the coefficients for ease of interpretation. Four estimates are presented in Table 5, representing two model specifications applied to each of the two outcomes described above: the lab seizure rate including labs of all capacities and the small toxic lab seizure rate. The unadjusted model controls only for the policy, state, and time effects and the interaction between the case study state and the postintervention period indicators. The adjusted model controls for those factors and for all covariates described above. The unadjusted impacts are interpreted as the percent change in the rate resulting from the implementation of the policy, adjusting only for temporal factors. Adjusted impacts are interpreted as the percent change in the rate resulting from the implementation of the requirement after controlling for other factors that may also affect the change in the seizure rate. Impacts are determined to be statistically significant if the p-value is less than 0.05. The key finding from the synthetic control model is the difference in the estimated lab seizure rate in the years after 2006 between the case study state and the synthetic control. Differences in the postintervention period can be attributed to the impact of the policy when the two match closely in the preintervention period.
Since the states did not always have a close match in the preintervention period and the model does not generate a test statistic to indicate whether the differences between the case study and synthetic control are statistically significant, we do not present numerical results indicating the size of the impact of the policy from this analysis; instead, we used the results to validate the direction of the findings of the GEE models. In addition to the contact named above, Kirk Kiester (Assistant Director), Charles Bausell, Rochelle R. Burns, Willie Commons III, Yvette Gutierrez-Thomas, Michele C. Fejfar, Christopher Hatscher, Eric Hauswirth, Eileen Larence, Linda S. Miller, Jessica Orr, and Monique Williams made significant contributions to the work.

Meth can be made by anyone using easily obtainable household goods and consumer products in labs, posing significant public safety and health risks and financial burdens to local communities and states where the labs are found. Meth cooks have discovered new, easier ways to make more potent meth that require the use of precursor chemicals such as PSE. Some states have implemented electronic tracking systems that can be used to track PSE sales and determine if individuals comply with legal PSE purchase limits. Two states, along with select localities in another state, have made products containing PSE available to consumers by prescription only. GAO was asked to review issues related to meth. Thus, GAO examined, among other things, (1) the trends in domestic meth lab incidents over the last decade; (2) the impact of electronic tracking systems on meth lab incidents and limitations of this approach, if any; and (3) the impact of prescription-only laws on meth lab incidents and any implications of adopting this approach for consumers and the health care system. GAO analyzed data on meth lab incidents and on PSE product sales and prescriptions.
GAO also reviewed studies and drug threat assessments and interviewed state and local officials from six states that had implemented these approaches. These states were selected on the basis of the type of approach chosen, the length of time the approach had been in use, and the number of meth lab incidents. The observations from these states are not generalizable but provided insights into how the approaches worked in practice. Methamphetamine (meth) lab incidents--seizures of labs, dumpsites, chemicals, and glassware--declined following state and federal sales restrictions on pseudoephedrine (PSE), an ingredient commonly found in over-the-counter cold and allergy medications, but they rose again after changes in the methods of acquiring PSE and of producing meth. According to Drug Enforcement Administration (DEA) data, the number of lab incidents nationwide declined through 2007 after the implementation of state and federal regulations on PSE product sales, which started in 2004. The number of meth lab incidents reported nationally increased after 2007, a trend primarily attributed to (1) the emergence of a new technique for smaller-scale production and (2) a new method called smurfing--a technique used to obtain large quantities of PSE by recruiting groups of individuals to purchase the legally allowable amount of PSE products at multiple stores; the purchases are then aggregated for meth production. Electronic tracking systems help enforce PSE sales limits, but they have not reduced meth lab incidents and have limitations related to smurfing. By electronically automating and linking log-book information on PSE sales, these systems can block individuals from purchasing more than allowed by law. In addition, electronic tracking systems can help law enforcement investigate potential PSE diversion, find meth labs, and prosecute individuals.
However, meth cooks have been able to limit the effectiveness of such systems as a means to reduce diversion through the practice of smurfing. The prescription-only approach for PSE appears to have contributed to reductions in lab incidents, with unclear impacts on consumers and limited impacts on the health care system. The implementation of prescription-only laws by Oregon and Mississippi was followed by declines in lab incidents. Law enforcement officials in Oregon and Mississippi attribute this reduction in large part to the prescription-only approach. Prescription-only status appears to have reduced overall demand for PSE products, but overall welfare impacts on consumers are unclear because of the lack of data, such as the cost of obtaining prescriptions. On the basis of the limited information available from health care providers in Oregon and Mississippi, there has not been a substantial increase in workload demands to provide and dispense prescriptions for PSE products.
On November 19, 2002, pursuant to ATSA, TSA began a 2-year pilot program at 5 airports using private screening companies to screen passengers and checked baggage. In 2004, at the completion of the pilot program, and in accordance with ATSA, TSA established the SPP, whereby any airport authority, whether involved in the pilot or not, could request a transition from federal screeners to private, contracted screeners. All of the 5 pilot airports that applied were approved to continue as part of the SPP, and since its establishment, 21 additional airport applications have been accepted by the SPP. In March 2012, TSA revised the SPP application to reflect requirements of the FAA Modernization Act, enacted in February 2012. Among other provisions, the act provides that, not later than 120 days after the date of receipt of an SPP application submitted by an airport operator, the TSA Administrator must approve or deny the application. The TSA Administrator shall approve an application if approval would not (1) compromise security, (2) detrimentally affect the cost-efficiency of the screening of passengers or property at the airport, or (3) detrimentally affect the effectiveness of the screening of passengers or property at the airport. Within 60 days of a denial, TSA must provide the airport operator, as well as the Committee on Commerce, Science, and Transportation of the Senate and the Committee on Homeland Security of the House of Representatives, a written report that sets forth the findings that served as the basis of the denial, the results of any cost or security analysis conducted in considering the application, and recommendations on how the airport operator can address the reasons for denial. All commercial airports are eligible to apply to the SPP. To apply, an airport operator must complete the SPP application and submit it to the SPP Program Management Office (PMO), as well as to the FSD for its airport, by mail, fax, or e-mail.
Figure 1 illustrates the SPP application process. Although TSA provides all airports with the opportunity to apply for participation in the SPP, authority to approve or deny the application resides in the discretion of the TSA Administrator. According to TSA officials, in addition to the cost-efficiency and effectiveness considerations mandated by the FAA Modernization Act, there are many other factors that are weighed in considering an airport's application for SPP participation. For example, the potential impacts of any upcoming projects at the airport are considered. Once an airport is approved for SPP participation and a private screening contractor has been selected by TSA, the contract screening workforce assumes responsibility for screening passengers and their property and is required to adhere to the same security regulations, standard operating procedures, and other TSA security requirements followed by federal screeners at non-SPP airports. Since our December 2012 report, TSA has developed guidance to assist airport operators in completing their SPP applications, as we recommended. In December 2012, we reported that TSA had developed some resources to assist SPP applicants, but it had not provided guidance on its application and approval process to assist airports. As originally implemented in 2004, the SPP application process required only that an interested airport operator submit an application stating its intention to opt out of federal screening as well as its reasons for wanting to do so. In 2011, TSA revised its SPP application to reflect the "clear and substantial advantage" standard announced by the Administrator in January 2011. Specifically, TSA requested that the applicant explain how private screening at the airport would provide a clear and substantial advantage to TSA's security operations.
At that time, TSA did not provide written guidance to airports to assist them in understanding what would constitute a "clear and substantial advantage to TSA security operations" or TSA's basis for determining whether an airport had met that standard. As previously noted, in March 2012 TSA again revised the SPP application in accordance with provisions of the FAA Modernization Act, which became law in February 2012. Among other things, the revised application no longer included the "clear and substantial advantage" question, but instead included questions that requested applicants to discuss how participating in the SPP would not compromise security at the airport and to identify potential areas where cost savings or efficiencies may be realized. In December 2012, we reported that while TSA provided general instructions for filling out the SPP application as well as responses to frequently asked questions (FAQ), the agency had not issued guidance to assist airports with completing the revised application nor explained to airports how it would evaluate applications given the changes brought about by the FAA Modernization Act. For example, neither the application instructions nor the FAQs addressed TSA's SPP application evaluation process or its basis for determining whether an airport's entry into the SPP would compromise security or affect cost-efficiency and effectiveness. Further, we found that airport operators who completed the applications generally stated that they faced difficulties in doing so and that additional guidance would have been helpful. For example, one operator stated that he needed cost information to help demonstrate that his airport's participation in the SPP would not detrimentally affect the cost-efficiency of the screening of passengers or property at the airport and that he believed not presenting this information would be detrimental to his airport's application.
However, TSA officials at the time said that airports do not need to provide this information to TSA because, as part of the application evaluation process, TSA conducts a detailed cost analysis using historical cost data from SPP and non-SPP airports. The absence of cost and other information in an individual airport's application, TSA officials noted, would not materially affect the TSA Administrator's decision on an SPP application. Therefore, we reported in December 2012 that while TSA had approved all applications submitted since enactment of the FAA Modernization Act, it was difficult to determine how many more airports, if any, would have applied to the program had TSA provided application guidance and information to improve transparency of the SPP application process. Specifically, we reported that in the absence of such application guidance and information, it may be difficult for airport officials to evaluate whether their airports are good candidates for the SPP or determine what criteria TSA uses to accept and approve airports' SPP applications. Further, we concluded that clear guidance for applying to the SPP could improve the transparency of the application process and help ensure that the existing application process is implemented in a consistent and uniform manner. Thus, we recommended that TSA develop guidance that clearly (1) states the criteria and process that TSA is using to assess whether participation in the SPP would compromise security or detrimentally affect the cost-efficiency or the effectiveness of the screening of passengers or property at the airport, (2) states how TSA will obtain and analyze cost information regarding screening cost-efficiency and effectiveness and the implications of not responding to the related application questions, and (3) provides specific examples of additional information airports should consider providing to TSA to help assess an airport's suitability for the SPP.
TSA concurred with our recommendation and has taken actions to address it. Specifically, TSA updated its SPP website in December 2012 by providing (1) general guidance to assist airports with completing the SPP application and (2) a description of the criteria and process the agency will use to assess airports' applications to participate in the SPP. While the guidance states that TSA has no specific expectations of the information an airport could provide that may be pertinent to its application, it provides some examples of information TSA has found useful and that airports could consider providing to TSA to help assess their suitability for the program. Further, the guidance, in combination with the description of the SPP application evaluation process, outlines how TSA plans to analyze and use cost information regarding screening cost-efficiency and effectiveness. The guidance also states that providing cost information is optional and that not providing such information will not affect the application decision. We believe that these actions address the intent of our recommendation and should help improve transparency of the SPP application process as well as help airport officials determine whether their airports are good candidates for the SPP. In our December 2012 report, we analyzed screener performance data for four measures and found that there were differences in performance between SPP and non-SPP airports, and those differences could not be exclusively attributed to the use of either federal or private screeners. The four measures we selected to compare screener performance at SPP and non-SPP airports were Threat Image Projection (TIP) detection rates, recertification pass rates, Aviation Security Assessment Program (ASAP) test results, and Presence, Advisement, Communication, and Execution (PACE) evaluation results (see table 1).
For each of these four measures, we compared the performance of each of the 16 airports then participating in the SPP with the average performance for each airport’s category (X, I, II, III, or IV), as well as the national performance averages for all airports for fiscal years 2009 through 2011. As we reported in December 2012, on the basis of our analyses, we found that, generally, certain SPP airports performed slightly above the airport category and national averages for some measures, while others performed slightly below. For example, SPP airports performed above their respective airport category averages for recertification pass rates in the majority of instances, while the majority of SPP airports that took PACE evaluations in 2011 performed below their airport category averages. For TIP detection rates, SPP airports performed above their respective airport category averages in about half of the instances. However, we also reported in December 2012 that the differences we observed in private and federal screener performance cannot be entirely attributed to the type of screeners at an airport, because, according to TSA officials and other subject matter experts, many factors, some of which cannot be controlled for, affect screener performance. These factors include, but are not limited to, checkpoint layout, airline schedules, seasonal changes in travel volume, and type of traveler. We also reported in December 2012 that TSA collects data on several other performance measures but, for various reasons, the data cannot be used to compare private and federal screener performance for the purposes of our review. For example, passenger wait time data could not be used because we found that TSA’s policy for collecting wait times changed during the time period of our analyses and that these data were not collected in a consistent manner across all airports. 
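The comparisons described above (each SPP airport against its category average and the national average) can be sketched with hypothetical data; none of these airports or rates are TSA's:

```python
import pandas as pd

# Hypothetical screener performance data; "category" stands in for the TSA
# airport category (X, I, II, III, or IV) and "spp" flags SPP airports.
perf = pd.DataFrame({
    "airport":   ["A", "B", "C", "D", "E", "F"],
    "category":  ["I", "I", "I", "II", "II", "II"],
    "spp":       [True, False, False, True, False, False],
    "pass_rate": [97.0, 95.0, 96.0, 93.0, 94.0, 95.0],
})

# Compare each SPP airport with its category average and the national average.
perf["category_avg"] = perf.groupby("category")["pass_rate"].transform("mean")
perf["vs_category"] = perf["pass_rate"] - perf["category_avg"]
perf["vs_national"] = perf["pass_rate"] - perf["pass_rate"].mean()
spp_view = perf.loc[perf["spp"], ["airport", "vs_category", "vs_national"]]
```

A positive difference marks performance above the category or national average; in practice such raw gaps would still need adjustment for the confounding factors noted below.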
We also considered reviewing human capital measures such as attrition, absenteeism, and injury rates, but did not analyze these data because TSA's Office of Human Capital does not collect these data for SPP airports. We reported that while the contractors collect and report this information to the SPP PMO, TSA does not validate the accuracy of the self-reported data, nor does it require contractors to use the same human capital measures as TSA; accordingly, differences may exist in how the metrics are defined and how the data are collected. Therefore, we found that TSA could not guarantee that a comparison of SPP and non-SPP airports on these human capital metrics would be an equivalent comparison. Since our December 2012 report, TSA has developed a mechanism to regularly monitor private versus federal screener performance, as we recommended. In December 2012, we reported that while TSA monitored screener performance at all airports, the agency did not monitor private screener performance separately from federal screener performance or conduct regular reviews comparing the performance of SPP and non-SPP airports. Beginning in April 2012, TSA introduced a new set of performance measures to assess screener performance at all airports (both SPP and non-SPP) in its Office of Security Operations Executive Scorecard (the Scorecard). Officials told us at the time of our December 2012 review that they provided the Scorecard to FSDs every 2 weeks to assist the FSDs with tracking performance against stated goals and with determining how performance of the airports under their jurisdiction compared with national averages. According to TSA, the 10 measures used in the Scorecard were selected based on input from FSDs and regional directors on the performance measures that most adequately reflected screener and airport performance.
Performance measures in the Scorecard included the TIP detection rate and the number of negative and positive customer contacts made to the TSA Contact Center through e-mails or phone calls per 100,000 passengers screened, among others. We also reported in December 2012 that TSA had conducted or commissioned prior reports comparing the cost and performance of SPP and non-SPP airports. For example, in 2004 and 2007, TSA commissioned reports prepared by private consultants, while in 2008 the agency issued its own report comparing the performance of SPP and non-SPP airports. Generally, these reports found that SPP airports performed at a level equal to or better than non-SPP airports. However, TSA officials stated at the time that they did not plan to conduct similar analyses in the future; instead, they were using across-the-board mechanisms covering both private and federal screeners, such as the Scorecard, to assess screener performance across all commercial airports. In addition to using the Scorecard, we found that TSA conducted monthly contractor performance management reviews (PMR) at each SPP airport to assess the contractor's performance against the standards set in each SPP contract. The PMRs included 10 performance measures, including some of the same measures included in the Scorecard, such as TIP detection rates and recertification pass rates, for which TSA establishes acceptable quality levels of performance. Failure to meet the acceptable quality levels of performance could result in corrective actions or termination of the contract. However, as we reported in December 2012, the Scorecard and PMR did not provide a complete picture of screener performance at SPP airports because, while both mechanisms provided a snapshot of private screener performance at each SPP airport, this information was not summarized for the SPP as a whole or across years, which made it difficult to identify changes in performance.
Further, neither the Scorecard nor the PMR provided information on performance in prior years or controlled for variables that TSA officials explained to us were important when comparing private and federal screener performance, such as the type of X-ray machine used for TIP detection rates. We concluded that monitoring private screener performance in comparison with federal screener performance was consistent with the statutory requirement that TSA enter into a contract with a private screening company only if the Administrator determines and certifies to Congress that the level of screening services and protection provided at an airport under a contract will be equal to or greater than the level that would be provided at the airport by federal government personnel. Therefore, we recommended that TSA develop a mechanism to regularly monitor private versus federal screener performance, which would better position the agency to know whether the level of screening services and protection provided at SPP airports continues to be equal to or greater than the level provided at non-SPP airports. TSA concurred with our recommendation, and has taken actions to address it. Specifically, in January 2013, TSA issued its first SPP Annual Report. The report highlights the accomplishments of the SPP during fiscal year 2012 and provides an overview and discussion of private versus federal screener cost and performance. The report also describes the criteria TSA used to select certain performance measures and reasons why other measures were not selected for its comparison of private and federal screener performance. The report compares the performance of SPP airports with the average performance of airports in their respective category, as well as the average performance for all airports, for three performance measures: TIP detection rates, recertification pass rates, and PACE evaluation results.
Further, in September 2013, the TSA Assistant Administrator for Security Operations signed an operations directive that provides internal guidance for preparing the SPP Annual Report, including the requirement that the SPP PMO must annually verify that the level of screening services and protection provided at SPP airports is equal to or greater than the level that would be provided by federal screeners. We believe that these actions address the intent of our recommendation and should better position TSA to determine whether the level of screening services and protection provided at SPP airports continues to be equal to or greater than the level provided at non-SPP airports. Further, these actions could also assist TSA in identifying performance changes that could lead to improvements in the program and inform decision making regarding potential expansion of the SPP. Chairman Mica, Ranking Member Connolly, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For questions about this statement, please contact Jennifer Grover at (202) 512-7141 or GroverJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Glenn Davis (Assistant Director), Stanley Kostyla, Brendan Kretzschmar, Thomas Lombardi, Erin O’Brien, and Jessica Orr. Key contributors for the previous work that this testimony is based on are listed in the product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
TSA maintains a federal workforce to screen passengers and baggage at the majority of the nation's commercial airports, but it also oversees a workforce of private screeners at airports that participate in the SPP. The SPP allows commercial airports to apply to have screening performed by private screeners, who are to provide a level of screening services and protection that equals or exceeds that of federal screeners. In recent years, TSA's SPP has evolved to incorporate changes in policy and federal law, prompting enhanced interest in measuring screener performance. This testimony addresses the extent to which TSA (1) has provided guidance to airport operators for the SPP application process and (2) assesses and monitors the performance of private and federal screeners. This statement is based on a report GAO issued in December 2012 and selected updates conducted in January 2014. To conduct the selected updates, GAO reviewed documentation, such as the SPP Annual Report issued in January 2013, and interviewed agency officials on the status of implementing GAO's recommendations. Since GAO reported on this issue in December 2012, the Transportation Security Administration (TSA) has developed application guidance for airport operators applying to the Screening Partnership Program (SPP). In December 2012, GAO reported that TSA had not provided guidance to airport operators on its application and approval process, which had been revised to reflect requirements in the Federal Aviation Administration Modernization and Reform Act of 2012. Further, airport operators GAO interviewed at the time generally stated that they faced difficulties completing the revised application, such as determining how to obtain cost information. Therefore, GAO recommended that TSA develop application guidance, and TSA concurred.
To address GAO's recommendation, TSA updated its SPP website in December 2012 by providing general application guidance and a description of the criteria and process the agency uses to assess airports' SPP applications. The guidance provides examples of information that airports could consider providing to TSA to help assess their suitability for the program and also outlines how the agency will analyze cost information. The new guidance addresses the intent of GAO's recommendation and should help improve transparency of the SPP application process as well as help airport operators determine whether their airports are good candidates for the SPP. TSA has also developed a mechanism to regularly monitor private versus federal screener performance. In December 2012, GAO found differences in performance between SPP and non-SPP airports based on its analysis of screener performance data. However, while TSA had conducted or commissioned prior reports comparing the performance of SPP and non-SPP airports, TSA officials stated at the time that they did not plan to conduct similar analyses in the future, and instead stated that they were using across-the-board mechanisms to assess screener performance across all commercial airports. In December 2012, GAO found that these across-the-board mechanisms did not summarize information for the SPP as a whole or across years, which made it difficult to identify changes in private screener performance. GAO concluded that monitoring private screener performance in comparison with federal screener performance was consistent with the statutory provision authorizing TSA to enter into contracts with private screening companies and recommended that TSA develop a mechanism to regularly monitor private versus federal screener performance. TSA concurred with the recommendation. To address GAO's recommendation, in January 2013, TSA issued its first SPP Annual Report, which provides an analysis of private versus federal screener performance. 
Further, in September 2013, a TSA Assistant Administrator signed an operations directive that provides internal guidance for preparing the SPP Annual Report, including the requirement that the report annually verify that the level of screening services and protection provided at SPP airports is equal to or greater than the level that would be provided by federal screeners. These actions address the intent of GAO's recommendation and could assist TSA in identifying performance changes that could lead to improvements in the program. GAO is making no new recommendations in this statement.
The mission of FAA, as a DOT agency, is to provide the safest, most efficient aerospace system in the world. To fulfill its mission, FAA must rely on an extensive use of technology, including many software-intensive systems. FAA depends on the adequacy and reliability of the nation’s ATC system, which comprises a vast network of radars; automated data processing, navigation, and communications equipment; and ATC facilities. Through this system, FAA provides services such as controlling takeoffs and landings and managing the flow of traffic between airports. FAA is organized into several staff support offices and five lines of business, which include Airports, Aviation Safety, Commercial Space Transportation, the Office of Security and Hazardous Materials, and the newly formed ATO. The ATO was formed in February 2004 to, among other things, improve the provision of air traffic services and accelerate modernization efforts. To create the ATO, FAA combined its Research and Acquisition and Air Traffic Services organizations into one performance-based organization, bringing together those who acquire systems and those who use them, respectively. The ATO is led by FAA’s chief operating officer, consists of 10 service units, and has 36,000 of FAA’s 48,000 employees. The ATO is the principal FAA organizational unit responsible for acquiring ATC systems through the use of the agency’s Acquisition Management System (AMS). Because FAA contended that some of its modernization problems were caused by federal acquisition regulations, Congress enacted legislation in November 1995 that exempted the agency from most federal procurement laws and regulations and directed FAA to develop and implement a new acquisition management system that would address the unique needs of the agency. In April 1996, FAA implemented AMS.
AMS was intended to reduce the time and cost of fielding new system acquisitions by introducing (1) a new investment system that spans the life cycle of an acquisition, (2) a new procurement system that provides flexibility in selecting and managing contractors, and (3) organizational and human capital reforms that support the new acquisition system. AMS provides high-level acquisition policy and guidance for selecting and controlling ATC system acquisitions through all phases of the acquisition life cycle, which is organized into a series of phases and decision points that include (1) mission analysis, (2) investment analysis, (3) solution implementation, and (4) in-service management. To select system acquisitions, FAA has two processes--mission analysis and investment analysis--that together constitute a set of policies and procedures, as well as guidance, that enhance the agency’s ability to screen system acquisitions submitted for funding. Also through these two processes, FAA assesses and ranks each system acquisition according to its relative costs, benefits, risks, and contribution to FAA’s mission; a senior, corporate-level decision-making group then selects system acquisitions for funding. After a system acquisition has been selected, FAA officials are required to formally establish the life-cycle cost, schedule, benefits, and performance targets--known as acquisition program baselines--which are used to monitor the status of the system acquisition throughout the remaining phases of its life cycle. Through its NAS modernization program, FAA is upgrading and replacing ATC facilities and equipment to help improve the system’s safety, efficiency, and capacity. These systems involve improvement in the areas of automation, communication, navigation and landing, surveillance, and weather to support the following five phases of flight (see fig. 1): Preflight -- The pilot performs flight checks and the aircraft is pushed back from the gate.
For preflight, we looked at Collaborative Decision Making (CDM) and OASIS. Airport Surface -- The aircraft taxis to the runway for takeoff or, after landing, to the destination gate to park at the terminal. For airport surface, we examined the Airport Surface Detection Equipment-Model X (ASDE-X). Terminal Departure -- The aircraft lifts off the ground and climbs to a cruising altitude. For terminal departure, we examined the following systems: Airport Surveillance Radar (ASR-11), Integrated Terminal Weather System (ITWS), Local Area Augmentation System (LAAS), Standard Terminal Automation Replacement System (STARS), and Traffic Management Advisor (TMA). En route/Oceanic -- The aircraft travels through one or more center airspaces and approaches the destination airport. For en route and oceanic, we examined the following systems: Air Traffic Control Radar Beacon Interrogator-Replacement (ATCBI-6), Advanced Technologies and Oceanic Procedures (ATOP), Controller-Pilot Data Link Communications (CPDLC), and User Request Evaluation Tool (URET). Terminal Arrival -- The pilot lowers, maneuvers, aligns, and lands the aircraft on the destination airport’s designated landing runway. For terminal arrival, we looked at the systems already listed under terminal departure: ASR-11, ITWS, LAAS, STARS, and TMA. In addition, for the major ATC systems that support multiple phases of flight, we examined the following systems: En Route Communications Gateway (ECG), En Route Automation Modernization (ERAM), Next-Generation Air-to-Ground Communication (NEXCOM), and Wide Area Augmentation System (WAAS). Furthermore, for major ATC systems that support NAS infrastructure, we examined FAA Telecommunications Infrastructure (FTI) and NAS Infrastructure Management System (NIMS)--Phase Two. (See app. I for additional information on these 16 systems.)
For more than two decades, FAA has experienced cost growth, schedule extensions, and/or performance problems in acquiring major systems under its ATC modernization program and has been on our list of high-risk programs since 1995. For example, 13 of the 16 major system acquisitions we reviewed in detail continue to experience cost, schedule, and/or performance shortfalls when assessed against their original baselines. The three other major system acquisitions that we reviewed in detail are currently operating within their original cost, schedule, and performance targets, but are experiencing challenges symptomatic of past problems. Of the remaining 39 system acquisitions within the ATC modernization program, few have had problems meeting cost and schedule targets. However, the ATO made progress during its first year of operation by meeting its acquisition goal for fiscal year 2004. Thirteen of the 16 major system acquisitions that we reviewed in detail for this engagement under the ATC modernization program have continued to experience cost growth, schedule delays, and/or performance problems when assessed against their original performance targets (see table 1). These major system acquisitions had total cost growth ranging from $1.1 million to about $1.5 billion over their original cost targets. In addition, these systems required extensions in their initial deployment schedules ranging from 1 to 13 years. Furthermore, several systems experienced safety-related performance problems. For 12 of the 13 major system acquisitions we reviewed in detail with cost, schedule, and performance shortfalls, one or more of the following four key factors contributed to these shortfalls: (1) The funding level received was less than called for in agency planning documents. 
Most major ATC system acquisitions have cost, schedule, and performance baselines that are approved by FAA’s Joint Resources Council--the agency’s body responsible for approving and overseeing major system acquisitions. Each baseline includes annual funding levels that the council agrees are needed for a system acquisition to meet its cost, schedule, and/or performance targets. The estimated cost for a given year assumes that the program received all funding for prior fiscal years as described in the baseline. In practice, however, this is not always the case. For example, when FAA’s budget level does not allow all system acquisitions to be fully funded at the levels approved in their baselines, FAA may elect to fully fund higher-priority acquisitions and provide less funding for lower-priority acquisitions than called for in their baselines. When a system acquisition does not receive the annual funding levels called for in its baseline, its ability to meet cost, schedule, and/or performance targets can be jeopardized, for example, by requiring the agency to defer funding for essential development or deployment activities until sufficient funding becomes available, which, in turn, could require FAA to maintain costly legacy systems until a new system is deployed. Receiving less funding than the agency approved for a given acquisition was a factor contributing to the inability of 8 of the 16 major system acquisitions we reviewed in detail to meet their cost, schedule, and/or performance targets. The ASR-11 acquisition, a digital radar system, illustrates how reduced funding has resulted in schedule delays. FAA officials stated that because of funding reductions and reprogramming, the program received $46.45 million less than requested for fiscal years 2004 and 2005, and program officials plan to request that the program’s deployment schedule be extended to 2013.
According to FAA officials, in general, schedules for system acquisitions may slip under such circumstances (e.g., the rate of software development may be reduced and planned hardware and software deployments may be delayed). The ATO’s chief operating officer testified in April 2005 that receiving multiyear rather than annual funding from Congress for system acquisitions would help FAA to address this problem by providing funding stability for system acquisitions. In addition, according to a senior DOT official, 50 percent of cost growth is a result of an unstable funding stream. (2) The system acquisition experienced requirements growth and/or unplanned work. Requirements that are inadequate or poorly defined prior to developing a system may contribute to the inability of system acquisitions to meet their original cost, schedule, and/or performance targets. In addition, unplanned development work can occur when the agency misjudges the extent to which commercial-off-the-shelf (COTS)/ nondevelopmental item (NDI) solutions, such as those procured by another agency, will meet FAA’s needs. Requirements growth and/or unplanned work contributed to the inability of 7 of the 16 major system acquisitions we reviewed in detail to meet their cost, schedule, and/or performance targets. (3) Stakeholders were not sufficiently involved in design and development: Insufficient involvement of relevant stakeholders, such as air traffic controllers and maintenance technicians, throughout the development and approval processes for a system acquisition can lead to costly changes in requirements and unplanned work late in the development process. Not involving stakeholders sufficiently contributed to the inability of 4 of the 16 major system acquisitions to meet their cost, schedule, and/or performance targets. (4) The complexity of software development was underestimated. 
Underestimating the complexity of developing software for system acquisitions or the difficulty of modifying available software to fulfill FAA’s mission needs may contribute to unexpected software development, higher costs, and schedule delays. Underestimation contributed to the inability of 3 of the 16 major system acquisitions we reviewed in detail to meet their cost, schedule, and/or performance targets. (See table 2.) Several of the 16 major systems acquisitions we reviewed in detail effectively illustrate how these four factors can interact to contribute to cost growth, schedule extensions, and performance problems. For example, for WAAS, a precision approach and landing system augmented by satellites, two of the four key factors came into play: underestimation of software complexity and insufficient stakeholder involvement. Specifically, FAA underestimated the complexity of the software that would be needed to support this system when it accelerated the implementation of performance targets, which included moving up the commissioning of WAAS by 3 years. FAA originally planned to commission WAAS by 2000; however, at the urging of government and aviation industry groups in the 1990s, it decided to change the commissioning date to 1997. FAA then tried to develop, test, and deploy WAAS within 28 months, although the software development alone was expected to take 24 to 28 months. In retrospect, FAA acknowledged that the agency’s in-house technical expertise was not sufficient to address WAAS’s technical challenges and that expert stakeholders should have been involved earlier. Although WAAS was being developed by an integrated product team that included representatives from several FAA offices, the team did not effectively resolve problems in meeting a required performance capability—that pilots be warned in a timely manner when a system may be giving them potentially misleading and therefore hazardous information. 
Consequently, in 2000, FAA convened a panel of expert stakeholders to help it meet this requirement. These actions resulted in unplanned work and contributed to the rise in WAAS’s cost from the original estimate of $509 million in 1994 to $2.036 billion in 2005, and to a 6-year extension in its commissioning date. According to FAA, adding 6 years to the program’s life cycle also contributed to increased costs. Another example involves STARS, a joint program of FAA and DOD that replaced outdated monochromatic controller workstation monitors with multicolor monitors in ATC facilities. While joint FAA and DOD acquisitions offer the opportunity to leverage federal resources, in the case of STARS, the interaction of insufficient stakeholder involvement and subsequent unplanned work contributed to cost growth and schedule extensions. Specifically, FAA and DOD decided to acquire COTS equipment, rather than developing a new system. This strategy envisioned immediately deploying STARS to the highest priority ATC facilities and making further improvements later, thereby avoiding the increasing cost of maintaining the legacy system. However, this strategy provided for only limited evaluation by FAA and DOD controllers and maintenance technicians during the system’s development phase, although these employees were identified as stakeholders in developing the system’s requirements. While DOD controllers adopted and began using the original COTS version of STARS, FAA elected to modify the acquisition strategy and suspended the STARS deployment to address FAA controller and technician concerns with the new system. These concerns included, for example, that many features of the old equipment could be operated with knobs, allowing controllers to focus on the screen. By contrast, STARS was menu-driven and required the controllers to make several keystrokes and use a trackball, diverting their attention from the screen. 
The maintenance technicians also identified differences between STARS and its backup system that made it difficult to monitor the system. For example, the visual warning alarms and the color codes identifying problems were not the same for the two systems. According to FAA, the original COTS acquisition strategy that limited the involvement of controllers and maintenance technicians to just prior to deployment caused unplanned work for the agency because it had to revise its strategy for acquiring and approving STARS; this contributed to an increase in the overall cost of STARS of $500 million and a schedule extension of 5 years to deploy the system to its first site. The interaction of these factors also contributed to the agency’s ability to deploy STARS at only 47 of the 172 facilities initially planned. As of February 2005, FAA was developing a long-term acquisition plan to modernize or upgrade the highest-priority Terminal Radar Approach Control facilities that direct aircraft in the airspace that extends from the point where the tower’s control ends to about 50 nautical miles from the airport. The plan consists of alternatives to STARS, including the existing Common Automated Radar Terminal System (CARTS), which STARS was designed to replace. Finally, to help avoid similar problems stemming from insufficient stakeholder involvement during critical phases of a system’s design, development, and implementation, FAA has been more proactive in involving the stakeholders that will operate and maintain system acquisitions. A final example of how these factors can interact is FAA’s acquisition of OASIS, which is designed to replace outdated technology in FAA’s automated flight service stations. The new system is intended to improve the ability of air traffic specialists to process flight plans, deliver weather information, and provide search and rescue services to general aviation pilots.
In August 1997, FAA awarded a contract to replace the Flight Service Automation System and console workstations. However, unplanned work, insufficient involvement of stakeholders, and lower funding than the agency had determined was needed to meet cost, schedule, and performance targets have together contributed to cost growth and schedule extensions. For example, the agency saw the system acquisition schedule slip because of a larger-than-planned development effort. According to the DOT IG, FAA identified a number of significant concerns, including the inadequate weather graphics capabilities for air traffic specialists. In our view, this indicates that stakeholders were not sufficiently involved throughout the system’s design and development phases. As a result, FAA eliminated the option of COTS procurement. In addition, the OASIS program was rebaselined in March 2000, when the system acquisition received only $10 million of the $21.5 million called for in its baseline for that year. This reduction in funding reduced the rate of software development, delayed and reduced the rate of planned hardware and console deployments, and led to the incremental deployment of operational software. This contributed to a delay in the first-site implementation from July 1998 to July 2002. According to FAA officials, because OASIS received less funding than the agency had approved for fiscal years 2004 and 2005, its deployment to automated flight service stations was postponed. As of February 2005, FAA had deployed 19 OASIS units: 16 at automated flight service stations and 3 at other sites. Software upgrades that are under way will be completed by June 2005. FAA plans neither installations nor software upgrades beyond those at the automated sites, because the agency awarded a contract to a private vendor in February 2005 to operate flight service stations.
Until the transition to the new service provider, FAA has directed the program to remain within its current Capital Investment Plan funding levels for fiscal years 2004 through 2006. According to FAA, since it completed its evaluation of OASIS in February 2005, planning for the program’s implementation and baseline remain unchanged. FAA plans to phase out OASIS between March 2006 and March 2007 in accordance with the new service provider’s transition plan. Three of the 16 major ATC system acquisitions we reviewed in detail are currently operating within their original cost, schedule, and performance targets; however, they have experienced challenges, including symptoms of one or more of the four factors cited earlier, such as requirements growth. These system acquisitions include (1) ECG, a communications system gateway that serves as the point of entry and exit for data used by FAA personnel to provide air traffic control at 20 en route facilities; (2) ERAM, a replacement for the primary computer system used to control air traffic; and (3) ATOP, an integrated system for processing flight data for oceanic flights. While ECG has not exceeded its original cost, schedule, and performance targets, it encountered requirements growth when FAA added a new capability to address a security weakness. According to FAA officials, correcting this weakness cost about $25,000, and an additional $480,000 will likely be needed to improve the monitoring capability for this system’s operation. However, these cost increases will not exceed the system’s cost or schedule targets. ERAM and ATOP also have areas that warrant attention. For example, ERAM is a high-risk effort because of its size and the amount of software that needs to be developed--over 1 million lines of code are expected to be written for this effort. In addition, the DOT IG reports that, to date, ERAM has experienced software growth of about 70,000 lines of code.
While the DOT IG considers this amount of software growth to be modest, given FAA’s long-standing difficulties with developing this volume of software for system acquisitions while remaining within cost, schedule, and/or performance targets, sustained management attention is warranted. For ATOP, when FAA tried to accelerate the initial deployment of this system by 14 months, it was unable to do so, because of poorly defined requirements, unrealistic schedule estimates, and inadequate evaluation by the contractor. In addition, according to contract provisions, FAA assumed responsibility in February 2005 for the cost of resolving any additional software problems it identifies. Overall, although these system acquisitions are currently operating within their cost, schedule, and performance targets, the challenges they have experienced thus far indicate that they will require the sustained attention of FAA’s senior managers to help ensure that they stay on track. For the 39 system acquisitions that make up the balance of FAA’s ATC modernization program, only 9 are considered “major” or directly comparable to the 16 major ATC system acquisitions we reviewed in detail. (See table 3.) Of these 9 major systems, 2 have required changes in their cost targets. For example, for an automated weather observation system, the Aviation Surface Weather Observation Network, the cost has increased by 15 percent because of system capacity issues, among other things. For another system that will be used on an interim basis for managing air traffic until the new primary computer system is available, the Host and Oceanic Computer System Replacement, the cost has decreased by 13 percent because the agency determined that parts of the existing system could be sustained through fiscal year 2008, which is within the scope of the program. The remaining 30 systems are not directly comparable, because they do not involve acquiring a new system.
Instead, they are what FAA terms “buy-it-by-the-pound” purchases--systems that are commercially available and ready for FAA to use without modification, such as a landing system purchased to replace one that has reached the end of its useful life. (See app. II for additional information on these 39 systems.) To its credit, FAA has reported that it met its annual acquisition performance goal for fiscal year 2004--to meet 80 percent of designated milestones and maintain 80 percent of critical program costs within 10 percent of the budget as published in its Capital Investment Plan. Specifically, it set annual performance cost goals and schedule milestones for 41 of the 55 system acquisitions under the ATC modernization program. For these 41 system acquisitions, FAA set 51 schedule milestones and met 46 of them--with “meeting the goal” defined as achieving 80 percent of its designated program milestones. It also set and met its annual cost performance goals for each of these 41 system acquisitions. In our opinion, having and meeting such performance goals is commendable, but it is important to note that these goals are updated program milestones and cost targets, not those set at the program’s inception. Consequently, they do not provide a consistent benchmark for assessing progress over time. Moreover, as indicators of annual progress, they cannot be used in isolation to measure progress in meeting cost and schedule targets over the life of an acquisition. Finally, given the problems FAA has had in acquiring major ATC systems for over two decades, it is too soon to tell whether meeting these annual performance goals will ultimately improve the agency’s ability to deliver system acquisitions as promised. FAA has taken a number of positive steps, primarily through the ATO, to address key legacy challenges in acquiring major systems under its ATC modernization program; however, we have identified additional steps that are warranted to reduce risk and strengthen oversight.
Some of the steps FAA has taken directly address the four factors we identified as contributing to cost, schedule, and/or performance problems, while others support more general efforts to improve the modernization program’s management. The steps taken and additional steps needed are discussed below by key areas. To address the concern that some system acquisitions have had difficulty meeting performance targets because they have not received annual funding at the levels called for in key planning documents, the ATO has taken several steps. For example, the ATO has demonstrated a willingness to cut major programs that were not meeting their performance targets even after a significant investment of agency resources. The ATO is currently reviewing all of its capital projects to reassess priorities. Both of these actions should help improve the chances that sufficient funding will be available for priority system acquisitions to conduct the annual activities necessary to keep them on track to meet cost, schedule, and performance targets. Specifically, for fiscal year 2005, the appropriation for FAA’s facilities and equipment budget, which funds the ATC modernization program, was $393 million less than the agency had planned to spend. FAA absorbed the $393 million reduction largely by cutting funding for three of the major system acquisitions we reviewed in detail: a digital e-mail-type capability between controllers and pilots was suspended (CPDLC); the next generation air-to-ground communication system had the funding cut for a major component (NEXCOM); and a precision-landing system augmented by satellites for use primarily by commercial airlines (LAAS) was returned to research and development to focus the remaining funding for the system on resolving a key performance shortfall. FAA also plans to defer funding for CPDLC and LAAS for fiscal year 2006.
FAA decisions to cut or eliminate funding for system acquisitions in its current ATC modernization program may prove to be positive in the long run. For example, although FAA and National Air Traffic Controllers Association officials say that the cuts the agency made to 3 of its 16 major ATC system acquisitions will delay system benefits until the acquisitions are fully developed and deployed, the cuts demonstrate FAA’s willingness to suspend major ATC system acquisitions, despite large resource investments. In addition, by delaying a system acquisition, FAA may later be able to save time and money by leveraging the experiences that others have had with developing and deploying systems that provide similar capabilities (e.g., the controller-pilot e-mail-type capability for which FAA cut funding is now in use in both Canada and Europe). Furthermore, as FAA continues to reassess its funding priorities, it could explore cost-saving options, including taking steps to systematically (1) evaluate the costs and benefits of continuing to fund system acquisitions across the ATC modernization program at current and planned levels to identify potential areas for savings and (2) identify potentially lower-cost alternatives to current system acquisitions, such as lower-cost controller workstations. FAA has also taken a number of steps to address two other factors—reduce the risk of requirements growth and/or the need to undertake unplanned work—and to improve its ability to better assess and manage the risks associated with acquiring major ATC systems that require complex software development. However, additional steps are needed in these areas. Processes for acquiring software and systems: FAA has made progress in improving its process for acquiring software-intensive systems, including establishing a framework for improving its system management processes and performing many of the desired practices for selected FAA projects.
The quality of these systems and software, which are essential to FAA’s ATC modernization program, depends on the maturity of the processes used to acquire, develop, manage, and maintain them. In response to our previous recommendations, FAA developed an FAA-integrated capability maturity model (iCMM). Since FAA implemented the model, a growing number of system acquisitions have adopted it, and its use has paid off in enhanced productivity, higher quality, greater ability to predict schedules and resources, better morale, and improved communication and teamwork. However, the ATO did not mandate the use of the process improvement model for all software-intensive acquisition projects. In response to our recommendation, the ATO informed us of its plans to establish, by June 30, 2005, an overall policy defining the ATO’s expectations for process improvement, and by September 30, 2005, a process improvement plan to address and coordinate improvement activities throughout the organization. Management of information technology investments: In 2004, we reported that FAA had made considerable progress in managing its information technology investments. However, we also found that FAA’s lack of regular review of investments that are more than 2 years into their operations is a weakness in the agency’s ability to oversee more than $1 billion of its information technology investments as a total package of competing investment options and pursue those that best meet the agency’s goals. FAA recently informed us that it has taken a number of steps aimed at achieving a higher maturity level, including establishing service-level mission need statements and service-level reviews, which address operational systems to ensure that they are achieving the expected level of performance. While these steps could resolve some of the deficiencies that we previously reported, we have not yet performed our own evaluation of these steps.
FAA could potentially realize considerable savings or performance improvements if these reviews result in the discontinuation of some investments, since operating systems beyond their second year of service accounted for 37 percent of FAA’s total investment in information technology in fiscal year 2004. Enterprise architecture: FAA has established a project office to develop a NAS enterprise architecture—a blueprint for modernization—designated a chief architect, committed resources to this effort, and issued the latest version of its architecture. However, FAA has not yet taken key steps to improve its architecture development, such as designating a committee or group representing the enterprise to direct, oversee, or approve the architecture; establishing a policy for developing, maintaining, and implementing the architecture; or fully developing architecture products that meet contemporary guidance, describe both the “As Is” and “To Be” environments, and include a sequencing plan for transitioning between the two. To help address concerns that stakeholders have not been sufficiently involved throughout the development of major systems acquisitions, FAA has taken a number of steps. For example, when the ATO was created, it brought together the FAA entities that develop systems and those who will ultimately use them. Specifically, it reorganized FAA’s air traffic services and research and acquisition organizations along functional lines of business to bring stakeholders together and integrate goals. The ATO is also continuing with a phased approach to system acquisitions that it began using under Free Flight Phase 1, through which it has begun to involve stakeholders more actively throughout a system acquisition’s development and deployment. However, as we reported in November 2004, FAA needs to take additional steps to ensure the continued and active involvement of stakeholders in certifying new ATC system acquisitions.
In addition, the union that represents the specialists who install, maintain, troubleshoot, and certify NAS systems recently testified that over the past 2 years, FAA has systematically eliminated the participation of these specialists in all but a few modernization programs. Given the importance of stakeholder involvement in the development and deployment of new ATC systems, their continued involvement in ATC modernization efforts will be important to help avoid the types of problems that led to cost growth and delays for STARS. Reassessment of capital investment to decrease operating costs: Both the FAA Administrator and the ATO’s chief operating officer have committed to basing future funding decisions for system acquisitions on their contribution to reducing the agency’s operating costs while maintaining safety. This is consistent with our 2004 recommendation that FAA consider its total portfolio of investments as a package of competing options. Currently, only 1 of the 55 system acquisitions in FAA’s ATC modernization program—FAA Telecommunications Infrastructure—helps to reduce the agency’s operating costs. Most of FAA’s major system acquisitions are aimed at increasing the capacity of the NAS and delivering benefits to system users. The ATO is in the process of reviewing all of its capital investments, including system acquisitions under the ATC modernization program, to identify areas of cost savings and to focus limited funding on investments that will reduce operating costs. However, because FAA has only recently begun to incorporate this type of analysis of the costs and operational efficiency of system acquisitions into its decision-making and management processes, it is too early to assess the results. Acquisition Management System: The ATO has taken a number of steps to improve its Acquisition Management System (AMS).
For example, it has revised AMS to require that acquisition planning documents be prepared in a format consistent with that prescribed by OMB for use in justifying all major capital investments. In addition, the ATO revised AMS in December 2004, in part to respond to recommendations we made about needed changes in its investment management practices for information technology. However, we have not yet independently assessed the sufficiency of these changes. Moreover, additional changes to AMS are warranted. For example, while AMS provides some discipline for acquiring major ATC systems, it does not use a knowledge-based approach to acquisitions, characteristic of best commercial and DOD practices. A knowledge-based approach includes using established criteria to attain specific knowledge at three critical junctures in the acquisition cycle, which we call knowledge points, and requiring oversight at the corporate executive level for each of these knowledge points. Experience has shown that not attaining the level of knowledge called for at each knowledge point increases the risk of cost growth and schedule delays. We recommended, among other things, that FAA take several actions to more closely align its acquisition management system with commercial best practices. FAA said that our recommendations would be helpful as it continues to refine this system. Cost accounting and cost estimating practices: FAA has improved its financial management by moving forward with the development of a cost accounting system, which it plans to fully deploy by 2006. Ultimately, FAA plans to use this cost information routinely in its decision-making. When implemented, this cost accounting system will address a long-standing GAO concern that FAA has not had the needed cost accounting practices in place to effectively manage software-intensive investments, which characterize many of the agency’s major ATC system acquisitions.
This type of information can be used to improve future estimates of cost for these acquisitions. Organizational culture: FAA has also sought to establish an organizational culture that supports sound acquisitions. We have ongoing work to assess FAA’s efforts concerning cultural change. ATO business practices: To improve its investment management decision- making and oversight of major ATC acquisitions, the ATO has informed us that it has initiated the following steps, which we have reported are important to effective oversight: integrated AMS and OMB’s Capital Planning and Investment Control Process to develop a process for analyzing, tracking, and evaluating the risks and results of all major capital investments made by FAA; conducted Executive Council reviews of project breaches of 5 percent in cost, schedule, and/or performance to better manage cost growth; issued monthly variance reports to upper management to keep them apprised of cost and schedule trends; and increased the use of cost monitoring or earned value management systems to improve oversight of programs. However, much work remains before the ATO will have key business practices in place. Specifically, according to the ATO’s chief operating officer, it will be at least 2 years before the ATO has completed the basic management processes needed to use the new financial management systems it has been putting in place. Despite progress to date, until the agency addresses the residual issues cited above, it will continue to risk the project management problems affecting cost, schedule, and/or performance that have hampered its ability to acquire systems for improving air traffic control. The ATO will be further challenged to modernize the ATC system in the current constrained budget environment and remain within the administration’s future budget targets, which are lower than those of recent years. 
Specifically, for fiscal year 2005, FAA requested $393 million less than it had planned to spend for activities under the facilities and equipment budget account, which funds the ATC modernization program and related modernization activities. In addition, the President’s fiscal year 2006 budget submission calls for an additional cut to this budget account of $77 million from FAA’s planned level, which would bring the fiscal year 2006 funding level to about $470 million below the fiscal year 2004 appropriation. Moreover, FAA officials told us that funding for the facilities and equipment account is likely to hold near fiscal year 2004 levels, or at about $2.5 billion annually, for the next 5 years. In total, FAA plans to spend $4.4 billion during fiscal years 2005 through 2009 on key modernization efforts, despite receiving about $2 billion less than it had planned in appropriations over this 5-year period for its facilities and equipment budget, which funds the ATC modernization program and related modernization activities. To fund its major system acquisitions while remaining within the administration’s budget targets, the ATO has eliminated planned funding to start new projects and substantially reduced planned funding for other areas. These funding decisions are reflected in FAA’s updated Capital Investment Plan. This plan shows substantially reduced funding for two major system acquisitions in fiscal year 2005—CPDLC and LAAS—and defers funding for them in fiscal year 2006. For the remaining 14 major ATC system acquisitions we reviewed in detail, FAA plans to increase funding by $533 million between fiscal year 2005 and fiscal year 2009. In contrast, for the remaining 39 system acquisitions, FAA has reduced funding by $420 million for this period.
The planned increases in funding for these 14 major system acquisitions also come at the expense of other modernization activities outside the ATC modernization program, such as capital expenditures to replace aging ATC facilities that will house the system acquisitions. For example, FAA reports that it needs $2.5 billion (2005 dollars) annually to renew its aging physical infrastructure—assuming a $30 billion value of its assets and a 7- to 12-year useful life. According to the ATO, much of its physical infrastructure, including the buildings and towers that house costly ATC systems, is over 30 years old and needs to be refurbished or replaced. However, FAA plans to reduce funding for facilities by nearly $790 million between fiscal year 2005 and fiscal year 2009—a plan that runs counter to its reported need to refurbish or replace its physical infrastructure. Furthermore, FAA also plans to cut $1.4 billion from its spending plans for fiscal years 2005 through 2009 for, among other things, new system acquisitions in the ATC modernization pipeline that do not yet have agency-approved cost, schedule, and performance targets or baselines (e.g., a new technology that would allow pilots to “see” the location of other aircraft on a cockpit display). Our work has shown that FAA has taken some important steps to prioritize the 55 system acquisitions under its ATC modernization program. These revised priorities are reflected in its most recent plans, which detail the areas where FAA plans to make cuts within its facilities and equipment budget to live within its expected means during fiscal years 2005 through 2009. However, our work has also shown that these plans do not provide detailed information about the trade-offs that underlie decisions to fully fund some systems and to defer, reduce, or eliminate funding for others and how these cuts will affect FAA’s modernization efforts, including what impact they will have on interdependent system acquisitions.
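FAA's $2.5 billion annual renewal figure follows from straight-line replacement arithmetic: a $30 billion asset base spread over a 7- to 12-year useful life. The short sketch below reproduces that calculation from the figures in the text; the helper function is illustrative.

```python
# Straight-line renewal arithmetic behind the figures in the report: a
# $30 billion asset base and a 7- to 12-year useful life imply an annual
# renewal requirement of roughly $2.5-$4.3 billion. FAA's reported
# $2.5 billion corresponds to the 12-year (longest-life) assumption.

ASSET_VALUE_B = 30.0  # asset base in billions of 2005 dollars (from the report)

def annual_renewal(asset_value_b: float, useful_life_years: float) -> float:
    """Annual spending needed to replace assets once per useful life."""
    return asset_value_b / useful_life_years

print(f"12-year life: ${annual_renewal(ASSET_VALUE_B, 12):.1f}B per year")
print(f" 7-year life: ${annual_renewal(ASSET_VALUE_B, 7):.1f}B per year")
```

Against this $2.5 billion-per-year requirement, the planned reduction of nearly $790 million in facilities funding over fiscal years 2005 through 2009 illustrates the tension the report describes.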
To convey information to decision-makers on the impact of reduced funding on modernization, the ATO should detail its rationale and explicitly identify the trade-offs it is making to reach the administration’s budget targets, highlighting those programs slated for increased funding and those slated for reduced funding. Key information includes delayed benefits, the impact of cutting one ATC system acquisition on related or interdependent systems, and increased costs for maintaining legacy systems until new systems are deployed. Overall, the ATO needs to explicitly identify the implications of deferring, reducing, or cutting funding for a particular system or activity on the agency’s ability to modernize both the ATC system and related components of the NAS in the near, mid, and longer term. While funding deferrals, reductions, and cuts to ATC system acquisitions and related activities in FAA’s facilities and equipment budget may be beneficial and necessary in the long run, it is important for senior agency, department, OMB, and congressional decision-makers to have complete information to make informed decisions about the trade-offs that are being made when they consider annual budget submissions. As part of our research, we sought the perspective of an international group of experts, who also suggested that the ATO should provide the administration and Congress with detailed information in its budget submissions about the impact of reduced budgets on both ATC and NAS modernization. These experts were a part of an international panel of aviation experts we convened to address, among other issues, how federal budget constraints have affected ATC modernization and what steps the ATO could take in the short term to address these constraints. 
For example, aviation experts emphasized the need for the ATO—which is now the organizational entity responsible for acquiring ATC systems—to prioritize its capital investments, as well as its investment in operating systems, with affordability in mind. These experts believe that the ATO needs to review all of its spending plans for modernization, determine which programs can realistically be funded, and select programs to cut. Moreover, they indicated that the ATO should have a mechanism to explain to Congress the implications that cutting one system has on other systems. For example, according to one of these experts, the current budget process tears apart a highly layered, interdependent system and does not reveal synergies between projects. Then, when the budget request goes to Congress, he said, “you have no opportunity to try to explain to anybody the interconnections of these programs.” As a result, when the appropriators decide not to fund a project, they may not understand how their decision will affect other projects. The constrained budgetary environment makes it more important than ever for FAA to meet cost, schedule, and performance targets for each of the major ATC systems it continues to fund and to ensure that related activities, such as those to refurbish or replace the buildings that house ATC modernization systems, receive sufficient funding. The need for FAA to accommodate a 25 percent increase in demand for air travel over the next decade underscores the importance of these efforts. FAA has demonstrated a commitment to live within its expected means during fiscal years 2005 through 2009 by setting priorities among its ATC system acquisitions and identifying areas where it plans to cut funding. 
However, without detailed information about the trade-offs that underlie decisions to fully fund some systems and to defer, reduce, or eliminate funding for others, FAA’s plans do not allow senior agency, department, OMB, and congressional decision-makers to assess the implications of approving annual budget submissions for the ATC modernization program and related modernization activities that support more comprehensive efforts to modernize the NAS. To help ensure that key administration and congressional decision-makers have more complete information to assess the potential impact of annual budget submissions on individual ATC system acquisitions, the overall ATC modernization program, and related larger-scale NAS modernization activities funded through the facilities and equipment budget, we recommend that the Secretary of Transportation direct FAA to identify which activities under the ATC modernization program have had funding deferred, reduced, or eliminated and to provide detailed information about the impact of those decisions on FAA’s ability to modernize the ATC system and related components of the NAS in the near, mid, and longer term. This information should be reported to Congress annually. We provided a copy of our draft report to DOT for review and comment. The draft was reviewed by officials throughout DOT and FAA, including the Vice President for Acquisition and Business Service. These officials provided comments through email. They generally agreed with the report and provided technical comments on specific aspects of the report, which we incorporated as appropriate. The FAA officials said they are continuing to consider our recommendation and indicated they would provide a response to it as required by 31 U.S.C. §720. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies of this report to interested congressional committees, the Secretary of Transportation, and the Administrator, FAA. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please call me at (202) 512-2834 if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.

Modular and expandable, ERAM will replace software and hardware in the host computers at FAA’s 20 en route air traffic control centers, which provide separation, routing, and advisory information. ERAM’s flight data processing capabilities will provide flexible routing around restrictions, such as congestion and weather. It will improve surveillance by increasing the number and types of surveillance sources, such as radars. ERAM will provide safety alerts to prevent aircraft collisions and congestion. ERAM has not breached schedule or cost parameters, but it remains a high-risk program because of its size and its amount of software code (more than 1 million lines). The contractor has reported that engineering costs are rising because of lower productivity than originally planned and an increase in the number of lines of software code. According to FAA officials, the contractor’s management reserve can absorb additional software development costs.

We examined (1) FAA’s experience in meeting cost, schedule, and/or performance targets for major system acquisitions under its ATC modernization program, (2) the steps FAA has taken to address long-standing challenges with the ATC modernization program and additional steps that are needed, and (3) the potential effects of the constrained budget environment on FAA’s ability to modernize the ATC system.
To address the first objective, we selected 16 of the 55 system acquisitions in the ATC modernization program to review in detail. We selected these 16 systems in July 2004, when this review was still a part of our broader work on FAA’s efforts to modernize the National Airspace System (NAS). Specifically, we selected the 16 ATC system acquisitions with the largest life-cycle costs that met the following criteria: each system had cost, schedule, and/or performance targets; was discussed in our prior and Department of Transportation Inspector General reports; had not been fully implemented or deployed by 2004; and received funding in 2004. We reviewed this list with FAA officials to ensure that we did not exclude any significant system. (See app. I for additional information on these 16 systems.) FAA does not have a formal definition of major systems under its Acquisition Management System; however, agency officials told us that if a system acquisition has a formally approved baseline, we could consider it “major.” Using this definition, we determined that 25 of the 55 system acquisitions under the ATC modernization program are major. The remaining 30 system acquisitions are generally what FAA refers to as buy-it-by-the-pound systems that are commercially available and ready to use without modification, such as those to replace a system that has reached the end of its useful life. For fiscal year 2005, the 55 systems accounted for about 55 percent of FAA’s facilities and equipment (F&E) budget, or $1.38 billion of the $2.52 billion appropriated for the F&E budget. The 16 major systems accounted for 36 percent ($917.3 million), and the other 39 system acquisitions accounted for about 19 percent ($460 million). The remaining 45 percent of the F&E budget will be spent on facilities, mission support, and personnel-related activities ($1.14 billion).
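The budget shares above follow from simple division; the sketch below recomputes them from the fiscal year 2005 dollar figures given in the text (amounts in millions). The report's rounded percentages agree with these results to within a point.

```python
# Recomputing the F&E budget shares from the dollar figures in the text
# (fiscal year 2005, in millions of dollars). Figures are from the report;
# the helper function is illustrative.

FE_TOTAL = 2520.0   # $2.52 billion appropriated for facilities and equipment
MAJOR_16 = 917.3    # the 16 major system acquisitions reviewed in detail
OTHER_39 = 460.0    # the other 39 system acquisitions

def share(part: float, total: float = FE_TOTAL) -> float:
    """Return part as a percentage of total."""
    return 100.0 * part / total

print(f"16 major systems: {share(MAJOR_16):.0f}%")                # about 36 percent
print(f"39 other systems: {share(OTHER_39):.1f}%")                # about 18-19 percent
print(f"All 55 systems:   {share(MAJOR_16 + OTHER_39):.0f}%")     # about 55 percent
```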
To assess the 16 major system acquisitions, we relied largely on data collected from FAA and contracting officials for two reports we issued in November 2004 on FAA’s acquisition and certification processes. We then updated this information and collected data on the remaining 39 systems under the modernization program, primarily through interviews with FAA officials and analyses of the data they provided, including key acquisition documents. (See app. II for additional information on these 39 system acquisitions.) In addition, we reviewed our past reports and those of the Department of Transportation’s Inspector General. Furthermore, we interviewed FAA officials within the recently created ATO and collected and analyzed the documents they provided. We also interviewed officials with the Aircraft Owners and Pilots Association, Air Transport Association, Department of Defense, National Air Traffic Controllers Association, and RTCA. Furthermore, we convened a panel of international aviation experts to obtain their views on, among other things, the factors that have affected the cost, schedule, and/or performance of FAA’s ATC modernization program. In addition, we assessed the reliability of FAA’s cost and schedule estimates. Through interviews with FAA officials about their data system and quality controls, we determined that the cost and schedule estimates were appropriate for use in our report. Specifically, the estimates are sufficiently authoritative, appropriate, and reliable to allow us to use them without conducting any further assessment. The estimates appear to be based on reasonable assumptions. Our review did not focus on FAA’s efforts to modernize its facilities. To address the second objective, we interviewed FAA officials, primarily within the recently created ATO, and collected and analyzed the documents they provided.
We also interviewed officials with the Aircraft Owners and Pilots Association, Air Transport Association, Department of Defense, National Air Traffic Controllers Association, and RTCA. We also reviewed past GAO reports and those of the Department of Transportation’s Inspector General. In addition, we obtained the views of the international aviation experts who participated in our panel on what steps the ATO could take in the short term to address the factors that have affected the cost, schedule, and/or performance of FAA’s ATC modernization program. To address the third objective, we interviewed officials within FAA’s ATO and obtained and analyzed data on FAA’s capital investments and annual budgets. We also interviewed officials with other organizations cited above. In addition, we obtained the views of the international aviation panelists on how federal budget constraints have affected ATC modernization and what steps the ATO could take in the short term to address these constraints. We conducted our review from November 2004 through May 2005 in accordance with generally accepted government auditing standards. In addition to the person named above, Beverly L. Norwood, Tamera Dorland, Seth Dykes, Elizabeth Eisenstadt, Brandon Haller, Bert Japikse, Maren McAvoy, and Ed Menoche made key contributions to this report.

The Federal Aviation Administration's (FAA) multibillion-dollar effort to modernize the nation's air traffic control (ATC) system has suffered from cost, schedule, and/or performance shortfalls in its system acquisitions for more than two decades and has been on our list of high-risk programs since 1995. FAA's performance-based Air Traffic Organization (ATO) was created in February 2004, in part, to address these legacy challenges.
In this report, GAO examined (1) FAA's experience in meeting cost, schedule, and performance targets for major ATC system acquisitions; (2) steps taken to address legacy problems with the program and additional steps needed; and (3) the potential impact of the constrained federal budget on this program. The ATO met its acquisition goal for fiscal year 2004. However, prior to the establishment of the ATO, FAA had experienced more than two decades of cost, schedule, and/or performance shortfalls in acquiring major systems under its ATC modernization program. For example, 13 of the 16 major system acquisitions that we reviewed in detail have experienced cost, schedule, and/or performance shortfalls when assessed against their original milestones. These 13 system acquisitions experienced total cost growth ranging from $1.1 million to about $1.5 billion; schedule extensions ranging from 1 to 13 years; and performance shortfalls, including safety problems. We found that one or more of four factors—funding, requirements growth and/or unplanned work, stakeholder involvement, and software complexity—have contributed to these legacy challenges. While FAA met its recent acquisition goal, it is important to note that this goal is based on updated program milestones and cost targets for system acquisitions, not those set at their inception. Consequently, these measures do not provide a consistent benchmark for assessing progress over time. Also, as indicators of annual progress, they cannot be used in isolation to measure progress over the life of an acquisition. Although additional steps are warranted, FAA has taken some positive steps to address key legacy challenges it has had with acquiring major systems under the modernization program. For example, the ATO has cut funding for some major systems that were not meeting their goals and is reassessing all capital investments to help ensure that priority systems receive needed funding.
The ATO has improved its management of software-intensive acquisitions and information technology investments and begun to more actively involve stakeholders. As we recommended, the ATO plans to establish an overall policy to apply its process improvement model to all software-intensive acquisitions. However, additional steps could be taken to improve its management of system acquisitions. For example, the ATO could use a knowledge-based approach to managing system acquisitions, characteristic of best commercial practices, to help avoid cost, schedule, and performance problems. The ATO will also be challenged to modernize the ATC system under constrained budget targets, which would provide FAA with about $2 billion less than it planned to spend through 2009. To fund its major system acquisitions and remain within these targets, the ATO has eliminated planned funding to start new projects and substantially reduced planned funding for other areas. However, when forwarding its budget submission for review by senior officials at FAA, DOT, the Office of Management and Budget, and Congress, the ATO provides no detail on the impact of the planned funding reductions on ATC modernization and related activities to modernize the national airspace system (NAS). Our work shows that the ATO should provide these decision-makers with detailed information in its budget submissions about the impact of funding decisions on modernization efforts. Without this type of information, decision-makers lack important details when considering FAA's annual budget submissions. |
The value of DOD inventory requirements needed to support acquisition leadtime grew from about $8 billion in 1979 to about $21 billion in 1989. Recognizing that excessively long acquisition leadtime was a major contributor to the large growth in defense inventories in the 1980s, in May 1990 DOD directed the military services and DLA to take a number of initiatives to reduce acquisition leadtime as a part of a 10-point Inventory Reduction Plan. The recommended initiatives included (1) establishing procurement leadtime reduction goals, (2) shortening production leadtimes by gradually reducing the required delivery dates in contract solicitations, and (3) expanding multiyear contracting and indefinite quantity requirements contracts. Similar policy guidance for reducing acquisition leadtime, except for establishing reduction goals, was included in DOD Material Management Regulation 4140.1-R, dated January 1993. The leadtime reduction initiatives were based on a December 1986 DOD memorandum that included the recommendations of a study performed for DOD by the Logistics Management Institute. The DOD memorandum and the Institute study showed that a 25-percent reduction in leadtime was achievable by adopting methods proven successful in the private sector. In stressing the significance of the initiatives, DOD commented that each day the DOD-wide average leadtime is reduced, future purchases can be reduced by $10 million. Since 1990, DOD has had only limited success in achieving the 25-percent reduction indicated by the study. As shown in table 1, DOD’s average leadtime decreased by about 9 percent. On the basis of DOD’s estimate that $10 million can be saved for each day the average leadtime is reduced, the 56-day leadtime reduction resulted in procurement savings of $560 million. A further leadtime reduction of 91 days will be needed to achieve the 25-percent reduction indicated by the study. Such a reduction would result in additional procurement savings of $910 million. 
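DOD's rule of thumb ties savings directly to days of leadtime reduction, so the dollar figures above follow from simple arithmetic. The sketch below restates that arithmetic; the function name and the use of a flat per-day constant are our own framing of DOD's estimate, not part of the report.

```python
# DOD's estimate: each day of reduction in the DOD-wide average
# acquisition leadtime reduces future purchases by $10 million.
SAVINGS_PER_DAY = 10_000_000  # dollars per day reduced (DOD estimate)

def procurement_savings(days_reduced: int) -> int:
    """Estimated procurement savings for a given leadtime reduction."""
    return days_reduced * SAVINGS_PER_DAY

print(procurement_savings(56))  # 560000000 -> the $560 million already realized
print(procurement_savings(91))  # 910000000 -> the $910 million still available
```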
None of the DOD components have fully implemented DOD’s 1990 leadtime reduction initiatives or its 1993 policy guidance for reducing leadtime, but some have made greater efforts than others. As shown in table 1, the Navy had the greatest success and the Air Force had the least success in reducing acquisition leadtime. From 1990 to 1994, the Navy reduced the overall average acquisition leadtime by 193 days, or about 27 percent. This was accomplished by a number of actions. In accordance with DOD initiatives, the Navy first established a leadtime reduction goal of 25 percent. The Navy then had the inventory control points reduce the leadtimes shown in their databases by 25 percent for each item managed. Finally, the Navy took aggressive action over the next 4 years to shorten required delivery dates in contract solicitations and negotiations. From 1990 to 1994, the Army’s average acquisition leadtime decreased by 21 days, or about 3 percent. Unlike the Navy, the Army did not establish a leadtime reduction goal, nor did it take action to obtain leadtime reductions through contract solicitations and negotiations. Instead, the Army emphasized another of DOD’s initiatives to reduce leadtime by using more flexible procurement methods such as multiyear procurements and indefinite quantity type contracts. According to Army officials, quantities for follow-on years can be easily added to multiyear and indefinite quantity type contracts, which will reduce administrative leadtime to a matter of days instead of months. Also, delays in starting up production are minimized. As an example of the impact of these types of contracts, in 1993 the Army reported that a 3-year vehicle roadwheel purchase by the Tank-Automotive Command reduced acquisition leadtime by 13 months (7 months’ administrative and 6 months’ production) resulting in a savings of about $19 million. 
Similarly, by using an indefinite quantity type contract to purchase sprockets, this command reduced acquisition leadtime by 15 months and saved about $5 million. From 1990 to 1994, the Air Force’s average acquisition leadtime increased by 6 days, or about 1 percent. The Air Force did not implement DOD’s 1990 leadtime reduction initiatives because, based on a comparison with the Navy’s leadtimes, it believed that no action was needed to reduce leadtime. The Air Force delayed implementation of the initiatives pending an evaluation of the Navy’s reported success in achieving a 25-percent decrease in production leadtime without degrading mission support. In its evaluation, the Air Force compared aviation data because of the similarity of parts. On the basis of this evaluation, which was completed in December 1993, the Air Force concluded that its production leadtimes for both repairable and consumable aviation parts were lower than the Navy’s leadtimes, even after the 25-percent reduction. The Air Force, therefore, concluded that no action was needed to reduce production leadtime. We analyzed and compared leadtime data for the Air Force and the Navy as shown on their latest available inventory stratification reports of March 31, 1993, and September 30, 1993, respectively. We found that the Air Force’s production leadtime was lower for consumable parts, but considerably higher for repairable parts. The Air Force’s average production leadtime for repairable parts of 596 days was 176 days, or about 42 percent, higher than the Navy’s leadtime of 420 days. Also, the Air Force’s overall average acquisition leadtime of 818 days for repairable parts was 299 days, or 58 percent, higher than the Navy’s acquisition leadtime of 519 days. From 1990 to 1994, DLA’s average acquisition leadtime decreased by 16 days, or about 5 percent. 
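The Air Force-Navy comparison above reduces to a simple percentage-difference calculation. The short sketch below reproduces the report's figures; the function name is ours, introduced only for illustration.

```python
def percent_higher(value: int, baseline: int) -> int:
    """How much higher value is than baseline, as a whole percent."""
    return round((value - baseline) / baseline * 100)

# Repairable parts, production leadtime: Air Force 596 days vs. Navy 420 days
print(596 - 420, percent_higher(596, 420))  # 176 42 -> 176 days, about 42 percent

# Repairable parts, overall acquisition leadtime: Air Force 818 days vs. Navy 519 days
print(818 - 519, percent_higher(818, 519))  # 299 58 -> 299 days, about 58 percent
```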
DLA did not establish a leadtime reduction goal or attempt to reduce leadtime through contract solicitations and negotiations, as recommended by DOD’s leadtime reduction initiatives. Instead, DLA concentrated on various initiatives to automate the procurement source selection process and on increased use of long-term contracting techniques, such as indefinite quantity type contracts. As a result of a study by its supply centers that identified the potential for shorter leadtimes for high-dollar, high-demand, long-leadtime items, in February 1994 DLA drafted proposed policy guidance for implementing acquisition leadtime reduction initiatives. The proposed policy would require the supply centers to reduce leadtime by 30 percent over a 2-year period from a base of fiscal year 1992 (a reduction of 86 days). To accomplish this reduction, the supply centers would request shorter delivery times in contract solicitations, consider shorter production leadtimes as a factor in competitive bid evaluations, and periodically validate and update production leadtimes through market surveys. As of October 1994, DLA had not implemented the proposed policy, pending its decision to incorporate the policy as a part of a broader business plan it was developing. With the exception of the Navy, the military services and DLA placed no timely emphasis on the effective implementation of DOD’s 1990 leadtime reduction initiatives or its 1993 leadtime reduction policy. Also, DOD was not aware of the general lack of progress made over the past 4 years in reducing leadtime because of an absence of adequate oversight information. The Navy’s success in reducing leadtime by 27 percent in comparison to the limited progress made by the other DOD components shows that DOD can benefit by placing renewed emphasis on effective implementation of the leadtime reduction initiatives. 
One way would be to focus on the Navy’s success in establishing a 25-percent reduction goal and achieving that goal by taking aggressive action to reduce production leadtime in contract solicitations and negotiations. DOD was not aware of the general lack of progress in implementing the initiatives because the annual progress reports required of the military services and DLA did not provide sufficient oversight information to make a meaningful assessment. The reports did not show historical trends in leadtime days before and after the 1990 initiatives. Also, the reports did not provide any meaningful statistics showing the extent of implementation. For example, Army and DLA reports stated that an expansion of multiyear procurements was a primary means of reducing leadtime, but the reports did not provide statistics showing the extent of the expansion. We identified additional opportunities for significant reductions in acquisition leadtime that were overlooked by the DOD initiatives. These opportunities are having inventory management activities (1) periodically validate recorded leadtime data, (2) work closely with major contractors to update old leadtime data for items with long production leadtimes (e.g., over 18 months), and (3) consider potential reductions in leadtime as a factor in deciding whether to purchase spare parts through the prime contractor or directly from the actual manufacturer. We reviewed the accuracy of acquisition leadtimes at the Air Force’s Oklahoma City and San Antonio Air Logistics Centers and the Army’s Aviation and Troop Command and found that the Army’s leadtimes were more accurate. The Army command had a higher accuracy rate than the centers because it had recently worked closely with eight major contractors to update production leadtimes for all items with leadtimes of 18 months or longer. As a result, leadtime changes were made for 1,129 items, or 75 percent of the items reviewed. 
Leadtime decreases accounted for 1,061, or 94 percent of the changes. The command estimated net annual procurement savings of $88 million from using updated leadtimes to compute buy requirements. Although the Army command reduced leadtimes, our review still identified inaccuracies. We tested 26 items and found that the leadtimes for 5 items, or 19 percent, were inaccurate. For example, in July 1994 the Aviation and Troop Command used an administrative leadtime of 9 months in the requirement computation for a rotor blade tip used on the UH-60 Black Hawk helicopter (NSN 1560-01-331-3845). However, procurement history records showed that the administrative leadtime required to process the last two purchases was only 2 months. The item manager told us that the 9-month administrative leadtime was based on the time it took to award a multiyear contract and that the 2 months’ administrative leadtime represented the time it took to place orders against the contract. The 2-month administrative leadtime should have been used in making purchasing decisions because it represents the actual ordering time to acquire additional parts once a multiyear contract is awarded. Command officials agreed that an adjustment should be made in the requirements system for the reduced leadtime. The two Air Force air logistics centers had a higher percentage of leadtime inaccuracies than the Army command. We reviewed the accuracy of acquisition leadtimes for 106 items and found that leadtimes for 53 items, or 50 percent, were inaccurate, resulting in overstated requirements of $7.3 million. These inaccuracies resulted from the failure to periodically validate and update leadtime data in the requirement computation database. The following examples illustrate the leadtime inaccuracies found. In November 1993, the Oklahoma City Air Logistics Center was using a production leadtime of 44 months in the requirement computation for a circuit card used on the B-2 bomber (NSN 5998-01-262-8124FW). 
Procurement history records showed that the 44 months was based on information provided by the contractor in July 1991. We asked center officials to contact the contractor to verify the accuracy of the leadtime. According to the officials, the contractor stated that the 44-month leadtime was outdated and quoted a current leadtime of 25 months. The 19-month reduction in production leadtime caused the value of requirements for this item to be reduced by $69,962. The circuit card is one of six B-2 bomber sample items with old and long leadtimes that the contractor updated. As a result, the Oklahoma City Air Logistics Center reduced leadtimes by an average of 14 months for five items, thus deferring future purchases. In another case, the San Antonio Air Logistics Center was using an acquisition leadtime of 100 months in the requirement computation for a signal generator used on the F-15 aircraft (NSN 6625-01-051-6832DQ). In response to our inquiries, the item manager said a keypunch error had occurred in March 1993 during file maintenance and corrected the acquisition leadtime to 38 months. Correcting the leadtime reduced the value of requirements and budget estimates for this item by $408,857. DOD promotes the purchase of spare parts from actual manufacturers rather than from prime contractors as a way to increase competition. This process is called spare parts breakout and is recognized as an effective means of achieving price reductions. Spare parts breakout has the added benefit of reducing acquisition leadtime by eliminating the processing time that a prime contractor adds for passing an order to the actual manufacturer. As part of the inventory reduction plan initiatives, the Army undertook a major program to break out spare parts from the prime contractor for direct purchase from the actual manufacturer. 
Although the intent of this program was to bring about procurement economies through elimination of middleman profits, the program also contributed to a reduction in procurement leadtime. In the 1993 progress report on inventory reductions, the Army reported that the inventory commands had screened about 12,000 items for breakout in fiscal year 1992 and identified approximately 6,000 items for breakout from the prime contractor. At the Aviation and Troop Command, for example, the purchase of spare parts for the Black Hawk helicopter had been almost completely broken out. The program manager told us that in his experience production leadtime always goes down, often by half, when a spare part is broken out for direct purchase from the actual manufacturer. Additional opportunities to buy directly from manufacturers continue to exist. For example, in response to our inquiries on six sample items managed by the Air Force’s Oklahoma City Air Logistics Center, the prime contractor for the B-2 bomber advised the center that it was not the actual manufacturer for five of the six items. The contractor stated that it added 5 months’ leadtime to process the Air Force’s order to the actual manufacturer. Center officials agreed that the leadtime to acquire these items could be reduced simply by buying from the actual manufacturer instead of from the prime contractor and informed us that the next purchases would be made directly from the manufacturer. We recommend that the Secretary of Defense direct the Secretaries of the Army and the Air Force and the Director of DLA to place renewed emphasis on implementing the DOD leadtime reduction initiatives and to improve oversight information reported to DOD so that the progress being achieved can be measured. In doing so, we recommend that the other military services and DLA follow the Navy’s lead in setting a leadtime reduction goal and achieving this goal through contract solicitations and negotiations. 
We also recommend that the Secretary of Defense direct the Secretaries of the Army, the Navy, and the Air Force and the Director of DLA to have their inventory management activities periodically validate recorded leadtime data to detect and correct errors, work closely with major contractors in updating old leadtime data for items with long production leadtimes (e.g., over 18 months), and consider potential leadtime reductions as a factor in evaluating the feasibility of buying directly from manufacturers instead of from prime contractors. DOD agreed that further action to reduce acquisition leadtimes is required (see app. I). However, DOD views full implementation of the policy guidance on methods of reducing leadtimes included in DOD Material Management Regulation 4140.1-R, dated January 1993, as the most effective means to accomplish this reduction. DOD stated that the military services and DLA would be reminded of the need to fully implement that guidance. In a November 23, 1994, memorandum to the military services and DLA, DOD stated that renewed emphasis on acquisition leadtime reduction was appropriate. The memorandum stated that while the greatest emphasis should be placed on full implementation of the guidance in the DOD regulation, such as gradually reducing required delivery dates in solicitations, consideration should be given to the usefulness of leadtime reduction goals and the importance of periodically validating recorded leadtime data. The memorandum also stated that full implementation of the spare parts breakout program could help reduce leadtime and that contractor furnished data could be a useful source of information in validating leadtime data. DOD asked to be advised of the actions taken to reduce leadtimes by February 15, 1995. 
With regard to our reference to additional savings of $910 million from further leadtime reductions leading to a DOD-wide average reduction of 25 percent, DOD commented that the Secretary of Defense issued a memorandum dated September 14, 1994, that challenges DOD components to reduce business-process cycle times by at least 50 percent by the year 2000. DOD stated further that application of this challenge to acquisition leadtime will include an estimate of possible savings. While DOD’s actions are constructive, we do not believe that relying on the military services and DLA to fully implement the January 1993 policy guidance is the most effective means of achieving a 25-percent reduction in acquisition leadtime. The guidance already has been in effect for almost 2 years, and our report points out that only the Navy has been successful in reducing leadtime by 25 percent since 1990. At that time, DOD directed the military services and DLA to take a number of initiatives to reduce acquisition leadtime that are similar to those in the January 1993 guidance. Also, the guidance does not contain a leadtime reduction goal. Furthermore, we believe that improved oversight is needed if leadtime reductions are to be achieved. DOD’s comments do not address this part of our recommendation and the January 1993 guidance does not require the military services and DLA to provide DOD with oversight information on their progress in reducing leadtimes. Also, DOD no longer requires annual reports from the military services and DLA showing their progress in implementing the 1990 inventory reduction plan. Alternative means are available for providing DOD with oversight information. One way would be to require that the military services and DLA include leadtime data in their annual Defense Business Operations Fund budget submissions to DOD. 
These submissions could show the progress being made in achieving a 25-percent reduction in acquisition leadtime, using fiscal year 1990 as the base year for measuring progress. To evaluate the effectiveness of DOD’s leadtime reduction initiatives, we held discussions and collected information at headquarters of DOD, Army, Navy, Air Force, and DLA, Washington, D.C.; the Oklahoma City Air Logistics Center, Tinker Air Force Base, Oklahoma; the San Antonio Air Logistics Center, Kelly Air Force Base, Texas; and the Army Aviation and Troop Command, St. Louis, Missouri. We reviewed DOD guidance and initiatives for managing acquisition leadtimes and the implementing policies, procedures, and practices of the military services and DLA. To determine if additional leadtime reduction opportunities exist, we obtained computer tapes from the Air Force and the Army that identified acquisition leadtimes for all spare parts managed by the two Air Force air logistics centers and the Army command as of March 31, 1993. From data extracted from the tapes, we selected 106 Air Force items and 26 Army items for review. These items represented a mix of items either planned to be bought in fiscal year 1995 or having long leadtimes of more than 50 months. We compared leadtime estimates used in requirement computations to leadtimes actually experienced and other leadtime information in item manager files. We selected Air Force and Army locations for detailed review because of their large acquisition leadtime requirements. We used the same computer programs, reports, records, and statistics DOD, the military services, and DLA use to manage inventories, make decisions, and determine requirements. We did not independently determine the reliability of all of these sources. However, as stated above, we did assess the accuracy of the leadtime information by comparing data contained in the requirements system with data contained in item manager files. 
We performed our review between October 1993 and August 1994 in accordance with generally accepted government auditing standards. As you know, the head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on our recommendations to the House Committee on Government Operations and the Senate Committee on Governmental Affairs not later than 60 days after the date of this report. A written statement must also be submitted to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of the report. We are sending copies of this report to the Chairmen and Ranking Minority Members, Senate and House Committees on Appropriations and on Armed Services, Senate Committee on Governmental Affairs, and House Committee on Government Operations; the Secretaries of the Army, the Navy, and the Air Force; the Director, DLA; and the Director, Office of Management and Budget. Please contact me at (202) 512-5140 if you have any questions. The major contributors to this report are listed in appendix II. The following are GAO’s comments on the Department of Defense’s (DOD) letter dated November 22, 1994. 1. We revised page 2 in accordance with DOD’s suggestions. 2. We revised page 2 as suggested by DOD. 3. We revised page 4 to address DOD’s concern. 4. We added references to DOD’s policy guidance on reducing leadtime, as set forth in DOD Regulation 4140.1-R, dated January 1993, on page 2. 5. We changed “inventory managers” to “inventory management activities” on pages 5 and 8, as suggested by DOD. Roger Tomlinson, Evaluator-in-Charge; Bonnie Carter, Evaluator; Rebecca Pierce, Evaluator 
| GAO reviewed the Department of Defense's (DOD) efforts to reduce acquisition leadtimes. GAO found that: (1) DOD has made only limited progress in reducing its acquisition leadtimes because the military services and the Defense Logistics Agency (DLA) have unevenly implemented the leadtime reduction initiatives; (2) no DOD agency has fully implemented the 1990 initiatives or the 1993 policy guidance for reducing leadtimes; (3) the Navy has been the most successful and the Air Force the least successful in reducing acquisition leadtimes; (4) additional leadtime reductions can be achieved by prompt implementation of DOD initiatives, periodic validation and updating of leadtime data, and purchasing spare parts directly from the original manufacturer; and (5) DOD could reduce costs by $1 billion over a 4-year period by reducing acquisition leadtimes. |
The United States and many of its trading partners have established laws to remedy the unfair trade practices of other countries and foreign companies that cause injury to domestic industries. U.S. law authorizes the imposition of AD/CV duties to remedy these unfair trade practices, namely dumping (i.e., sales at less than normal value) and foreign government subsidies. The U.S. AD/CV duty system is retrospective, in that importers pay estimated AD/CV duties at the time of importation, but the final amount of duties is not determined until later. By contrast, other major U.S. trading partners have AD/CV duty systems that, although different from one another, are fundamentally prospective in that AD/CV duties assessed at the time a product enters the country are essentially treated as final. Two key U.S. agencies are involved in assessing and collecting AD/CV duties owed. The Department of Commerce (Commerce) is responsible for calculating the appropriate AD/CV duty rate, which it issues in an AD/CV duty order. Commerce typically determines two types of AD/CV duty rates in the course of an initial AD/CV duty investigation on a product: a rate applicable to a product associated with several specific manufacturers and exporters, as well as an “all others” rate for all other manufacturers and exporters of the product who were not individually investigated. After the initial AD/CV duty investigation, Commerce can often conduct two subsequent types of review: administrative and new shipper. Administrative review: One year after the initial rate is established, Commerce can also conduct a review to determine the actual, rather than estimated, level of dumping or subsidization. At the conclusion of the administrative review, the final duty rate, also known as the liquidation rate, is established for the product. 
New shipper review: After an initial rate is established, a new shipper (i.e., a shipper who has not previously exported the product to the United States during the initial period of investigation and is not affiliated with any exporter who exported the subject merchandise) who is subject to the “all others” rate can request that Commerce conduct a review to establish the shipper’s own individual AD/CV duty rate. U.S. Customs and Border Protection (CBP), part of the Department of Homeland Security, is responsible for collecting the AD/CV duties. The initial AD/CV duty order issued by Commerce instructs CBP to collect cash deposits at the time of importation on the products subject to the order. Once Commerce establishes a final duty rate, it communicates the rate to CBP through liquidation instructions, and CBP instructs staff at each port of entry to assess final duties on all relevant products (technically called liquidating). This may result in providing importers— who are responsible for paying all duties, taxes, and fees on products brought into the United States—with a refund or sending an additional bill. CBP is also responsible for setting the formula for establishing the bond amounts that importers must pay. To ensure payment of unforeseen obligations to the government, all importers are required to post a security, usually a general obligation bond, when they import products into the United States. This bond is an insurance policy protecting the U.S. government against revenue loss if an importer defaults on its financial obligations. In general, the importer is required to obtain a bond equal to 10 percent of the amount the importer was assessed in duties, taxes, and fees over the preceding year (or $50,000, whichever is greater). In addition, importers purchasing from the new shipper can pay estimated AD/CV duties by providing a bond in lieu of paying cash to cover the duties—an option known as the new shipper bonding privilege. 
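The general obligation bond requirement described above is simply the greater of two terms. A minimal sketch of that formula follows; the function name and the dollar inputs are illustrative assumptions, not figures from the report.

```python
def required_bond(prior_year_duties_taxes_fees: float) -> float:
    """General bond requirement: 10 percent of the duties, taxes, and
    fees the importer was assessed over the preceding year, or
    $50,000, whichever is greater."""
    return max(0.10 * prior_year_duties_taxes_fees, 50_000.0)

print(required_bond(2_000_000))  # 200000.0 -- the 10-percent term governs
print(required_bond(100_000))    # 50000.0  -- the $50,000 floor governs
```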
We previously reported that over $613 million in AD/CV duties from fiscal years 2001 through 2007 went uncollected, with the uncollected duties highly concentrated among a few industries, products, countries of origin, and importers. Recent CBP data indicate that uncollected duties from fiscal year 2001 to 2010 have grown to over $1 billion and are still highly concentrated. For example, according to CBP, five products from China account for 84 percent of uncollected duties. CBP, Congress, and Commerce have undertaken several initiatives to address the problem of uncollected AD/CV duties. However, these initiatives have not resolved the problems associated with collections. In response to the problems of collecting AD/CV duties, in July 2004, CBP announced a revision to bonds covering certain imports subject to these duties, significantly increasing the value of bonds required of importers. CBP’s goal was to increase protection for securing AD/CV duty revenue for certain imports when the final amount of duties owed exceeds the amount paid at the time of importation, without imposing an “excessive burden” on importers. In February 2005, CBP applied this revision to imports of shrimp from six countries as a test case, which covered a potential increase in the final AD duty rate of up to 85 percent from the initial rate. However, shrimp importers reported that the costs were substantial because they had to pay higher up-front premiums and meet larger collateral requirements to obtain the bonds for the initial duties. These increased up-front costs can deter malfeasance by illegitimate importers by increasing the cost of importing merchandise subject to AD/CV duties, but may also impose costs on legitimate importers that pose little risk of failing to pay retrospective AD/CV duties. The enhanced bonding requirement was subject to domestic and World Trade Organization (WTO) litigation, and CBP decided to terminate the requirement in April 2009. 
Congress partially addressed the risk that CBP would not be able to collect AD/CV duties from new shippers by suspending the new shipper bonding privilege from August 2006 to July 2009. As a result, importers purchasing from new shippers were required to post a cash deposit for estimated AD/CV duties, like all other importers. This requirement eliminated the risk of uncollected AD/CV revenues when the final duty amounts were assessed at the cash deposit rate or less because CBP did not have to issue a bill for the bonded amount. Upon the July 2009 expiration of the requirement, the new shipper bonding privilege was reinstated. The Treasury stated in a 2008 report to Congress that the added risk associated with the bond compared with the cash deposit is low. Commerce has taken steps to improve the transmission of liquidation instructions to CBP, which should improve CBP’s ability to liquidate AD/CV duties in a timely manner. Once Commerce determines the final AD/CV duty, it publishes a notice in the Federal Register, and CBP has 6 months to complete the liquidation process. If CBP fails to complete the liquidation process within 6 months, an entry is “deemed liquidated” at the rate asserted by the importer at the time of entry. Once an entry has been deemed liquidated, CBP cannot attempt to collect any additional duties that might have been owed because of an increase in the AD/CV duty rate from initial to final. Commerce’s liquidation instructions are necessary for CBP to assess and collect the appropriate amount of AD/CV duties in a timely manner. However, we reported in 2008 that there were frequent delays in Commerce’s transmission of liquidation instructions to CBP, and that about 80 percent of the time, Commerce failed to send liquidation instructions within its self-imposed 15-day deadline. In addition, we found that Commerce’s liquidation instructions were sometimes unclear, thereby causing CBP to take extra time to obtain clarification. 
In December 2007, after we made Commerce officials aware of the untimely liquidation instructions, Commerce announced a plan for tracking timeliness, including a quarterly reporting requirement. In April 2011, Commerce officials told us that Commerce had deployed a system for tracking Commerce’s liquidation instructions. In addition, Commerce and CBP established a mechanism for CBP port personnel to submit questions to Commerce regarding liquidation issues. The House and Senate Appropriations Committees directed us to examine whether international agreements to which the United States is a party could be strengthened to improve the collection of AD/CV duties from importers with no attachable assets in the United States. We reported in 2008 that U.S. agency officials believed this would be both difficult and ineffective because of two key obstacles: few countries are willing to enter into negotiations, and U.S. and foreign governments have a practice of not enforcing a revenue claim based upon the revenue laws of another country. In addition, agency officials stated that strengthening international agreements would not substantially improve the collection of AD/CV duties, given the retrospective nature of the AD/CV duty system and the high cost of litigation. There are two key components of the U.S. AD/CV duty system that have not been addressed but that, if addressed, could improve the collection of AD/CV duties: the retrospective nature of the system and the new shipper review process. In addition, Commerce and CBP are contemplating changes to the bonding process. One key component of the U.S. AD/CV duty system is its unique retrospective nature, which creates risks of uncollected duties both because of time lags and rate changes. As discussed earlier, importers pay the estimated amount of AD/CV duties when products enter the United States, but the final amount of duties owed is not determined until later.
In 2008, we found that the average time elapsed between entry of goods and liquidation was more than 3 years. The long time lag between the initial entry of a product and the final assessment of duties heightens the risk that the government will be unable to collect the full amount owed, as importers may disappear, cease business operations, or declare bankruptcy. The final amount owed under the retrospective system of the United States can also be substantially more than the original estimate, putting revenue at risk. We reported that, while final AD duty rates are lower than or the same as the estimated duty rates the vast majority of the time, in some cases final duty rates are significantly higher. On the basis of our analysis of more than 6 years of CBP data covering over 900,000 entries subject to AD duties, we found that duty rates went up 16 percent of the time, went down 24 percent of the time, and remained the same 60 percent of the time. When duty rates increased, the median increase was less than 4 percentage points. However, because of some large increases, the average rate increase was 62 percentage points, with some increases exceeding 150 and even 200 percentage points. The majority of uncollected duty bills over $500,000 are attributed to rate increases greater than 150 percentage points. In our 2008 report, we noted that the advantages and disadvantages of prospective and retrospective AD/CV duty systems differ and depend on specific design features. In prospective AD/CV duty systems, the amount of AD/CV duties paid by the importer at the time of importation is essentially treated as final. This eliminates the risk of being unable to collect AD/CV duties and creates certainty for importers. In a retrospective AD/CV duty system, however, the amount of AD/CV duties owed is not determined until well after the time of importation.
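To illustrate how a retrospective rate increase becomes a supplemental duty bill that the government must later try to collect, the basic arithmetic can be sketched as follows (a minimal illustration with hypothetical figures; the actual process involves entry-by-entry liquidation and billing by CBP):

```python
def supplemental_duty(entered_value: float, initial_rate: float,
                      final_rate: float) -> float:
    """Duty still owed after liquidation under a retrospective system.

    The importer deposits duties at the initial estimated rate; if the
    final rate is higher, the difference must be billed and collected later.
    """
    deposit = entered_value * initial_rate
    final_owed = entered_value * final_rate
    return max(final_owed - deposit, 0.0)  # rate decreases yield refunds, not bills

# Hypothetical entry: $1 million of goods whose AD duty rate rises from
# 5 percent at entry to 65 percent at liquidation (a 60-percentage-point
# increase, near the 62-point average increase cited in our analysis).
print(supplemental_duty(1_000_000, 0.05, 0.65))  # 600000.0
```

The skew noted above (median increase under 4 points, average of 62) means most entries generate no bill at all, while a few generate very large ones.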
This time lag can mean that “bad actors” (importers who intentionally avoid paying required duties) are not identified until they have been importing for a long time. Only after its collection efforts are unsuccessful does the government clearly know that duties owed by such an importer are at serious risk for noncollection. Prospective AD/CV duty systems create a smaller burden for customs officials because the full and final amount of AD/CV duties is assessed at the time of importation, whereas, according to CBP, the retrospective AD/CV duty system of the United States places a unique and significant burden on CBP’s resources. Depending on the design of a prospective AD/CV duty system, the amount of duties assessed is based on dumping or subsidization that occurred in a previous period, and therefore may not equal the amount of actual dumping or subsidization, whereas under a retrospective AD/CV duty system, the amount of duties assessed reflects the actual amount of dumping by the exporter for the period of review. However, in practice, a substantial number of retrospective AD/CV duty bills are not collected. In response to a recommendation in our 2008 report, Commerce reported to Congress in 2010 on the advantages and disadvantages of retrospective and prospective systems. While the Commerce report cites a variety of strengths and weaknesses for both systems, it states that retroactive increases in AD/CV duties are particularly harmful for small businesses such as shrimp and seafood importers. Under a retrospective system, the Commerce report notes, such small U.S. importers potentially face years of uncertainty over duty liability that can hinder their ability to make informed business decisions, plan investments, and create jobs. Another component of the AD/CV duty collection system that has not been resolved is the new shipper review process. This process allows new manufacturers or exporters to petition for their own separate AD/CV duty rate.
However, U.S. law does not specify a minimum amount of exports or number of transactions that a company must make to be eligible for a new shipper review, and according to Commerce officials, they do not have the legislative authority to create any such requirement. As a result, a shipper can be assigned an individual duty rate based on a minimal amount of exports—as little as one shipment, according to Commerce—and can intentionally set a high price for this small amount of initial exports. This creates the possibility that companies may be able to get a low (or 0 percent) initial duty rate, which will subsequently rise when the exporter lowers its price. This creates additional risk by putting the government in the position of having to collect additional duties in the future rather than at the time of importation. Importers that purchased goods from companies undergoing a new shipper review are responsible for approximately 40 percent of uncollected AD/CV duties. Commerce and CBP have proposed additional changes to the bonding process to try to reduce the risk of uncollected AD/CV duties. In April 2011, Commerce proposed a rule that would eliminate the bond that all shippers post when entering products under an AD/CV investigation and require a cash deposit instead. A key reason for the change is that importers bear full responsibility for future duties, according to Commerce. Separately, in May 2011, CBP’s Commissioner of International Trade stated in a Senate hearing that CBP is developing internal guidance to require that importers at risk of evasion take out onetime bonds that cover at least the full value of the shipment (single-transaction bonds). Currently, shippers typically take out a “continuous bond” that covers all import transactions over the course of a year, and is calculated at 10 percent of the prior year’s duties (or $50,000, whichever is greater). 
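The continuous bond formula just described reduces to a one-line calculation (a simplified sketch of the formula as stated in the testimony; actual CBP bond determinations involve additional factors and rounding conventions):

```python
def continuous_bond_amount(prior_year_duties: float) -> float:
    """Continuous bond: 10 percent of the prior year's duties, $50,000 floor."""
    return max(0.10 * prior_year_duties, 50_000.0)

# An importer that paid $200,000 in duties last year hits the $50,000 floor
# (10 percent would be only $20,000)...
print(continuous_bond_amount(200_000))    # 50000.0
# ...while one that paid $2 million posts 10 percent of that amount.
print(continuous_bond_amount(2_000_000))  # 200000.0
```

The second case shows why such a bond can fall far short of a retrospective duty increase: it is sized to the prior year's duties, not to the final liability.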
GAO has not reviewed these proposals or assessed their potential effect on the collection of additional AD/CV duties. The existence of a substantial amount of uncollected AD/CV duties undermines the effectiveness of the U.S. government’s efforts to remedy unfair foreign trade practices for U.S. industry. While Congress and federal agencies have taken actions to address the problem of uncollected duties, these initiatives have met with little success. Some additional options exist that Congress could pursue to further protect government revenue. In particular, Congress could eliminate the retrospective component of the U.S. AD/CV duty system and consider the variety of alternative prospective systems available. Congress could also make adjustments to specific aspects of the U.S. AD/CV duty system without altering its retrospective nature, such as by providing Commerce the discretion to require companies applying for a new shipper review to have a minimum amount or value of imports before establishing an individual AD/CV duty rate. However, any effort to improve the U.S. AD/CV duty system should consider the additional costs placed on legitimate importers while attempting to address the issue of illegitimate importers. We continue to respond to congressional interest in this issue, and have recently begun a review of the evasion of trade duty laws, in response to a request from the Subcommittee on International Trade, Customs, and Global Competitiveness, Senate Committee on Finance. Chairman Landrieu, Ranking Member Coats, this completes my prepared statement. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. For further information about this statement, please contact Loren Yager at (202) 512-4347 or yagerl@gao.gov.
Individuals who made key contributions to this statement include Christine Broderick (Assistant Director), Jason Bair, Ken Bombara, Aniruddha Dasgupta, Grace Lui, Diahanna Post, and Julia Roberts. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Since fiscal year 2001, the federal government has been unable to collect over $1 billion in antidumping (AD) and countervailing (CV) duties imposed to remedy injurious, unfair foreign trade practices. These include AD duties imposed on products exported to the United States at unfairly low prices (i.e., dumped) and CV duties on products exported to the United States that were subsidized by foreign governments. These uncollected duties show that the U.S. government has not fully remedied the unfair trade practices for U.S. industry and has lost out on a substantial amount of duty revenue to the U.S. Treasury. This statement summarizes key findings from prior GAO reports on (1) past initiatives to improve AD/CV duty collection and (2) additional options for improving AD/CV duty collection. U.S. Customs and Border Protection (CBP), Congress, and Commerce have undertaken several initiatives to address the problem of uncollected AD/CV duties, but these initiatives have not resolved the problems associated with collections. Some of these initiatives include the following: (1) Temporary adjustment of standard bond-setting formula. Importers generally provide a general bond to secure the payment of all types of duties, but CBP determined in 2004 that the amount of this bond inadequately protected AD/CV duty revenue.
CBP took steps to address this by revising its standard bond-setting formula and testing it on one product (shrimp) to increase protection for AD/CV duty revenue when the final amount of duties owed exceeds the amount paid at the time of importation. The enhanced bonding requirement was subject to domestic and World Trade Organization litigation, and CBP decided to terminate the requirement in 2009. (2) Temporary suspension of new shipper bonding privilege. Importers purchasing from "new shippers"--shippers who have not previously exported products subject to AD/CV duties--are allowed to provide a bond in lieu of cash payment to cover the initial AD/CV duties assessed, which is known as the new shipper bonding privilege. Congress partially addressed the risk that CBP would not be able to collect initial AD/CV duties from such importers by suspending the new shipper bonding privilege for 3 years and requiring cash deposits for initial AD/CV duties, but the privilege was reinstated in July 2009. The Department of the Treasury stated, however, that the added risk associated with the bond compared with the cash deposit is low. Additional options exist for improving the collection of AD/CV duties. First, the retrospective nature of the U.S. system could be revised. Under the existing U.S. system, importers pay the estimated amount of AD/CV duties when products enter the United States, but the final amount of duties owed is not determined until later, a process that can take more than 3 years on average. This creates a risk that the importer may disappear, cease business operations, or declare bankruptcy before the government can collect the full amount owed. Other major U.S. trading partners have AD/CV duty systems that, while different from one another, treat as final the AD/CV duties assessed at the time a product enters the country. Second, Congress could revise the level of exports required for exporters applying for new shipper status. Under U.S.
law, new shippers to the United States can petition for their own separate AD/CV duty rate. According to Commerce, a shipper can be assigned an individual duty rate based on as little as one shipment, intentionally set at a high price, resulting in a low or 0 percent duty rate. This creates additional risk by putting the government in the position of having to collect additional duties in the future rather than at the time of importation.
The State Department conducts activities designed to promote and protect U.S. interests overseas. To support these activities, State maintains a headquarters with regional and functional bureaus and 252 overseas posts. The State Department received slightly less than $2.7 billion for the administration of foreign affairs in both fiscal years 1995 and 1996, with the bulk of these resources allocated to salaries, infrastructure, and operating expenses. State faces a widening gap between available budget resources and the costs of maintaining existing activities. The State Department is the central agency for coordinating and implementing U.S. foreign policy in support of U.S. interests. State provides leadership to help bring peace and stability to areas such as Bosnia and the Middle East and carries out a variety of activities to promote these interests, including negotiating and overseeing over 14,000 treaties and agreements in force since 1946, including 24 treaties and 338 agreements concluded in 1994; analyzing overseas events to obtain information critical to U.S. interests; preparing over 130 congressionally mandated reports covering such diverse subjects as the abuse of human rights and foreign trade; representing the United States at 700 international conferences annually; providing consular services to Americans overseas and issuing over 5 million passports and 8 million visas annually; and providing administrative support to about 35 federal departments and independent agencies with staff overseas. State’s headquarters in Washington, D.C., includes geographic bureaus that are organized along regional lines (such as the Bureau of East Asian and Pacific Affairs) and bureaus that are organized along functional lines (such as the Bureau of Political-Military Affairs). Figure 1.1 shows the basic organizational structure of the Department. The Under Secretary for Political Affairs oversees State’s six geographic bureaus and the Bureau of International Organizations.
With 1,143 headquarters staff, Political Affairs is the Washington focal point for the development of policy recommendations, for coordination with other departments and agencies, and for transmission of guidance to ambassadors in the field. The geographic and International Organizations bureaus guide, coordinate, and supervise nearly all of the State Department’s activities overseas, including the operation of 163 embassies, 64 consulates general, 13 consulates, 8 missions to international organizations, 2 branch offices, 1 liaison office, and 1 interests section. The functional bureaus generally manage and coordinate specific issues and activities. The Bureau of Economic and Business Affairs, under the jurisdiction of the Under Secretary for Economic, Business, and Agricultural Affairs, is responsible for integrating U.S. economic interests with U.S. foreign policy in such areas as international energy, trade, and international civil aviation. In addition, State economic officers support U.S. foreign policy initiatives, including devising, negotiating, and implementing strategies and agreements to advance U.S. goals such as Russia’s transition to democracy and Bosnia’s reconstruction. The Under Secretary also oversees the Office of the Coordinator for Business Affairs, which was created in 1993 to facilitate U.S. businesses’ access to markets abroad. The Under Secretary for International Security Affairs coordinates national security functions pursuant to over 20 provisions of law. The Under Secretary manages the Bureau of Political-Military Affairs, which provides guidance, coordinates policy formulation, and participates in all major negotiations involving the nonproliferation of weapons of mass destruction and missile technology, nuclear and conventional arms control, defense relations and security assistance, and export controls. The Under Secretary for Global Affairs facilitates the implementation of U.S. 
foreign policy on 11 issues grouped under 4 bureaus: the Bureau of Democracy, Human Rights, and Labor; the Bureau of International Narcotics and Law Enforcement Affairs; the Bureau of Oceans and International Environmental and Scientific Affairs; and the Bureau of Population, Refugees, and Migration. Global Affairs administers over $800 million for narcotics control, refugee, and other programs and prepares reports such as the annual human rights report to the Congress. The Bureau of Consular Affairs administers and enforces immigration and nationality laws in issuing passports, visas, and related services and provides for the protection and welfare of American citizens and interests abroad. The Bureau also manages the Department’s border security program through which State is attempting to improve visa and passport functions. In fiscal year 1995, Consular Affairs issued 5.7 million passports, processed 7.8 million nonimmigrant visas and 604,000 immigrant visas, and provided over 1.1 million special services. The Under Secretary for Management directs all budgetary, support, and personnel policies of the Department. The Under Secretary’s principal function is to reconcile resources, both fiscal and personnel, with policy requirements. This Under Secretary also coordinates the activities of other bureaus, including the bureaus of Consular Affairs, Personnel, Administration, and Finance and Management Policy. The State Department was appropriated $2.695 billion for fiscal year 1995 and $2.671 billion for fiscal year 1996 for the administration of foreign affairs. State spends nearly 92 percent of its funding for fixed costs: personnel, operating supplies, utilities, and essential contracts. The remaining 8 percent is for mission-essential travel, the replacement of worn-out equipment, and infrastructure projects. Table 1.1 shows the allocation of State’s fiscal year 1995 appropriations by major account category. 
Table 1.2 shows how these funds were actually spent, based on our analysis of State Department data. This analysis shows that overseas posts cost about $1.9 billion (the combined funding for overseas foreign policy, overseas consular functions, overseas support provided by both geographic and management bureaus, and security and maintenance of U.S. missions). State support functions cost about $1.8 billion (the combined costs of domestic support, overseas support provided by both geographic and management bureaus, and security and maintenance of U.S. missions). Support functions include, among others, information systems, housing, telecommunication, security, personnel, finance, training, and medical services. Because of inflation and cost increases overseas, we estimate that maintaining current functions and personnel would cost $584 million more in 2000 than in 1995, a 22-percent increase. But it is likely that the Department of State will face budget cutbacks over the next several years. If total discretionary spending is held to the levels envisioned in the congressional budget resolution for fiscal year 1997, spending will fall by almost 6 percent between 1995 and 2002. It will be difficult to exempt State from bearing a share of this planned reduction. Moreover, larger reductions in State’s funding have been proposed by the Office of Management and Budget (OMB) and the Congress. For fiscal year 1996, State’s budget is $2.671 billion for the administration of foreign affairs—$87 million less than requested. In July 1995, OMB proposed reducing funding for the administration of foreign affairs to $2.5 billion by 2000—a 7-percent decline from fiscal year 1995. When inflation is factored in, this represents a $770-million reduction in State’s purchasing power. Under the terms of the 7-year concurrent budget resolution passed by the Congress in June 1995, funding levels for the administration of foreign affairs could be even less than OMB projections.
If the administration of foreign affairs were to take a proportional share of the proposed reductions for international affairs (the 150 budget function), State would receive $1.4 billion less in fiscal year 2000 than the amount required to sustain current activity levels. This represents a greater than 44-percent reduction in real terms from fiscal year 1995 levels. This analysis assumes that the administration of foreign affairs would receive the same percentage reductions as the rest of the 150 function, which may not be the case. To maintain the current—fiscal year 1995—level of services, we estimate that State would need $584 million more in appropriations in fiscal year 2000 than it received in fiscal year 1995—a 22-percent increase. We calculated this amount by applying a 4-percent annual inflation rate to the fiscal year 1995 funding. Figure 1.2 illustrates the widening difference between the funding needed to sustain State’s current level of services and funding provided under the OMB and congressional scenarios. The magnitude of the likely difference between available resources and the funding needed to maintain current operations dictates the need for a fundamental change in the management and structure of the foreign affairs apparatus. In the remainder of this report, we discuss State’s own reform efforts and the importance of developing an effective strategy to guide fundamental change (ch. 2) and present various options to realize cost reductions—streamlining State functions (ch. 3), restructuring State’s overseas presence (ch. 4), and reducing support costs (ch. 5). In response to a request from the Chairman, House Committee on the Budget, we reviewed State’s reform and cost-cutting initiatives and identified options that would enable State to adjust to reduced budgets. We did not make judgments on the relative value of State’s functions and activities or the level of resources that are required. 
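As a check on the figures earlier in this chapter, the inflation estimate can be reproduced in a few lines (a sketch only; the fiscal year 1995 appropriation and the 4-percent annual rate are the values stated in the text):

```python
FY1995_FUNDING = 2.695e9   # fiscal year 1995 appropriation, administration of foreign affairs
INFLATION = 0.04           # annual inflation rate assumed in the estimate
YEARS = 5                  # fiscal year 1995 through fiscal year 2000

fy2000_needed = FY1995_FUNDING * (1 + INFLATION) ** YEARS
additional = fy2000_needed - FY1995_FUNDING

print(f"FY 2000 funding needed: ${fy2000_needed / 1e9:.2f} billion")
print(f"Additional over FY 1995: ${additional / 1e6:.0f} million")  # ~$584 million
print(f"Increase: {additional / FY1995_FUNDING:.0%}")               # ~22 percent
```

Compounding $2.695 billion at 4 percent for 5 years yields roughly $3.28 billion, about $584 million above the fiscal year 1995 level, matching the 22-percent increase cited above.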
In Washington, D.C., we interviewed officials and collected data in the bureaus responsible for political, economic and business, international security, global, consular, and support issues and functions. We also conducted work at agencies that perform related functions, including offices at the Departments of Agriculture, Commerce, Defense, Justice, Labor, Transportation, the Treasury, and other agencies, including the U.S. Agency for International Development (USAID), the U.S. Information Agency, the Arms Control and Disarmament Agency (ACDA), the Peace Corps, the Environmental Protection Agency, the U.S. Export-Import Bank, and the Office of the U.S. Trade Representative (USTR). Although our work describes State’s relationships with these agencies, we did not evaluate State’s effectiveness relative to these agencies. In addition, we met with various individuals in the private sector, including representatives from the U.S. Chamber of Commerce, the Executive Council on Foreign Diplomacy, the Kenan Institute of Private Enterprise, and the National League of Cities. We interviewed management officials and conducted work at offices and private firms undergoing reforms and providing support. We also visited Canada to discuss its overseas diplomatic practices and cost concerns. To understand the work of the State Department overseas, we visited posts in six countries: Belarus, Brazil, Malaysia, The Netherlands, Paraguay, and Senegal. These posts were selected based on their varying missions, sizes, and geographical locations. We interviewed officials and reviewed the work of each section to identify primary activities, focusing on the months of September and October 1995, and discussed management and funding issues with post managers. We obtained formal comments on the draft of this report from the State Department. They are discussed at the end of chapters 2, 3, 4, and 5, and are presented in their entirety in appendix II, along with our evaluation. 
To verify data, we provided a copy of a draft of this report to the Departments of Agriculture, Commerce, Labor, the Treasury, and Transportation; the Environmental Protection Agency; USAID; ACDA; and USTR. We have incorporated their comments and suggested revisions to the text where appropriate. We did our review between June 1995 and June 1996 in accordance with generally accepted government auditing standards. The Department of State has not developed a comprehensive strategy to restructure its operations to adjust to potential funding reductions. State officials have not fully accepted that State may have to substantially reduce its costs. State notes that the President’s request for fiscal year 1997 would require some modest downsizing, but not to the level required by other proposed funding levels. Believing that substantial funding reductions would severely jeopardize its ability to conduct foreign policy and achieve U.S. goals, State has decided not to plan how it would accommodate proposed budget reductions. State’s various reform initiatives over the past 3 years—including the 1994 Strategic Management Initiative and responses to National Performance Review recommendations—have not resulted in overall plans to implement the substantial changes that may be necessary. State’s reform initiatives ultimately centered on short-term, narrowly focused actions that have had little impact on the structure of the foreign affairs apparatus and did not achieve significant cost reductions. Adjusting to proposed budget reductions will require more substantial changes than those that have occurred to date. Our assessments of other government and private sector organizations show that planning is essential for effective downsizing and restructuring. Without such a strategy, State’s ability to effectively carry out its broad mission of promoting and protecting U.S. interests overseas will be jeopardized in light of the severe budget gap it is facing.
According to State, since 1993 it has cut the number of deputy assistant secretaries by 25 percent; reduced the general workforce by 2,200 positions—an 8.5-percent reduction; will have closed 14 small overseas posts during fiscal year 1996; reduced the cost of security programs by 15 percent by applying risk management principles; and reduced travel, contracts, and equipment expenses. These actions accommodated the flat budgets between 1993 and 1995 and responded to a 1993 presidential memorandum to reduce positions by 12 percent between fiscal year 1994 and 1999. These actions, though positive, are not sufficient to enable State to cope with proposed reduced funding. Moreover, State has been unwilling or unable to make major changes suggested under its Strategic Management Initiative and the National Performance Review—changes that would result in a more efficient operation. The Secretary of State established the Strategic Management Initiative in 1994 to set the Department’s future course and eliminate unnecessary or marginal functions and internal duplication. He tasked the Department with formulating and implementing a plan of action to focus the organization around its core mission. The Secretary asked his staff to consider restructuring the Department to carry out its mission with a substantial reduction in resources, and he recognized that, to do so, a comprehensive review of functions was needed. In January 1995, State announced that the initiative would highlight the highest priority functions and products and identify low priority, redundant, duplicative, and less valued work, which would be discontinued. The Secretary told State employees that “we must remake ourselves from the bottom up” and that State faces “a painful process culminating in hard choices.” This initiative represented a major change for State, which in the past tended to implement across-the-board budget reductions when necessary rather than deciding what was more or less important. 
The initiative’s first phase resulted in a series of reports analyzing issues related to workload reduction, constituents’ views on State’s products and services, eminent Americans’ views on State’s future role, reengineering of the Foreign Service transfer process, communications, and the best practices of organizations that have restructured. The teams provided these reports, along with recommendations for change, to the Secretary in January and early February 1995. According to the initiative’s coordinator, the Secretary decided in February 1995 that it was not a good time to propose fundamental changes to State’s mission, organizational structure, and processes and that the initiative should focus on recommendations that would not involve major changes to operations. The Secretary was concerned that proposed legislation to consolidate foreign affairs agencies could severely affect the organization. Under the second phase, the emphasis of the initiative has been to achieve operating efficiencies. The Secretary also made improving the quality of life for State personnel a priority. The Secretary stipulated that any recommendations could not change the basic structure of the Department and must be implemented in a short period of time. In March and April 1995, seven teams developed 300 recommendations based on the assumption that State’s operating budget would not change. Forty-five recommendations were presented to the Secretary; in May 1995, he approved 33. As of February 1996, State had implemented or was implementing 30 of the 33 recommendations, including streamlining processing for arms sales and export licensing requests, simplifying the Department’s travel order and vouchering system, and reducing the number of required reports from overseas posts. 
The 15 recommendations that had not been implemented include closing some diplomatic security resident agent offices, establishing an overseas staffing board, using an interagency advisory board to strengthen overseas staffing controls, and setting priorities for intelligence gathering and reporting. If these recommendations were implemented, costs could be significantly reduced. In April 1995, the initiative’s team leaders initially identified $61 million in potential domestic cost reductions, an additional $35 million in out-year cost reductions, and $17 million in annual cost reductions from the closure of overseas posts. Furthermore, they estimated that by terminating some support functions, $33 million of the Management Bureau’s domestic support funds could be reallocated to other needs. (Actions that State is considering to streamline support functions are discussed in ch. 5.) Although the Department did not attempt to track the actual cost reductions achieved, the initiative coordinator told us in October 1995 that few of the approved recommendations would reduce costs. Furthermore, he indicated that cost reduction had not been a primary goal of the Strategic Management Initiative, and therefore the Department had not established cost reduction targets. State’s reports show that Strategic Management Initiative efforts have resulted in the elimination of as many as 130 positions. (We were unable to verify reductions in positions claimed in State’s reports.) Beyond the cost reductions from eliminating those positions, State estimated that it would reduce costs by about $2.5 million in fiscal year 1996 and about $9.3 million annually thereafter by closing 13 overseas posts. However, a major cost at these posts—salaries for U.S. staff—is not included in the estimates because the positions would be moved to other locations, not eliminated. State has taken the position that elimination of these posts is part of State’s overall plan to reduce staff.
However, the above State Department cost reduction estimates do not reflect cost reductions from staff cuts. The initiative’s primary goals, (1) to highlight priority functions and products and (2) to identify and stop low-priority, redundant work, have for the most part not been realized. To date, no comprehensive review of State’s functions and processes has been conducted. Although State has made some minor reductions in duplication among State offices and in the number of its reports, no functions have been eliminated.

The National Performance Review, established in March 1993 to make government work better and cost less, recommended in September 1993 that State implement 14 action items to reduce costs by (1) cutting operating costs at overseas posts, (2) improving collection of receivables, (3) relocating regional administrative management centers, and (4) expanding management authority of chiefs of mission (generally ambassadors). The National Performance Review estimated that implementing these actions could yield about $68 million in cost reductions between fiscal years 1994 and 1999. As of January 1996, State had partially implemented only 2 of the 14 action items. State deactivated Marine security guard detachments at a net total of 11 overseas posts where classified operations did not warrant 24-hour cleared American presence. This will achieve an estimated annual cost reduction of $1.2 million. State also established an accounting and debt collection procedure for all overseas medical expenses. State estimated that this action resulted in collections of over $1 million in fiscal year 1994. The remaining 12 action items are under study, require legislation for implementation, or are not yet completed.
To cut support costs, State was relocating a regional administrative management center from Mexico City to Charleston, South Carolina, and has discussed relocating functions handled by centers in Bangkok, Thailand, and Paris, France, to reduce annual costs by as much as $3.5 million 5 years after completing the moves. State does not have a plan for implementing these relocations.

The experiences of private and public sector organizations show that planning is essential for effective downsizing and restructuring. For example, our reports on previous reviews show that (1) downsizing needs to be based on a clear determination of an organization’s mission and resource requirements and (2) personnel reductions need to be taken with a view toward retaining a viable workforce. Without identifying core missions, functions, and processes, organizations acknowledged that they had cut needed employees, suffered skill imbalances, and were often forced to rehire or replace employees who had been separated. At one company we reviewed, officials said that early cuts were not sufficiently tied to a larger strategy and only exacerbated the company’s problems because work did not go away simply because staff positions were cut. Eventually, this company analyzed the value of each functional area in the organization.

Although State’s precise funding for the administration of foreign affairs has not been agreed upon, bipartisan efforts to balance the budget and downsize the government make it likely that State will receive fewer resources for the foreseeable future. To successfully cope with the challenge of directing U.S. foreign policy in the post-Cold War era during a period of declining real resources, State must be prepared to clearly articulate its key missions, identify the core functions linked to those missions, prioritize those activities that directly support missions and functions, and link potential resource levels to these activities.
Only then can the merits of various options for achieving significant cost reductions be effectively weighed, the need for administrative or legislative changes necessary to implement those options be identified, and difficult choices be made. We are not taking a position on the level of resources needed by the State Department for the administration of foreign affairs. However, given the likely decline in discretionary spending throughout the federal government and the various proposals for reductions in State’s budget, State needs to plan for how it can become a smaller, more efficient, and less expensive organization. We recommend that the Secretary of State develop a downsizing strategy that (1) identifies critical and noncritical functions and their costs; (2) specifies the changes that would be necessary to adjust to potential funding levels; and (3) identifies what legislative actions or modifications to interagency agreements, if any, would be required to implement the changes. At a minimum, State should have a strategy that is based on out-year funding guidance from OMB. (Such a strategy would allow for consideration of other funding proposals and could be adjusted to accommodate actual appropriation amounts.) In commenting on a draft of this report, State disagreed with our conclusions and recommendation regarding the need to develop a strategy for adjusting to potential budget reductions. State also commented that the report did not fully recognize some of the positive actions State has taken to streamline its operations. For example, State said that its Overseas Staffing Board met in June 1996 to begin implementation of an overseas staffing model. State also indicated that it had progressed in setting priorities for intelligence gathering and reporting by expanding State’s representation on the intelligence community and interagency boards. 
State agreed that strategic planning for downsizing is important and believed that it had planned for what it described as “future reasonable budget cuts.” But State stressed that it does not accept and will not plan for proposed funding reductions that could approach 44 percent. It believes reductions of this magnitude would pose unacceptable risks and cause irreparable damage to America’s national interests. Given the funding situation, we believe that State needs to seriously consider actions for adjusting to the potential funding scenarios. Furthermore, we believe that developing a downsizing strategy would enable State to focus available resources on its most critical functions and activities. Absent a downsizing strategy based on a comprehensive review of its functions and processes, the Department cannot demonstrate clearly how funding reductions will hamper U.S. interests or be in a position to protect critical functions.

Reducing or eliminating State’s role in some foreign policy and consular functions could lead to cost reductions at headquarters and overseas posts, where State has a broad mandate to represent and protect U.S. interests and provide services to a wide range of customers, including the Congress, other U.S. agencies, the private sector, and the American public. Over $500 million of the budget is specifically focused on implementing foreign policy, and about $270 million is devoted to consular functions. Options for State to consider include (1) reassessing the extent of its involvement in functions where State shares substantial, overlapping responsibility with other agencies and (2) cutting back on some specific activities and recouping the costs of some products and services. State needs to face the challenge of identifying its core functions and their costs and making choices about the level of resource investments that are both appropriate and affordable to sustain those functions.
More so than ever, many government issues, policies, and activities have an international dimension, and State has taken a role in most cases. In consultation with other government agencies, the State Department develops, coordinates, and implements U.S. foreign policy and activities. State’s various functional offices and bureaus focus on key foreign policy objectives and programs that over time have evolved to cover a wide range of issues. The Department evolved as a bulwark against the threat of communism, and U.S. foreign policy was directed toward its containment. With the end of the Cold War, State has increased its role in some areas. For instance, with the consent of the Congress, State created a new Under Secretary for Global Affairs to direct efforts toward promoting human rights and workers’ rights, supporting emerging democracies, protecting and improving the global environment, controlling population growth, assisting migration and refugees, and controlling international narcotics and crime. Some of these issues were peripheral during the Cold War. To adjust to reduced budgets, State must scrutinize its functions and reassess its involvement in those functions. State also must consider reducing some of its activities wherever feasible and recouping costs of the wide array of products and services it provides to numerous customers. State’s functional bureaus share responsibility with multiple U.S. agencies on various overlapping policy issues. We identified nearly 30 agencies and offices involved in trade policy and export promotion, about 35 engaged in global programs, and over 20 involved in international security functions. These agencies look to State for overall foreign policy guidance, and State often relies on them for program funds and technical expertise. 
According to agency officials, the value State adds to these functions is its language expertise and negotiating skills as well as its knowledge of foreign governments, access to and contacts with host government officials, and understanding of the foreign political and economic environment. In the sections that follow, we describe selected State functions. The extent of State’s involvement in these functions varies from a supporting role to that of lead player. The involvement of many agencies in similar or related functions does not mean the agencies unnecessarily duplicate activities, but it does suggest the potential for consolidation or transfer of some of State’s duties. However, choosing which areas to cut is complicated because State’s functions and activities can be broadly linked to foreign policy objectives, are specifically required by statute, or are part of interagency processes that require State’s participation. Furthermore, decisions about resource trade-offs are difficult because State’s financial management system does not provide accurate cost data to show the amount State spends by function or activity or the level of resources applied to specific policy objectives.

In the area of international trade, the U.S. government formulates, coordinates, and implements U.S. economic and trade policy. State helps U.S. government efforts to (1) negotiate international treaties and trade agreements; (2) enforce U.S. trade laws; (3) promote U.S. exports; and (4) collect, analyze, and report information on economic issues. While USTR and the Department of Commerce are at the center of federal trade activity, the Departments of State, the Treasury, Agriculture, and Labor are also involved in trade policy. Overall, about 20 other U.S. agencies and offices have varying responsibilities under trade and economic policy. USTR shepherds the formulation of U.S.
trade policy through an interagency process, and the Secretary of Commerce chairs the Trade Promotion Coordinating Committee, an interagency group that is required by statute to develop a governmentwide strategy for rationalizing the federal government’s export programs. In some cases, agencies’ responsibilities overlap in the area of international trade and economic policy functions. For example, although USTR plays a lead role in developing and coordinating international trade policy, investigating some alleged unfair trade practices, and enforcing trade agreements, it relies on an interagency trade policy group to assist with these responsibilities. State participates actively in this group, sometimes as a lead player, and USTR relies on State to lead some negotiations and to execute policy. State economic officers at overseas posts obtain information from and convey U.S. positions to foreign governments. In fiscal year 1995, USTR had a staff of 163 employees and a budget of about $21.4 million. The Department of Commerce’s U.S. and Foreign Commercial Service and the Department of Agriculture’s Foreign Agricultural Service promote exports. The Commercial Service has 824 staff, including 213 foreign commercial officers at 133 overseas offices in 69 countries, and a fiscal year 1995 budget of $96.1 million. With these resources, it helps individual U.S. companies take advantage of specific export opportunities by providing foreign market research, trade finance-related information, and trade facilitation services. The Foreign Agricultural Service promotes exports of U.S. food and agricultural products and administers programs to enhance the competitiveness of U.S. agricultural exporters. As of September 1995, the Foreign Agricultural Service had 1,097 staff, including 265 at 75 overseas locations, and a fiscal year 1995 budget authority of $118 million. The role of State economic officers is to (1) persuade foreign governments to open markets for U.S. 
companies by seeking lower tariffs and eliminating nontariff barriers; (2) seek to improve protection for intellectual property rights; (3) monitor implementation of trade agreements; and (4) participate in negotiations of economic agreements—a role it shares with USTR and the Departments of Commerce, the Treasury, and Agriculture. In over 100 smaller countries with limited markets, where the U.S. and Foreign Commercial Service and the Foreign Agricultural Service are not represented, State economic officers provide commercial assistance to U.S. businesses and perform their traditional economic duties. They identify export opportunities, provide businesses with contacts and advice on host country business practices and economic conditions, and sponsor trade events. U.S. state governments and many city governments also promote trade abroad. Some have offices overseas, and the U.S. Chamber of Commerce has over 70 affiliated chambers in 65 countries. While these entities supplement the U.S. government’s export promotion efforts, they are not generally viewed as a substitute for continued U.S. government involvement in this area. Officials we interviewed emphasized that nonfederal entities cannot officially represent the U.S. government, nor can they provide market intelligence worldwide because collecting such information would be cost prohibitive. Thus, the State Department plays a critical role in comprehensively representing U.S. interests around the world and can, if necessary, challenge foreign governments’ unfair or unethical practices to level the playing field for U.S. businesses.

On transportation, international telecommunications, and international energy issues, State officials cite statutes and executive orders as the basis for State’s involvement. Other agencies generally provide the technical expertise, while State handles key negotiations and integrates agencies’ actions with overall U.S. foreign policy.
For example, State and the Department of Transportation are engaged in activities important to U.S. economic policy, including those of the Office of International Aviation within the Office of the Secretary and the U.S. Maritime Administration. In fiscal year 1995, the Office of International Aviation had about 40 positions and a budget of $3.3 million, while the Maritime Administration devoted about 7 positions and $612,100 of its budget to international activities. On international aviation policy, Transportation and State share responsibilities for formulating, coordinating, and executing U.S. international aviation policy. These responsibilities include negotiating and overseeing 107 bilateral air transport agreements that establish air service rights. In addition, some countries’ carriers serve the United States on the basis of comity and reciprocity without any written agreement. Transportation provides technical expertise and does the substantive work in negotiating agreements, and State chairs aviation negotiations. On international shipping policy, however, the U.S. Maritime Administration takes the lead and chairs U.S. delegations, negotiating with the five countries that have bilateral maritime agreements with the United States. State is an active participant in these negotiations and is consulted on matters affecting U.S. foreign relations and economic interests. To carry out its transportation responsibilities, the Bureau of Economic and Business Affairs has 7 officers involved in negotiating aviation agreements and 14 officers working on transportation policy, including maritime negotiations.

In the area of international security affairs, State focuses on arms control, the nonproliferation of weapons of mass destruction, export controls, and regional security. In some cases, State shares its expertise on certain issues, while other agencies complement State’s contribution. In other cases, the uniqueness of each agency’s contribution is unclear.
For example, one area of overlap is in the review of export license applications. State’s Office of Defense Trade Controls, under the Political-Military Bureau, works in partnership with the Defense Department on license applications for arms exports. The Office also confers with the Commerce Department on license applications for exports of sensitive dual-use items and with the Department of Energy for exports of nuclear-related material. Both State and ACDA are involved in dual-use and arms export issues. In some instances, multiple agencies review the same applications to provide their perspectives and expertise. In the area of arms control, studies issued by the Office of Inspector General for ACDA, the Office of Inspector General for State, and the National Security Council suggest that duplication between State and other agencies could be eliminated. In August 1995, the Office of Inspector General for ACDA (which also serves as the Office of Inspector General for the Department of State) reported that duplication between ACDA and State’s Political-Military Bureau “promotes inefficient use of resources by both organizations and accentuates turf consciousness, dissipating energies and damaging morale.” Without specifying where duplication exists, the report recommended that ACDA, with State, reassess the division of labor and make more extensive use of teams to accomplish tasks of mutual interest. A State official acknowledged that the Political-Military Bureau’s Office of Strategic Policy and Negotiations and ACDA probably have some duplication or overlap in functions on which ACDA is the lead agency. He said, however, that duplication of this type does not mean that both organizations are doing the same work. Rather, State’s office represents the Department’s interests and articulates State’s positions on delegations or interagency groups. 
Furthermore, the official said such duplication was necessary because each agency has different goals, perspectives, and agendas. The official added that without State’s involvement, needed political and diplomatic input would be lost, and overall policy and diplomacy efforts could become imbalanced. State noted that it is seeking to identify and reduce unnecessary duplication between State and ACDA in order to reduce costs while maintaining a productive overlap.

Although State represents U.S. interests and formulates policy on global issues, USAID plays a key role in implementing programs concerning democratization ($432 million), population ($568 million), and the environment ($799 million), spending a total of about $1.8 billion in fiscal year 1995. USAID works with State to develop conference agendas and provides technical expertise, but State leads delegations to international conferences and negotiates treaties on these issues. On environmental issues, State often relies on the Environmental Protection Agency for policy and technical expertise. However, State plays a key role in representing U.S. interests in international organizations’ activities relating to environment, science and technology, and health issues. In addition, State clears bilateral agreements negotiated by the Environmental Protection Agency. In fiscal year 1995, the Agency devoted about 160 work years, including 15 staff in overseas assignments, and a budget of $44 million to its international activities. These activities included protecting U.S. citizens and natural resources from transboundary and global environmental threats and leading U.S. government efforts to implement the Western Hemisphere Summit Program on Partnership for Pollution Prevention. The Environmental Protection Agency also works with other agencies and the private sector to match pressing environmental problems overseas with U.S. suppliers of environmental technologies.
Labor and workers’ rights issues are addressed within State’s bureaus of Democracy, Human Rights, and Labor; Economic and Business Affairs; Population, Refugees, and Migration; and International Organizations. In addition, State’s Office of International Labor Affairs, which has a headquarters budget of $381,000, maintains about 45 attachés overseas to gather detailed information on workers’ rights outside the United States and prepare congressionally required reports on workers’ rights. Moreover, the Department of Labor, which is the lead agency for formulating international economic, trade, and immigration policies affecting U.S. workers, has a fiscal year 1995 budget of $12.2 million and about 90 staff to deal with its international responsibilities. It participates in interagency committees and international conferences and meetings and serves as the lead U.S. representative in multilateral forums on labor. By eliminating the positions of 6 headquarters staff and the 45 labor attachés overseas, State could reduce costs by about $7.4 million annually. According to several officials at overseas posts, labor issues could be adequately covered by political and/or economic officers. In addition, several State bureaus monitor labor issues. State has proposed abolishing or lowering the rank of some labor attaché positions in the past but has encountered resistance from the Department of Labor and organized labor. Faced with a reduced budget environment, State must make choices about which areas to cut while considering the interrelationships among State’s bureaus and offices and between State and other agencies. State may find it difficult to decide where it can transfer, eliminate, or deemphasize its role, yet maintain priority functions and activities. Nevertheless, in our opinion, State can cut some of its expenses by eliminating or reducing the Department’s involvement in some areas. 
Furthermore, State can also consider alternative ways to recover the costs of the wide array of the products and services it provides to its customers. Reduced budgets will likely compel State to reassess its workload requirements to match its resources. One key area is that of reporting. State is a principal provider of information used by the U.S. government in foreign policy formulation. It reports on key developments, including analyses of the politics, economic trends, and social forces at work in foreign countries, to some 60 federal agencies dealing with national security, intelligence, economic and commercial matters, science and technology, and other issues. While some of the reports could be eliminated or curtailed, it is not clear which are the best candidates because their cost and relative value to the users are unknown. In fiscal year 1996, State is required to produce over 130 congressionally mandated reports. These reports require input from numerous posts worldwide and the use of considerable resources at headquarters and at overseas posts. Streamlining some of these reporting requirements not only could significantly reduce resource requirements at State but also could reduce demands on other agencies that must review some of these reports. The Bureau of Economic and Business Affairs, which recently eliminated over 40 reports affecting up to 110 posts, believes that the requirement to produce country reports on economic policy and trade practices should be eliminated. These reports, required by the Omnibus Trade and Competitiveness Act of 1988, consume the equivalent of 5 staff years at headquarters and 100 posts at an annual cost of at least $500,000. Several officials suggested that State seek legislative relief from this requirement, since the reports’ information is available through other sources. 
Other reporting requirements that consume substantial resources are the annual country reports on human rights practices and the annual report on science, technology, and American diplomacy. Geographic bureau and post officials reported that they spent large blocks of time producing these reports. For the human rights reports, the time required to prepare an annual report varies from country to country. One overseas official, who said he spends 40 to 50 percent of his time covering human rights issues, estimated that drafting a country summary for the human rights report takes at least 4 weeks. The concern here is that State expends resources on these reports even in countries where human rights issues are not significant. In addition to the 194 overseas posts that contribute to the report, State has indicated that one employee spends roughly 6 months working on the reports, and seven work on the reports full time for 4 months. About 12 more employees review reports within their regional areas of responsibility in the course of their regular workday. According to one State official, if faced with substantial budget reductions, State may have to limit reporting on those countries where human rights abuses are least prevalent. Lessening overlap within State’s offices could reduce costs. For most functions and activities, several offices and bureaus within State headquarters and overseas posts are involved. In February 1995, State completed a preliminary functional study of domestic positions, focusing on potential areas of overlap and duplication. Although the study did not offer definitive answers or management options, it provided some clear examples of potential duplication of functions for further research. 
For example, 24 political-military positions are in bureaus other than the Bureau of Political-Military Affairs, 59 economist positions are in bureaus other than the Bureau of Economic and Business Affairs, and 9 science positions are outside the Bureau of Oceans and International Environmental and Scientific Affairs. Moreover, the study indicated that there is potential overlap between the Bureau of Intelligence and Research and regional and functional bureaus throughout the Department. Furthermore, geographic bureaus are organized to overlap with many foreign policy and support functions. In a way, they operate as six micro-State departments, basically administering U.S. foreign policy in different regions of the world. Within each of the six bureaus is an Office of the Executive Director with financial management, personnel, and other support positions; a political and economic section or a combined political-economic section with regional responsibilities; and country desks that serve as the liaison between State headquarters offices and overseas posts. State could recover some costs by charging for selected services and products. Charging for such services would force the customer to reassess the relative value of the service. State currently makes only limited efforts to routinely compile data on the cost of its reports and services. Agencies we contacted generally valued the reports and services that State provided, but since costs are unknown, organizations have difficulty making cost-benefit decisions. State has historically charged for consular services but not for its factual and analytical reports, business assistance services, and assistance to overseas visitors. For example, the economic/ commercial section of Embassy Asuncion, Paraguay, handled about 30 interagency requests between September and October of 1995, including requests from USTR; the Federal Aviation Administration; the U.S. 
Trade and Development Agency; and the Departments of Commerce, the Treasury, and Energy—none of which has representatives in Paraguay. The Consulate General in Sao Paulo, Brazil, supported at least 14 high-level visits in 1995, including one congressional delegation; the Secretary of Commerce and the Secretary of the Treasury; the governors of Wisconsin and Nebraska; the mayor of Orlando, Florida; the Director-General of the U.S. and Foreign Commercial Service; and U.S. Export-Import Bank officials. State is not compensated for staff time and some other expenses associated with these services.

Unlike other areas in the Department, State’s goal in Consular Affairs is to recover the total cost of selected services. For example, based on a 1991 cost study that showed the average cost of a passport was about $60, State now charges $65 to issue a passport. In fiscal year 1995, State collected $552 million in consular fees—funds that reverted to the Treasury. Since 1994, State has been authorized to retain the surcharge collected for machine-readable visas, expedited passport fees, and certain processing fees. For fiscal years 1994 and 1995, State was allowed to keep up to $107.5 million—funds that the Bureau of Consular Affairs indicated it is using to finance improvements to some of its passport and visa programs. Consular Affairs is considering raising fees and charging for services that are currently free and may seek legislative authority to retain more fees to finance its activities. It may also seek legislative authority to make permanent the Department’s retention of machine-readable visa fees. In March 1996, State issued instructions to overseas posts to collect fees for some commercial services. Under a newly authorized program, State will collect fees for commercial services at posts where the Commercial Service is not represented.
The potential annual revenues, estimated to be as much as $3,000 per post, will be reinvested to support business assistance activities. In September 1995, State’s Inspector General recommended that the Bureau of Political-Military Affairs consider expanding manufacturer and exporter registration and licensing fees to improve State’s arms export and compliance activities. Under its annual appropriation act, State may retain a stated amount annually, funds that the Bureau has used for information systems to modernize the Bureau’s operations. In fiscal year 1995, State retained $700,000. In comparison, officials from the Nuclear Regulatory Commission, which also charges arms manufacturers and exporters registration and licensing fees, said that the Commission’s licensing process is self-financing from fees that range from $100 to $7,000 per license. While State currently has authority to collect and retain fees for a number of the products and services it provides, it should determine whether there are other areas in which it could benefit from additional cost-recoupment authority.

The cost-cutting decisions that State may make could adversely affect other agencies. For example, because USTR has only one overseas office in Geneva, Switzerland, it relies on State for field support in developing and enforcing trade agreements. In a similar vein, the U.S. Export-Import Bank, which has no offices overseas, also relies on State for support in analyzing the credit risks of countries and negotiating and enforcing the terms of foreign loans. Moreover, State and agency officials emphasized that if other agencies, such as the Department of the Treasury and the Federal Aviation Administration, downsize their presence overseas due to reduced budgets governmentwide, they are likely to rely on State even more for field support of international activities. Other agencies also rely on State for support from the international conferences and contingencies account, which funds the U.S.
government’s participation in about 700 international conferences. Although the State Department coordinates overall U.S. participation, other agencies, such as the Departments of Labor and Transportation and USTR, provide technical expertise and lead delegations where appropriate. When the Congress reduced the account from $6 million in fiscal year 1995 to $3 million in fiscal year 1996 ($2 million for conferences), the State Department told other agencies that it could no longer fund non-State participants. According to agency officials, State’s decision forced agencies to limit or cancel their participation in some conferences, even though in some instances the other agencies, not State, lead the U.S. delegation. To help them plan for anticipated cuts in State support and services, agency officials urged the Department to coordinate its plans with other agencies in advance to allow those affected by State’s decisions sufficient time to make alternate arrangements. Another example of the impact of State cutbacks on other agencies is in the assignment of detailees. State assigns detailees to agencies such as USTR, the National Security Council, and the Department of Defense as well as to congressional offices. Similarly, other agencies, like USAID, assign detailees to the State Department. As State and other agencies face reduced budgets, they may have to consider the costs and benefits of detailing staff to outside assignments. Although State has no immediate plans to eliminate or seek reimbursements for all detailees, the Department recently formed a committee to review the policy. State currently has 136 employees detailed to other agencies. USTR has about 40 detailees from other agencies, 10 of whom are from the State Department, to augment its 163-member staff. Officials there expressed concern that State would first try to reduce costs by eliminating the detailees it provides cost-free. 
USTR officials told us that, considering their limited budget and staff, such an action would adversely affect their operations. Given the potential that State’s budget will decline, the Department must scrutinize its diverse functions to determine which are critical, decide on appropriate levels of resource investments, and identify areas that could be streamlined. Offices within State share responsibility with multiple U.S. agencies for various overlapping policy issues, which may suggest the potential for consolidating or transferring some of State’s duties. Other agencies may be able to assume greater responsibility in some areas. On the other hand, budget considerations may increase other agencies’ reliance on State for support of their international activities. Therefore, in making these streamlining and management decisions, State will need to consider how cost-cutting decisions within the Department may adversely affect other agencies. State also must reassess its involvement in certain functions and activities. This may include seeking legislative relief from certain congressionally mandated reports or authorization to downgrade the level of certain services. State also needs to consider recouping the costs of products and services it provides to numerous customers. This will require State to maintain cost information (which it does not currently have) in order to weigh the costs and benefits of its products and services and prioritize requirements. More importantly, the availability of cost information could help State identify which functions and activities are most essential and which areas can be eliminated, reduced, or deemphasized should reduced budgets compel such action. In commenting on a draft of this report, State said that this chapter accurately describes State’s foreign affairs activities and the large number of other organizations involved. State made a number of technical suggestions, some of which we incorporated in the report. 
Regarding our discussion of the human rights reports, State indicated that the Department had taken steps to streamline the report preparation process (such as eliminating unnecessary redrafting and providing updates only, rather than full reports). The Department of Labor expressed its concern regarding streamlining these reports, noting that continuity in coverage is important to the analysis of human rights issues. State also said that it had taken steps to streamline its labor function, which State believes remains important to U.S. foreign policy interests. The Department of Labor also objected to eliminating labor attaché positions and asserted that neither political nor economic officers can effectively perform the labor function. We did not recommend that specific functions or activities be curtailed or eliminated. Rather, we identified a number of options that, if implemented, would help State adjust to potential budget reductions. In a period of limited resources, State may have to scrutinize its functions and reassess its involvement in all areas, including human rights reporting and labor functions. Overseas posts consume almost 70 percent of State’s budget. Thus, a fundamental rethinking and restructuring of the U.S. overseas presence offers the greatest potential to achieve substantial budget reductions. State officials believe that certain proposed funding levels could force the Department to close 50 to 100 of its 252 overseas posts. The post closures and other actions that would result from such a restructuring would mean a reversal of State’s long-standing “universality” policy to maintain a presence in nearly every country and could have other consequences as well. State has proposed the closure of some posts and made efforts to reduce post costs but has made little headway because of internal and external resistance. 
Establishing an external commission to review proposed post closures and vesting greater authority in the chiefs of mission to achieve cost reductions are two approaches for addressing the problems. The State Department’s overseas posts cost about $1.9 billion annually. This amount includes the personnel costs for State’s overseas U.S. direct-hire and foreign service national (FSN) employees; operating expenses; and the costs for equipment, security, and building acquisition and maintenance. State employs about 7,000 U.S. direct-hire workers and about 9,300 FSNs at its posts. It maintains embassies in most countries’ capitals, consulates in some commercial centers outside capital cities, and missions to international organizations in some countries. The agendas, sizes, and costs of overseas posts vary greatly. Some posts are small and provide basic U.S. representation in a country. Other posts’ operations are comprehensive, with more staff and larger budgets to support a large number of government agencies. The State Department categorizes posts as representational, focused, small, medium, large, and comprehensive. A representational post serves as a diplomatic and consular presence in a host country’s capital or in a major city of the country. Posts in the Central African Republic, Grenada, and Western Samoa are considered representational, and their annual costs range from about $240,000 to about $2.5 million. A focused post not only serves as a diplomatic or consular presence but also functions in one or more specific areas, such as foreign assistance or narcotics control. The posts in Burkina Faso, Mongolia, and Vatican City are considered focused, and their annual costs range from about $1.6 million to about $3.1 million. The responsibilities of posts increase with their designation as small, medium, large, and comprehensive. Responsibilities range from addressing U.S. 
government policy and support requirements with a small staff at a small post to addressing a full range of intensive bilateral and multilateral issues as well as important and long-standing U.S. domestic issues with a large, diverse staff at a comprehensive post. Costs for medium, large, and comprehensive posts are substantial. For example, it costs $54.2 million annually for a comprehensive embassy and five consulates in Japan. Large and comprehensive posts—those with the greatest responsibilities—understandably absorb a disproportionate share of the total costs of U.S. overseas posts. As shown in table 4.1, 10 of State’s more expensive missions accounted for over $396 million, or over 21 percent of costs in fiscal year 1995. In contrast, 10 of State’s least expensive missions accounted for nearly $14 million, or less than 1 percent of total costs. The overseas posts’ primary mission is to support U.S. foreign policies by promoting political interests; supporting U.S. economic and trade interests; participating in efforts affecting global issues such as the environment, counternarcotics, and labor; issuing visas; and assisting American citizens. Posts also provide a wide range of services, including communications, office and residential building operations, health, equipment installation and maintenance, personnel, budget and fiscal, travel, motor pool, procurement, shipment and customs support, and security in support of mission operations and staff. Figure 4.1 shows the work assignment distribution of State’s direct-hire employees posted overseas. As shown in the figure, support functions consume a significant amount of post resources. In addition to U.S. direct-hire employees working in support positions, State also employs over 9,300 FSNs, most of whom work in support areas. State supports not only its own activities but also those of other agencies; in fact, State supports more of other agencies’ U.S. 
direct-hire employees—whose number has increased steadily—than its own. Individual posts provide a good illustration of resource allocations and post functions. During our review, we visited posts in six countries that vary in staff size, workload, and budget (see table 4.2). The posts in Belarus and Paraguay are considered small posts; those in Malaysia, The Netherlands, and Senegal are considered medium to large; and the embassy in Brazil is considered large. Appendix I includes information on the State Department’s overseas political, economic, commercial, global, and consular activities, focusing on activities at these posts during September and October 1995. These activities may or may not be representative of the posts’ missions as defined by State, but they give insight into the variety of issues the posts must deal with. An important function of overseas posts has traditionally been to maintain contact with foreign governments on political and security issues. Together with the ambassador and the deputy chief of mission, a post’s political section attends to the day-to-day political relations with the host government and attempts to build support for U.S. government policies. It also informs the U.S. government of host country policies and actions that affect U.S. interests. At the posts we visited, the political sections spent most of their time analyzing and reporting on the political situation there, representing the U.S. government and developing contacts within the host country, and supporting official visitors to the country. State posts also support U.S. economic interests overseas, particularly in the areas of business assistance and trade advocacy. At the posts we visited, ambassadors were spending up to one-third of their time on economic and trade advocacy activities, often personally weighing in on behalf of U.S. businesses where appropriate. 
Posts also support issues of a global nature, such as the environment, counternarcotics, and labor as well as specific issues like international war crimes tribunals. The consular section of an overseas post provides passport and citizenship services; immigrant and nonimmigrant visa services; and services to U.S. citizens overseas, for example, helping Americans in trouble abroad. The sections we visited do a high volume of work with limited staff, collecting fees for many of their services. Although political, economic, and consular functions are the primary missions of the overseas posts, support structures consume a large share of State’s allocated resources. At the posts we visited, 28 to 67 percent of State’s staff were devoted to support duties. According to State officials, the number of personnel needed for support is influenced by several factors, including language constraints, the quality of the local infrastructure, and the availability of skilled labor locally. Table 4.3 shows the support staffing levels in the six countries we visited. The contrast between the number of support staff and staff working on the primary missions is striking at some of the posts. For example, of the 843 employees at the posts in Brazil, 551 work for State. Of State’s 551 employees, 154 work in substantive areas and 397 work in support areas. State has closed and reduced the size of posts in recent years because of funding constraints. For example, since 1991, State has closed 31 posts, most of which were consulates in countries with multiple posts. During the same period, State has also opened 28 posts mostly in the newly independent states of the former Soviet Union. State has a number of options to consider in closing more posts and streamlining post operations. However, such actions are likely to face continued resistance both from within State and externally. 
According to a 1992 State management study, some State officials favor multiple country accreditation in some regions, where an ambassador operating from a regional post would “circuit ride” to several small, neighboring countries. Regional posts would allow consolidation of staff and other resources in one central location, although cost reductions would be offset to some degree by travel and other related expenses. The study stated that dual or triple accreditation can work if (1) the countries are reasonably close geographically or linked by convenient air routes, (2) other agencies have little or no representation in the countries concerned, and (3) only occasional “tending” for diplomatic reasons is needed. State favorably cited British representation in Africa, where ambassadors are accredited to three or four countries each. The U.S. Embassy in Bridgetown, Barbados, has full diplomatic responsibilities for 7 countries and partial diplomatic responsibility for 14 others in the eastern Caribbean. If the foreign affairs budget declines, Embassy Bridgetown could serve as a model for regionalizing American diplomatic presence. To reduce costs, the State Department could expand multiple country accreditation to other regions, such as the Baltic States, Africa, and countries in South America such as Guyana and Suriname. A second option would be to reevaluate the need for consulates. One Deputy Chief of Mission suggested that while consulates served a variety of purposes during the Cold War, expanded media coverage and improved information and telecommunications technology have lessened the need for them. The Deputy Chief of Mission proposed eliminating all consulates to avoid the political pressures that arise when closures of specific consulates are debated. Although many consulates are small or moderately sized, some are bigger and more expensive to operate than major embassies. 
For example, in fiscal year 1995, the consulate in Frankfurt, Germany, cost about $18.8 million—substantially more than the cost of operating the embassy in Ottawa, Canada ($5.9 million), and almost as much as the combined total for the embassy and all six consulates in Canada ($19.6 million). To critically review these and other options, one strategy would be to establish an independent post closure panel like the Defense Base Closure and Realignment Commission—an approach that resulted in decisions to close, realign, or otherwise downsize hundreds of military bases and installations. Although the criteria involved in closing and downsizing overseas diplomatic posts are different, a panel much like the Commission may be useful. A panel could review the governmentwide presence needed at overseas posts as it relates to planned funding priorities. Since many other agencies depend on State’s overseas presence, we believe that such an approach would allow for decision-making based on the need to support both State and non-State activities, consistent with overall U.S. policy interests and priorities as well as available resources. We believe that establishing such a commission would also have the added advantage of mitigating at least some of the pressures and parochial interests that have historically operated to maintain a U.S. overseas presence. The State Department’s traditional policy of maintaining a diplomatic presence in nearly every country—the policy of “universality”—is the primary reason that there are 252 overseas posts, even where U.S. interests are now minimal. Universality is a principle rooted in the Cold War. According to the National Performance Review, during the Cold War, the United States obligated itself to maintain a presence around the world at least equal to that of the Soviet Union. However, despite the end of the Cold War, State remains reluctant to close embassies. 
In 1992, State decided to help finance new posts in the former Soviet Union by closing posts in other geographic regions, but some of State’s geographic bureaus resisted proposing posts for closure. One bureau argued that it had been established in response to congressional interest in the region and that closing posts would be contrary to congressional intent. Another bureau initially refused to propose that any post be closed because it wanted to retain at least some presence in all countries. Despite the expense, the concept of universality remains firmly rooted in State’s policy. State officials say that even in countries with minimal U.S. interests, a U.S. presence is important to provide international leadership, for example, to influence votes in the United Nations. In September 1995, the Secretary of State reaffirmed his commitment to universal representation, noting that it had been invaluable in extending the nuclear nonproliferation treaty earlier in the year and that it was essential when crises erupt in unexpected places like Burundi and Belarus and when American citizens experience trouble abroad. In addition to internal resistance, the State Department has faced outside resistance to closing posts. In December 1992, for example, State reported that congressional scrutiny, special interests, ethnic groups, and other domestic U.S. political constituencies often emerge to oppose the closing of posts. In its 1992 post closing exercise, State originally proposed closing 20 posts in fiscal years 1993 and 1994. After consultation with congressional members on these closures, State announced that 19 of the 20 would close and then later reduced that number to 17. In July 1995, in its latest attempt to close posts, State notified the Congress that it intended to close 19 posts. It later withdrew six of the posts from the list due to external pressures. 
For example, as a result of pressure from a member of Congress and the Drug Enforcement Administration, State removed the consulate in Curacao, Netherlands Antilles, from the list. Some of the other five withdrawn posts—in Apia, Western Samoa; Edinburgh, Scotland; Florence, Italy; and Hermosillo and Matamoros, Mexico—have repeatedly appeared on lists of proposed closures. State had expected to reduce its costs by about $12 million a year beginning in fiscal year 1997, the year following the 19 closures. However, after State withdrew six posts from the list, the projected cost reductions fell to about $9.3 million. In addition to actual post closures and consolidations, another option is to consider ways to reduce post costs. Since a large portion of overseas costs is structured around staffing, large budget reductions will likely mean further cuts in overseas staff and operating costs. However, embassy managers have little control over resources. As a result, the National Performance Review recommended that a pilot project be set up to give chiefs of mission more authority over all U.S. government resources at posts. This recommendation has not yet been implemented. Because of possible budgetary constraints, State may have to further reduce the size of overseas posts—an even more likely possibility if State remains reluctant to abandon its policy of near universal representation. State could achieve significant cost reductions by making large-scale reductions in overseas staffing. The prospect of further reductions in overseas staffing means that an effective system for allocating overseas staff based on agency goals and objectives is essential. In December 1992, State reported that there had long been concerns that the overseas staffing process was not directly linked with the goals and objectives at specific posts. In 1993, among a number of other organizational changes, the Secretary approved a plan to restructure the overseas diplomatic presence. 
As part of this effort, State was to implement a new overseas staffing model, under development since 1990, to provide a rational basis for allocating personnel to overseas posts. However, as of January 1996, State had still not implemented the model. In commenting on a draft of this report, State noted that its Overseas Staffing Board met in June 1996 to begin implementation of a model to rationalize overseas staffing. State’s costs would be much less if State made large-scale reductions in overseas staffing. For example, the cost reductions from eliminating only 10 positions from overseas posts could total at least $1.5 million. State’s Office of Budget and Planning estimates the cost of positions overseas from two different perspectives: (1) the costs of adding a new American position and associated start-up expenses and (2) cost reductions from eliminating a position. To estimate the costs of adding a new U.S. direct-hire position to an existing overseas post, the Office uses a figure of $214,400 for the first full year—an amount that includes the average salary, compensatory and incentive allowances and benefits, and other operating expenses. The Office uses a figure of $156,500 to estimate cost reductions from cutting a U.S. direct-hire position overseas. This amount is $57,900 less than the costs of adding a position because it does not include certain operating costs, such as security, which may not decrease when a position is cut. These figures vary by region and by post. For example, the cost reductions from eliminating half of the 67 U.S. direct-hire employees in India and half of the 100 in France could amount to about $5.2 million and $7.8 million, respectively. Chiefs of mission (generally ambassadors) currently control only a small portion of the total resources devoted to their posts’ operations; headquarters offices control the larger share. 
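The per-position figures cited above can be checked with simple arithmetic. The sketch below (our illustration) uses the Office of Budget and Planning’s averages, which, as noted, vary by region and post:

```python
# Rough check of the per-position cost figures cited in the text
# (averages from State's Office of Budget and Planning; actual
# figures vary by region and by post).
COST_TO_ADD = 214_400      # first full year of a new U.S. direct-hire position
SAVINGS_PER_CUT = 156_500  # cost reduction from eliminating one such position

# Certain operating costs (e.g., security) do not fall when a position is cut:
print(COST_TO_ADD - SAVINGS_PER_CUT)     # 57900

# Eliminating 10 positions ("at least $1.5 million"):
print(10 * SAVINGS_PER_CUT)              # 1565000

# Cutting half of India's 67 and half of France's 100 U.S. direct-hire staff:
print(round(67 / 2 * SAVINGS_PER_CUT))   # 5242750, about $5.2 million
print(round(100 / 2 * SAVINGS_PER_CUT))  # 7825000, about $7.8 million
```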
The National Performance Review report noted that the centralized management of program and staff resources in Washington, D.C., is less efficient than management of resources in the field. For example, some embassy managers told us that they were reluctant to hire FSNs or spouses to serve as secretaries because their salaries and benefits would come from the posts’ budgets. On the other hand, the salaries and benefits of U.S. direct-hire secretaries, which are substantially more, are paid by headquarters offices. Posts have tried to cut costs that are within their control, but cost reductions have been small. For example, the post employees in Paraguay are working longer hours Monday through Thursday and closing at noon on Friday to reduce utility costs—an estimated cost reduction of about $10,000 a year. Similarly, in Malaysia, the post’s air conditioning is turned off at 5:00 p.m. on weekdays and all day on Saturday—a measure estimated to reduce costs between $35,000 and $40,000 a year. The post in The Hague is considering requiring staff to bring their own furniture overseas, initially reducing the post’s costs by an estimated $10,000 a year, with further cost reductions over time. The post now pays to operate a furniture warehouse, but the cost of shipping household goods overseas is paid by headquarters. According to some embassy managers, regulations and long-standing State practices leave little room for innovative approaches to cutting costs. For example, the managers at one post said that current prohibitions governing lease/purchase arrangements do not allow for entrepreneurial approaches to obtaining U.S. government real property. State officials also argue that they lack sufficient authority over other agencies’ resources at posts. A variety of laws, State Department directives, and presidential letters of instruction give chiefs of mission the authority over State’s and other agencies’ staffing decisions. 
Nevertheless, the National Performance Review noted that National Security Decision Directive-38 (NSDD-38), issued June 2, 1982, makes the authority of the chief of mission more theoretical than real in many cases. NSDD-38 allows other agencies to challenge a decision made by the chief of mission. According to State officials, headquarters management often does not support chiefs of mission in bureaucratic battles with agencies that resist their attempts to limit or reduce their staffs. Moreover, when a chief of mission successfully limits an agency’s staff growth at one post, the agency sometimes manages to increase its presence in a nearby country instead. For example, although the Ambassador in Brazil negotiated with the Internal Revenue Service to close its office in Sao Paulo because he, along with Inter-American Affairs Bureau management, agreed that Internal Revenue Service operations could be effectively managed in the United States, the Internal Revenue Service opened a new office in Chile. Our prior work has shown that other agencies’ staffs have increased steadily, while State’s has decreased slightly. Even if chiefs of mission successfully exercise authority over staff levels, they do not have practical day-to-day control over their personnel and fiscal resources. According to the National Performance Review, because of the diffusion of responsibility, authority, and operational prerogatives among the various overseas agencies, chiefs of mission lack the flexibility to shift resources to respond to changing requirements or evolving foreign policy objectives. The Ambassador in Senegal managed to cut 14 positions in Dakar, 13 of which were from other agencies, with estimated overall cost reductions of about $3 million. He believes, nevertheless, that he could achieve greater cost reductions if he had day-to-day operational control over total U.S. government funding and resources at his post. 
A review of the post’s mission and structure, led by the Ambassador, concluded that the explosion of different administrative, financial, and information systems among the various agencies overseas calls for consolidation of support functions, privatization, and other changes that can come about only with expanded chief of mission authority. For example, the agencies in Dakar have four different communication systems. Also, 7 independent financial operations and 38 staff positions are devoted to financial management functions there. The study indicated that there are similar redundancies elsewhere and that streamlining and cost reductions could be achieved in areas such as real estate and facilities management, security, and quality-of-life benefits. (Options for cost reductions in some of these areas are explored in more detail in ch. 5.) To strengthen the ability of chiefs of mission to achieve significant cost reductions, in September 1993 the National Performance Review recommended a pilot project giving U.S. ambassadors at selected posts more authority over all U.S. government resources at their posts, including staff. There may be some potential negative consequences to expanding ambassador authority. For example, a chief of mission and a federal agency may have differences of opinion concerning the types and levels of resources needed to sustain agency activities. State has not initiated action on this recommendation. State officials told us the Department cannot move on this recommendation until the National Performance Review/Vice President’s staff propose the needed legislation to the Congress. However, National Performance Review staff told us that proposing legislation is the agency’s responsibility. State does not want to initiate action because it believes the pilot program should be viewed as an administration initiative to gain the interagency support such a project would require. 
State has also told us that anticipated congressional opposition to such a project has inhibited its submission of a legislative proposal. Consistent with the recommendation, in November 1995, the U.S. Ambassador to Senegal submitted a study concluding that the chiefs of mission need more authority and control over U.S. government programs, personnel, and resources under their management. He asked that the Under Secretary for Management convene the Vice President’s Interagency Council to obtain agreement from its members to designate selected posts to test the study’s conclusions. State management has been reviewing the proposal. Because of fiscal constraints, State may not be able to continue to maintain its vast network of embassies and consulates as they are configured today. The end of the Cold War, coupled with improvements in communications capabilities, provides the opportunity to rethink how the U.S. government’s overseas presence is structured and develop new ways of operating that could increase efficiency and reduce costs. Balanced, thoughtful decisions must be made to ensure that U.S. interests are well served overseas and Americans are protected within the constraints of reduced funding. If future funding levels require the State Department to close and reduce the size of posts, the Congress may wish to establish an independent panel to review State’s proposals in view of (1) the potential financial benefits to the U.S. government, (2) the impact on governmentwide interests and the many agencies that depend on State’s services, and (3) the potential opposition to closing posts. Although the criteria involved in closing and downsizing overseas diplomatic posts are different, a panel much like the Defense Base Closure and Realignment Commission established to review military installations may be useful. Also, if the Congress believes that ambassadors’ authority over U.S. 
government resources should be expanded to reduce spending, it could explore with the executive branch how a pilot program, such as the one recommended by the National Performance Review, could be structured and implemented. State said that the chapter did a good job of describing State’s experiences in proposing posts for closure and outlining some approaches to reducing the number of posts. State asserted that a diplomatic presence in all countries with which the United States has diplomatic relations continues to be important today. State also emphasized that Americans traveling abroad as well as other U.S. agencies depended on the network of embassies and consulates. These factors will need to be carefully considered in determining what overseas presence is essential and affordable. USAID objected to expanding an ambassador’s authority over the programmatic direction and funding of overseas programs, even on an experimental basis. Increasing ambassadors’ authority, on a pilot basis, is one option we offered as a means to reduce costs. We stated that increasing ambassadors’ authority could have negative consequences. The State Department spends two-thirds of its budget on support operations. In response to reduced operating budgets, State is pursuing or studying several options to reduce its support costs. These include (1) recouping the full cost of support provided to other agencies overseas, (2) hiring more U.S. family members to fill overseas staffing positions, (3) increasing employees’ payments for medical services, (4) increasing the length of overseas tours, and (5) reducing its costs for Marine guard detachments at overseas posts by deactivating certain units or shifting its cost to the Department of Defense. We identified several additional options State could implement if it has to adjust to potential budget cuts. 
These options include (1) reducing support staffing levels in headquarters, (2) reviewing employees’ benefits and allowances, (3) expanding the use of foreign nationals in support positions at overseas posts, and (4) disposing of excess and underused properties overseas. Over the long term, State hopes to reduce its operating expenses through business process reengineering and the outsourcing of certain support functions. In both areas, however, only limited progress has been made. In fiscal year 1995, State allotted $1.8 billion, or about 65 percent of its budget, to domestic and overseas support operations. These funds provided support for both Department staff and employees from other federal agencies. Centrally funded operations account for approximately $1.1 billion of the support budget and cover central administration costs and the costs of running several regional centers that provide financial and information management services to overseas posts. The geographic bureaus control the remaining portion of State’s support budget, which is largely used to fund the salaries of those employees in support positions. State is reimbursed for some support costs under the Foreign Affairs Administrative Services (FAAS) system, which attempts to allocate costs among agencies based on workload. Under FAAS, other agencies at overseas posts reimburse State for the incremental cost of providing some support services. In fiscal year 1995, State received an estimated $187 million in FAAS reimbursements from other agencies. These funds were primarily used to pay the salaries of FSNs and personal service contractors hired to meet the administrative needs of other agency staff. According to State’s Chief Financial Officer, however, these payments only partially covered the increased costs of hosting staff from other federal agencies. State officials recognize the need to reduce the Department’s support costs. 
In a March 1994 memorandum, the Under Secretary for Management notified managers throughout the Department that significant and sustained cuts in support costs must be made to allow State to continue to operate with declining budgets. Under the direction of the Under Secretary for Management, State has launched a number of initiatives designed to reduce its support costs. The most significant of these initiatives involves an attempt to recoup certain support costs that were previously unreimbursed under the administrative cost-sharing system used by State and other federal agencies overseas. Under FAAS, agencies at overseas posts reimburse State for the incremental cost of providing some support services. State believes that FAAS reimbursements do not capture the Department's full costs for support to other agencies, and as a result State subsidizes their overseas operations. The President's Management Council (a federal managers' forum established under the National Performance Review) also expressed concerns about the complexity and equity of FAAS, noting that FAAS is primarily a reimbursement mechanism, not a system for rationalizing delivery of overseas administrative services, and that it does not address quality and delivery of services. To address these problems, the President's Management Council developed the International Cooperative Administrative Support Services (ICASS) System. Under ICASS, greater responsibility and authority for managing resources and making decisions about paying for common administrative support services will be delegated to the posts. Posts will be encouraged to explore additional options for obtaining administrative support, rather than relying solely on State. These options could include allowing other agencies to provide support functions, using commercial contractors, or introducing improved technologies. In addition, costs are to be clearly delineated by agency for all post- and Washington-related services.
Since ICASS is designed to better rationalize the delivery of overseas administrative services, it should serve to help identify specific steps that can be taken to streamline overseas operations, reduce the costs of administrative services, and make better use of information systems and communications technology. In fiscal year 1997, State plans to implement ICASS administrative procedures worldwide, but State and other agencies will fund overseas support operations using FAAS funding practices. The ICASS cost recovery system will not go into effect until fiscal year 1998. State views ICASS as a mechanism to equitably spread the full cost of providing overseas support across all agencies with an overseas presence. State estimates that in terms of cost recovery alone, implementing ICASS would allow the Department to spend $108 million less per year for support to other agencies. Redistribution of costs currently covered under FAAS would reduce costs to State by $15 million, and billings for new cost items not currently covered under FAAS would reduce costs to State by an additional $93 million. From a governmentwide perspective, these funding shifts alone do not represent cost reductions. Some officials thought that once agencies had to pay the true cost of placing personnel overseas, many would decide to reduce their overseas presence. Such reductions would reduce overall U.S. spending for overseas support. State is also exploring how to bill agencies for services that were previously provided at no charge. For example, the Congress has tasked State's Diplomatic Telecommunications Service Program Office with devising a system for charging agencies the actual cost of providing overseas communication links. State is also considering billing agencies for foreign building operations, overseas schools, and security officers, as well as some incremental domestic support costs associated with other agency operations overseas.
State has not estimated the potential revenue gain from charging for these and other services. Other agencies support the new system, noting that ICASS represents a cultural change in the way overseas operations are managed. The move to customer-based service standards and principles presents an opportunity for cost reductions from greater efficiency and interagency coordination. Agencies also applaud the move to grant more decision-making authority to the posts as to how they are managed. Major disagreements, however, remain to be resolved over which costs should be covered under ICASS and how these costs should be allocated among participating agencies. Every agency we spoke with expressed concerns about paying higher costs under ICASS and about how those costs would be funded. Some agencies would like State to transfer funds from its budget to them to offset the expected higher costs under ICASS. Agencies also expressed concern that (1) a financial system to track costs and transfers of funds has not been field-tested; (2) the pilot tests have not lasted long enough to evaluate ICASS capabilities; and (3) details for handling different budget cycles, financial systems, and requirements across agencies have not been resolved. State says it will be able to address these concerns in time for worldwide implementation. State Department officials also noted that the Department has to comply with a congressional mandate to obtain full recovery of each department and agency's costs in fiscal year 1997. ICASS has merit and the potential to change the culture of overseas operations. However, ICASS implementation will likely lead to a protracted debate over funding issues within the executive branch. Since ICASS implementation will affect overseas funding and staffing decisions at over 35 U.S. government agencies, it will require significant, high-level support within the executive and legislative branches for full implementation.
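The ICASS cost-recovery estimate above has two components. The following is a minimal arithmetic sketch; the dollar figures come from the report, but the script itself is illustrative only:

```python
# Illustrative check of State's ICASS cost-recovery estimate.
# All dollar figures are taken from the report; this script only verifies
# that the two components sum to the estimated annual reduction.

redistribution_of_faas_costs = 15_000_000  # FAAS costs redistributed to other agencies
new_billable_cost_items = 93_000_000       # cost items not billed at all under FAAS

total_reduction = redistribution_of_faas_costs + new_billable_cost_items
print(f"Estimated annual reduction in State's support costs: ${total_reduction:,}")
# Estimated annual reduction in State's support costs: $108,000,000
```

As the report notes, from a governmentwide perspective this $108 million is a cost shift to other agencies rather than a net reduction; real savings would come only if agencies reduced their overseas presence once they bore the full cost.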
In October 1995, a task force established by the Under Secretary for Management recommended that State increase the hiring of family members to help meet overseas staffing needs. State could cut costs by an estimated $105,000 for each family member hired to fill a position normally reserved for a junior officer. A State official noted that the Department will hire only about half of its normal junior officer class in fiscal year 1996. This will heighten the importance of the family member program in helping to meet overseas staffing needs. In commenting on this report, State noted that not all junior officer positions can, or should, be filled by family members, since these positions provide valuable career training for Foreign Service officers. To provide adequate health care for its employees worldwide, State operates a health clinic in Washington, D.C., and 139 health units throughout the world, providing many free services to its employees and employees of other foreign affairs agencies. State's main health clinic has diagnostic and laboratory facilities, including unique capabilities in the area of tropical and parasitic diseases, and provides the physical examinations required before staff and family members are posted overseas. State also has a clinic at the National Foreign Affairs Training Center. The overseas programs provide free occupational health-related services (such as first aid for minor on-the-job injuries) and primary medical care. The Inspector General reported that there were about 79 Foreign Service health-care professionals, supplemented by local-hire nurses and contract physicians, working in overseas posts as of January 1994. According to a report by State's Inspector General, State spent about $19 million in fiscal year 1995 to provide uncompensated medical care to its employees.
The Inspector General recommended that State shift up to $13 million of these costs to private health insurers responsible for reimbursing employees for the costs of these services. The Inspector General recommended that the remaining $6 million in uncompensated care not be transferred. The Inspector General proposed that the Department absorb this cost because it represents patient copayments and deductibles that would impose an administrative burden to track and recoup. Also, different living conditions among posts could result in varying illness rates, and employees should not be penalized for working in less desirable locations. A provision in the fiscal year 1996 authorization bill would enable State to collect reimbursements for medical services rendered. A senior State official noted that additional cost reductions could be achieved if the Department contracted for some services currently provided by State headquarters medical clinics and if routine medical services, such as physical examinations, were tailored to an employee's or dependent's age, sex, and known risk factors. The same official estimated that these actions could reduce costs by about 30 percent of the annual $4.5 million cost of clinical services, or about $1.4 million. Overseas, State ensures that its employees receive care by authorizing and approving payment for inpatient medical care and related outpatient treatment of eligible overseas U.S. citizen employees and their dependents. It then relies on its employees to file insurance claims and forward reimbursements to State. In August 1995, State's Office of Medical Services and the Bureau of Finance and Management Policy began tracking whether its employees filed such claims and whether reimbursements were turned over to the Department. Posts issue authorizations for services and report to the Office of Medical Services the obligation number, insurance carrier, amount expended, and information on the service provider.
The Office of Medical Services issues fund cites for all hospitalizations overseas and tracks the recovery of medical insurance benefits. As of January 1996, of the $1.5 million expended for overseas hospitalizations, State had collected about $1.2 million (77 percent) and expected the final recovery rate to exceed 80 percent. In 1996, employees will be held liable for the cost of care, and debts will be turned over to the Bureau of Finance and Management Policy for collection through State's debt collection procedures. While embassies have the same guidelines for pursuing reimbursements for medical services, State does not track expenses for other federal employees who rely on it for medical services. Therefore, it does not know whether other agencies' employees reimburse embassies. In August 1992, we recommended that State begin tracking costs for other agencies and whether these costs were reimbursed by private insurers. A State medical official agreed that tracking could be done but estimated that the tracking workload would increase by 80 percent. State has not attempted to quantify the costs and benefits of tracking payments from other agencies' overseas staff, but it appears likely that State would achieve cost reductions similar to those achieved by tracking reimbursements from State employees. State estimated that it spent $68.7 million in fiscal year 1995 for travel for post assignments. A 1993 State study of its policy on tours of duty projected cost reductions of about $2 million annually by extending tours to 3 years for all overseas posts (tours at hardship posts are currently 2 years). Longer tours at all posts could further reduce costs. With longer tours, State could also increase productivity and effectiveness because staff need several months to adjust to local cultures, languages, and environments and to master a new job and about 9 months to go through the reassignment process at the end of the tour.
A 2-year tour can therefore leave relatively little time for peak performance. Although costs could be reduced by extending tours of duty, State has concluded in the past that the negative impact on employees' morale and State's assignment process did not permit extending tours of duty. However, State said it is now reexamining the issue in light of the current budget situation. In fiscal year 1994, State paid its share, $22 million, for 125 Marine guard detachments at 112 embassies, 11 consulates general, one U.S. mission, and one U.S. interests section. The guards are jointly funded by the Marine Corps and the State Department and primarily safeguard classified documents. As recommended by the National Performance Review and to reduce costs, Marine guard detachments have been deactivated at some posts where the level of classified operations does not warrant a 24-hour cleared American presence. The State Department and the intelligence community are reviewing whether some detachments could be deactivated or transferred to other locations. In lieu of Marine guards, State may have to expend funds to increase security safeguards, for example, by providing better real-time surveillance equipment at the chancery. But, on balance, deactivation of guard detachments could reduce costs where security conditions permit. For fiscal year 1997, the executive branch has proposed that funding for the Marine guard detachments shift from State to the Department of Defense. If this occurs, State will no longer spend its funds on the Marine guard detachments. Other options State may wish to consider in the future include (1) reducing support staff levels in Washington, (2) reviewing employee benefits and allowances, (3) expanding the use of FSNs in support positions at overseas posts, and (4) selling excess or underused overseas property. According to a number of management reports, there may be opportunities to reduce staffing in headquarters support offices.
A senior State official in Personnel noted the Department has been slow to initiate a detailed review of support staffing levels in Washington. However, a preliminary assessment of State’s domestic staffing, issued in February 1995, noted a basic concern that support functions account for almost 60 percent of State’s domestic workforce. Similar concerns over the proportion of domestic staff devoted to support functions were raised in earlier management studies at State. The State 2000 Management Task Force report issued in December 1992 noted that about 60 percent of headquarters salaries and expense resources are expended for centrally managed support staff and that this level was much too high. According to a 1993 State report, some estimates project that Department staffing in Washington exceeds optimal levels by 15 to 20 percent in view of the low productivity of many organizations and personnel. The report recommended that a baseline review of headquarters support staffing be conducted and noted that potential long-term cost reductions from such a study could be substantial, as each 5-percent reduction in personnel would reduce the Department’s costs by $46 million. If substantial reductions in appropriations occur, State may have to review the benefits and allowances it provides to its employees, even though this could involve difficult and painful decisions. For example, costs could be reduced by millions of dollars if employees were required to pay for a portion of their overseas housing costs. State noted that because benefits and allowances are payable to all civilian government employees working overseas, any changes would require interagency support and may require legislative action. State officials also noted that a consideration would be the potential negative impact on employee morale and the impact on State’s ability to attract and retain staff to effectively carry out its mission. We agree. 
Overseas allowances and benefits are authorized by statute for U.S. federal civilian employees stationed in foreign areas. There are two purposes for granting allowances and benefits to U.S. staff employed overseas. First, they can serve as reimbursement for extraordinary living costs to prevent employees from being financially penalized for working overseas. Second, they can serve as recruitment and retention incentives. Benefits and allowances provided to Foreign Service employees working overseas include free housing and utilities, hardship post payments, danger pay, cost-of-living allowances, education allowances, and separate maintenance allowances. These benefits and allowances offer one potential area for cost reduction efforts. According to the Director of State's Office of Allowances, the only comprehensive and recent review of Foreign Service benefits and allowances was conducted in 1994 by a private firm under contract to USAID. In a January 1995 memorandum to the Director General of the Foreign Service, USAID's Deputy Assistant Administrator for Human Resources noted that this study found that (1) the Foreign Service compensation package was somewhat below that received in the private sector; (2) the Foreign Service benefits package was comparable to the private sector, except for retirement, where the government is more generous; and (3) Foreign Service allowances were comparable to the private sector, except for housing allowances, where the private sector generally deducted a home-country norm from the housing allowance so that employees do not get free housing overseas. USAID's consultant concluded that by paying only for excess housing costs, private industry, without any apparent disruption to programs, avoided substantial costs that the government pays. The consultant further concluded that USAID employees' benefit of receiving essentially free housing and utilities amounts to about 12 to 15 percent of their base pay.
We also noted that Canadian Foreign Service officers are expected to contribute a portion of their overseas housing costs based on Ottawa housing costs and on their salary levels and household sizes. Likewise, Australian Foreign Service officers are expected to contribute toward their overseas housing costs based on a published schedule. The Director of State's Office of Allowances noted that other countries, such as Japan, France, and Germany, also have a rent-share arrangement with their employees. Like the United States, however, other countries, such as Great Britain and Ireland, fully reimburse overseas housing costs, up to a set limit. The administrative section at overseas posts is generally headed by an administrative counselor or officer who is assisted by several Foreign Service officers who manage administrative subunits and oversee a staff composed mainly of FSNs. In response to budget pressures, State could increase its use of FSNs to replace Foreign Service specialists working in senior support positions. Employment of FSNs is less costly than employment of Foreign Service officers because FSNs do not receive the benefits and allowances payable to Foreign Service employees and generally are paid lower salaries than Foreign Service employees. In addition to the cost reductions that would result from the increased use of FSNs, this practice would have two other advantages. First, State would not have to periodically rotate hundreds of Foreign Service specialists between headquarters and overseas posts. Second, an FSN-driven support structure would eliminate the significant learning curve Foreign Service specialists face at a new post. State is reluctant to expand the use of FSNs because of security and fiduciary concerns. These concerns are valid and, considered apart from oversight costs, would be compelling.
However, cost-benefit and risk management principles suggest that some level of increased risk may be acceptable in return for significantly reduced oversight costs. Given potential budget reductions, State would have to carefully weigh the potential cost reductions resulting from increased use of FSNs against the perceived risks. Although security and fiduciary matters are a serious concern, the following observations show that the increased use of FSNs may be feasible at some posts and that the associated risks may be manageable or could be mitigated. In Malaysia, the embassies of Australia, Canada, and Great Britain employ foreign nationals to manage administrative sections under the overall supervision of a diplomatic officer. In comparison, the U.S. Embassy, with one exception, used American Foreign Service officers to head the administrative subunits. At some U.S. posts, FSNs are already employed to head some administrative sections, such as personnel, working under the general supervision of U.S. administrative counselors or officers. FSNs do not have access to sensitive records and information because embassies have secure areas they are not permitted to enter. The number of sensitive records is presumably small relative to the volume of routine records handled in budget and fiscal, general services, and personnel offices. Posts generally have two distinct computer systems, one classified and one unclassified. FSNs do not have access to the separate classified system. The Inspector General's procedures for periodic inspections could be modified to provide additional oversight of FSNs who handle U.S. funds. Also, the use of standard internal control procedures, such as the division of duties and mandatory vacations, could help to ensure that funds are managed responsibly. Nonetheless, we agree that certain Foreign Service specialist positions would not be appropriate for an FSN, given the classified or highly sensitive nature of the position.
These positions include communications specialists, regional security officers, regional medical officers, foreign building operations officers, technical staff working with classified communications systems, and Foreign Service secretaries assigned to work with classified information. FSNs could replace the more costly existing staff as budget and fiscal officers, general services officers, information management specialists working with unclassified computer systems, personnel officers, regional medical technologists, and Foreign Service nurses. We estimate that State currently has about 500 Foreign Service specialists in these six job categories in its overseas posts. This figure does not include rotational positions in Washington. If State replaced these overseas specialist positions with FSNs, the Department would avoid approximately $53 million annually in allowance and benefit costs. The State Department has over $10 billion in real estate at its overseas posts. Some of this real estate is excess or underused; State had identified properties for potential sale valued at $467 million as of October 1995. Moreover, as we have reported, State also has millions of dollars in potentially excess real estate at closed posts that is not included in this amount and could be sold. The Inspector General, individual embassies, and State's Office of Foreign Building Operations have also identified some of these excess properties. However, because of internal and external pressures, State has been slow in disposing of these properties. Selling these properties would not only generate revenues but could also significantly reduce maintenance costs. In our most recent report, we recommended improvements in the State Department's procedures to identify and sell excess real estate.
For example, we recommended the establishment of an independent panel to make recommendations regarding the sale of unneeded real estate, to ensure that the taxpayers’ interests and the financial needs of the State Department are considered. State’s Under Secretary for Management has called for and supported individual reengineering efforts throughout the Department. Some of State’s managers have launched reengineering and reinvention exercises in an attempt to improve services and lessen costs. Examples of such efforts include State’s attempts to reengineer the Foreign Service transfer process, major personnel management reforms in such areas as pay broadbanding for FSNs, and an ongoing reengineering of the logistics function. Under State’s Strategic Management Initiative, the Department attempted to gauge the advisability of outsourcing a limited number of support functions. These are positive signs of a desire for long-term reform of how support services are procured and managed at State. However, they represent just a beginning, since several major business functions remain to be reengineered or analyzed in terms of outsourcing potential. State’s use of an outdated, proprietary operating system that is largely tied to computer purchases made over 15 years ago complicates current reengineering efforts. The poor condition of the information technology platform has been the subject of repeated discussions both within and outside the Department. State officials are frank in their assessments of the numerous deficiencies in current systems and acknowledge the critical need for a system that offers data users the flexibility and tools they need to accomplish the Department’s work. State launched a modernization program in 1992; according to State officials, full implementation of the program will be delayed by funding constraints. Information management resource problems have made it difficult to effectively reengineer processes. 
For example, State selected the Foreign Service's transfer process as a reengineering test case because the process had 117 steps, 23 forms, and a number of dispersed operations. Over 2 months, steps in the process were charted and brainstormed, and best practices in the private and public sectors were reviewed. The final report recommended a streamlined system that would enhance customer satisfaction and reduce costs. The team concluded that by improving computer interconnectivity and empowering employees, the number of full-time employees involved in the transfer process could be reduced from 24 to perhaps 4 to 5 permanent positions, supplemented by private contractor assistance. However, a year after completing the work, two team members told us that none of these positions had been eliminated because State does not have the information management tools to make the system work as designed. In a December 1994 report, we compared State's practices in applying information technology to improve performance with the best practices of several leading private and public organizations and found that the Department was deficient in several areas. Our report noted that (1) State had no integrated information management plan that identified goals and objectives and linked information resource management projects to them; (2) the information resource management and budgeting processes at State were not closely linked; and (3) State had not appointed a chief information officer to serve as a bridge between top management, line managers, and information support professionals. The Under Secretary for Management has begun to take steps to improve State's information resource management capabilities, including forming an information technology review board of senior department managers. In May 1995, the Under Secretary for Management established an acting chief information officer position.
In May 1996, State appointed a permanent chief information officer to comply with the mandate of the 1996 Information Technology Reform Act. Finally, State is currently working on its latest 5-year strategic information resource management plan (1996-2000), which should be issued in 1996. The National Performance Review encouraged federal agencies to identify their core missions and unique competencies and to outsource functions that can be provided more effectively and at a lower cost by other agencies or private companies. State has not done this in any comprehensive manner, even though the Strategic Management Initiative called for outsourcing studies of telecommunication services offered through the Diplomatic Telecommunications Service Program Office, the payroll function, vendor and pension payments, the Foreign Service medical examination program, and the training courses offered at the National Foreign Affairs Training Center. An official from the Diplomatic Telecommunications Service Program Office told us that the office was never formally tasked with conducting this study and thus did not respond to the recommendation. The Office of Medical Services also did not prepare an outsourcing study, although an official from that office said a study will soon be initiated. Outsourcing studies of State’s payroll operations and vendor and pension payments processes were contracted to an outside consulting firm. The results of these studies should be available shortly. The only published outsourcing studies available at the time of our review involved a limited number of courses offered by the National Foreign Affairs Training Center. In each case, with certain minor modifications, the internal review team concluded that the Center’s offerings were cost-effective. State has not yet considered the numerous options for outsourcing many of its noncore activities, particularly data processing and administrative activities. 
Outsourcing options can be found in both the private and public sectors. Private contractors are already providing a wide range of data processing and administrative services, including payroll, personnel management, and financial management services, and have contracts with several federal agencies. According to one private firm, its worldwide network of data processing centers could meet State's domestic and overseas data processing demands with relative ease. Officials from the same company cited several examples of outsourcing contracts with federal agencies, including a 10-year contract with the Federal Aviation Administration to manage its computer resources nucleus project and a 5-year contract with the Immigration and Naturalization Service to manage its Information Technology Partnership Program. Also, another country's internal revenue service has contracted with this firm to assume all of its information technology responsibilities, including computer systems, systems development and integration, systems maintenance, hardware and software procurement, hardware maintenance, hardware and software installation, and information technology project management. Transferring with these responsibilities are three development centers, two accounts offices, nine data centers, and 2,100 employees who are now the contractor's employees. Another outsourcing option available to State is the work of the Defense Logistics Agency Administrative Support Center, which has been designated a reinvention laboratory under the National Performance Review. The Center provides a wide array of administrative services to Defense and other government agencies, state and city governments, and private firms. Customers pay only for actual services provided. According to the Center's latest survey of selected services it offers, its costs were significantly lower than prices in the private sector.
The Director explained that the Center does not operate on a profit margin and can leverage Defense’s worldwide infrastructure to offer services at the lowest possible cost. According to the Director, his staff periodically conduct surveys of best practices in private industry to stay abreast of the latest management techniques. The successful implementation of some or all of the options discussed in this chapter could substantially reduce State’s support costs. Congressional approval and negotiations with other agencies may be required to implement some of these options. In addition, they could displace hundreds of current employees and eliminate some employee benefits, with potential adverse consequences for employee morale and productivity. However, they represent the types of changes that would enable State to operate with fewer resources. State expressed concerns regarding a number of options discussed in this chapter, pointing out their drawbacks. Regarding increased use of FSNs in certain positions, State emphasized that American personnel are required to minimize fiduciary concerns. We did not recommend increased use of FSNs, but we believe this option should be considered. We share some of State’s concerns, but we believe that risk management principles suggest that some level of increased risk may be acceptable in return for significantly reduced costs. Given potential budget reductions, State would have to carefully weigh the potential savings resulting from increased use of FSNs, as well as implementation of other cost reduction measures, against the perceived risks. State said the chapter clearly described ICASS, which State believes is its most important undertaking involving overseas support costs.
USAID and USTR emphasized that increasing reimbursements for administrative support at overseas posts may require additional funding for agencies other than State to cover certain costs they may not have budgeted for in the past.

Pursuant to a congressional request, GAO reviewed the Department of State's reform and cost-cutting efforts and identified options that would enable State to adjust to reduced budgets. GAO found that: (1) State does not have a comprehensive strategy to restructure and downsize its operations to meet potential funding reductions; (2) State has reduced its staff and implemented some cost reduction measures, but it has been reluctant or unable to reduce its overseas presence and the scope of its activities or change its business practices to accommodate proposed budget reductions; (3) State believes that substantial downsizing would severely hamper its achievement of U.S. foreign policy goals and irreparably harm U.S. interests; (4) because of expected governmentwide budget constraints and congressional and Office of Management and Budget (OMB) proposals for decreases in State funding, State is unlikely to receive the level of funding needed to maintain its existing activity; (5) State could reduce costs by reducing duplication among its bureaus and with outside agencies with which it shares program responsibility, streamlining or eliminating some informational reports, eliminating or consolidating certain personnel positions, and recovering some service costs from users; (6) State has the opportunity to significantly reduce costs by closing or reducing the size of overseas posts, which consume about 70 percent of State's budget; (7) State could reduce support costs, which constitute two-thirds of its budget, by recouping support costs from other agencies, hiring more U.S.
family members for overseas positions, adjusting employee benefits and allowances, increasing tour lengths, reducing costs for Marine guards at overseas posts, reducing headquarters support staff, using more foreign nationals in support positions, disposing of unneeded overseas real estate, and reengineering and outsourcing administrative functions; and (8) expanding the authority of chiefs of mission over all U.S. government fiscal and staffing resources at overseas posts would be one way to accomplish cost reductions.
The NSLP is designed to provide school children with nutritionally balanced and affordable lunches to safeguard their health and well-being. The program, administered by the U.S. Department of Agriculture’s Food and Consumer Service, is available in all 50 states, the District of Columbia, and the U.S. territories. The schools participating in the NSLP receive a cash reimbursement for each lunch served. In turn, the schools must serve lunches that meet federal nutritional requirements and offer lunches free or at a reduced price to children from families whose income falls at or below certain levels. For school year 1995-96, the schools were reimbursed $1.795 for each free lunch, $1.395 for each reduced-price lunch, and $0.1725 for each full-price lunch. Furthermore, for each lunch served, the schools receive commodity foods—14.25 cents’ worth in school year 1995-96. The Department provides a billion pounds of commodity foods annually to states for use in the NSLP. States select commodity foods from a list of more than 60 different kinds of food, including fresh, canned, and frozen fruits and vegetables; meats; fruit juices; vegetable shortening and oil; and flour and other grain products. The variety of commodities depends on the quantities available and market prices. According to the Department, federal commodities account for about 20 percent of the food in the school lunch program. Through school year 1995-96, the schools were required to offer lunches that met a “meal pattern” established by the Department. The meal pattern specified that a lunch must include five items—a serving of meat or meat alternate; two or more servings of vegetables and/or fruits; a serving of bread or bread alternate; and a serving of milk. The meal pattern was designed to provide nutrients sufficient to approximate one-third of the National Academy of Sciences’ Recommended Dietary Allowances. 
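For a given day's lunch counts, the school year 1995-96 rates quoted above imply a straightforward per-lunch reimbursement calculation. The following minimal Python sketch combines the cash rates with the 14.25-cent commodity entitlement; the function name and the sample counts are illustrative, not drawn from the report.

```python
# School year 1995-96 federal rates per lunch, from the text above.
CASH_RATES = {"free": 1.795, "reduced": 1.395, "full": 0.1725}
COMMODITY_RATE = 0.1425  # commodity entitlement per lunch served, any category

def daily_reimbursement(lunches_served):
    """Total federal support (cash plus commodity value) for one day's lunches.

    lunches_served maps "free"/"reduced"/"full" to the number served.
    """
    cash = sum(CASH_RATES[category] * count
               for category, count in lunches_served.items())
    commodities = COMMODITY_RATE * sum(lunches_served.values())
    return round(cash + commodities, 2)

# Example: a school serving 200 free, 30 reduced-price, and 170 full-price
# lunches would receive about $487 in combined cash and commodity support.
print(daily_reimbursement({"free": 200, "reduced": 30, "full": 170}))
```

Note that most of the federal support flows to free and reduced-price lunches; the full-price cash rate is only about a tenth of the free rate.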
Effective school year 1996-97, the schools participating in the program will be required to offer lunches that meet the Dietary Guidelines for Americans. Among other things, these guidelines, which represent the official nutritional policy of the U.S. government, recommend diets that are low in fat, saturated fat, and cholesterol. In meeting these guidelines, the schools may use any reasonable approach, within guidelines established by the Secretary of Agriculture, including using the school meal pattern that was in effect for the 1994-95 school year. All students attending the schools that participate in the NSLP are eligible to receive an NSLP lunch. In fiscal year 1995, about 58 percent of the eligible students participated in the program. About 49 percent of the participating students received free lunches, 7 percent received reduced-price lunches, and 44 percent received full-price lunches. The students who do not participate in the program include those who bring lunch from home, eat off-campus, buy lunch a la carte at school or from a school canteen or vending machine, or do not eat at all. Concerns about plate waste prompted the introduction into the NSLP of the offer versus serve (OVS) option more than a decade ago. Under this option, a school must offer all five food items in the NSLP meal pattern, but a student may decline one or two of them. In a school that does not use this option, a student must take all five items. All high schools must use the OVS option, and middle and elementary schools may offer it at the discretion of local officials. According to a 1993 Department report, 71 percent of the elementary schools and 90 percent of the middle schools use the OVS option. Cafeteria managers varied in the extent to which they perceived plate waste as a problem in their school during the 1995-96 school year. Ninety percent of the managers provided an opinion on plate waste. The majority of those with an opinion did not perceive it as a problem. 
However, 23 percent of those with an opinion reported that it was at least a moderate problem. Figure 1 presents cafeteria managers’ perceptions of the extent to which plate waste was a problem in their school. By school level, we found some variation in cafeteria managers’ perceptions of plate waste. As figure 2 shows, managers at elementary schools were more likely than those at middle or high schools to report that plate waste from school lunches was at least a moderate problem during the 1995-96 school year. By school location and by schools serving different proportions of free and reduced-price lunches, we found no statistically significant differences in cafeteria managers’ perceptions of plate waste. We also considered the extent to which cafeteria managers perceived plate waste as a problem by asking them to compare the amount of waste from school lunches with the amount of waste from packed lunches from home. Sixty-three percent of the managers were able to make this comparison. Of these, 79 percent believed that the amount from school lunches was less than or the same as the amount from packed lunches. (See fig. 3.) Cafeteria managers reported large variations in the amount of waste from eight different types of food that may be included as part of the school lunch. For each food type, managers reported how much of the portions served, on average, was wasted. On the basis of the managers’ responses, we estimate that the average amount wasted ranged from a high of 42 percent for cooked vegetables to a low of 11 percent for milk. Figure 4 shows our estimate of the average percent of waste for each of the eight food types. By school level, the amount of waste varied for all food types except canned or processed fruits. In general, the waste reported for each food type was highest in the elementary schools and lowest in the high schools. (See fig. 5.) 
By school location, the amount of waste varied for three food types—cooked vegetables, raw vegetables/salads, and milk. For example, for each of these food types, the urban schools reported more waste than the rural schools. (See fig. 6.) By schools serving different proportions of free and reduced-price lunches, the average amount of waste varied for four food types—raw vegetables/salads, fresh fruits, canned or processed fruits, and milk. (See fig. 7.) When responding to a list of possible reasons for plate waste at their school, the cafeteria managers most frequently selected a nonfood reason—“student attention is more on recess, free time or socializing than eating.” When responding to a list of possible ways to reduce plate waste, the managers most often viewed actions that would involve students, such as letting students select only what they want, as more likely to reduce plate waste than other actions. Seventy-eight percent of the cafeteria managers cited a nonfood reason—students’ attention on recess, free time, or socializing—when asked why students at their school did not eat all of their school lunch. Figure 8 shows the percent of managers who identified each of the nine reasons listed in our survey as either a minor, moderate, or major reason for plate waste in their school. By school level, the percent of managers selecting a reason for plate waste varied for four of the reasons provided in our survey. (See fig. 9.) For example, elementary school managers were much more likely than middle or high school managers to report “amount served is too much for age or gender” as a reason for plate waste. By school location, the percent of cafeteria managers selecting a reason for plate waste varied for four of the reasons provided in our survey. (See fig. 10.) For example, managers at urban schools were more likely than those at suburban and rural schools to report that students “do not like that food” as a reason for plate waste. 
By schools serving different proportions of free and reduced-price lunches, cafeteria managers’ perceptions differed somewhat for three of the reasons listed in our survey. For example, managers in schools serving under 30 percent free and reduced-price lunches were more likely than managers in schools serving over 70 percent free and reduced-price lunches to cite “take more than they can eat” as a reason for plate waste. (See fig. 11.) In addition to asking cafeteria managers to respond to a list of possible reasons for plate waste, we asked them to identify the effect on plate waste of the NSLP’s requirements for types of food and serving sizes that were in effect at the time of our survey. The managers believed that, overall, the minimum federal serving sizes provided about the right amount of food for the students at their school. (See fig. 12.) Furthermore, for each of four minimum serving size requirements that were in effect at the time of our survey, most cafeteria managers reported that each requirement did not result in more plate waste at their school. However, two requirements—serving at least three-fourths of a cup of fruits/vegetables daily and serving at least eight servings of breads/grains weekly—were viewed as resulting in more plate waste by about one-third and one-quarter of the managers, respectively. Figure 13 shows the percent of cafeteria managers who reported that the minimum serving sizes for the four requirements resulted in more waste. In addition, we asked cafeteria managers about the potential effect on plate waste of increasing the minimum serving sizes for fruits/vegetables and breads/grains. For fruits/vegetables, 62 percent of the middle and high school managers said that increasing the amount from three-fourths of a cup to one cup daily would cause more waste.
For breads/grains, 53 percent of the middle and high school managers said that increasing the number of weekly servings from 8 to 15 would increase plate waste; and 69 percent of the elementary school managers reported that increasing the number of servings of breads/grains from 8 to 12 weekly would cause more plate waste. Of 11 possible actions listed in the survey to reduce plate waste, cafeteria managers viewed actions involving students in the choice of food, such as letting students select only what they want and seeking students’ opinions regularly about menus, as more likely to reduce plate waste than other actions. (See fig. 14.) By school level, there was some variation in the views of cafeteria managers for two of the actions to reduce plate waste listed in our survey. (See fig. 15.) For example, elementary school managers were more likely than high school managers to identify “reduce federally required portion sizes” as an action that would cause a little or a lot less plate waste. By school location, there was some variation in the views of cafeteria managers for four of the actions listed in our survey. For example, managers in urban schools were more likely than managers in rural schools to cite “seek student opinions regularly about menus” as an action that would cause less plate waste. (See fig. 16.) By schools serving different proportions of free and reduced-price lunches, there was no variation in cafeteria managers’ views on ways to reduce plate waste. Managers in each group—schools serving under 30 percent free and reduced-price lunches, schools serving between 30 and 70 percent free and reduced-price lunches, and schools serving over 70 percent free and reduced-price lunches—had similar opinions about the general level of effectiveness for the 11 potential actions to reduce waste that were listed in the survey. In addition, most managers reported that two approaches already in place in most schools result in less plate waste. 
Eighty percent of the managers said that the OVS option results in less waste, and 55 percent said that offering more than one main dish or entree daily results in less waste. Most cafeteria managers reported satisfaction with various aspects of the federal commodities received at their school for use in school lunches. The managers’ level of satisfaction was highest for the taste and packaging of the commodities and lowest for the variety of foods available and the quantity of individual commodities. Figure 17 shows the percent of cafeteria managers who were satisfied, and the percent who were dissatisfied, with the federal commodities provided for school lunches. Over 70 percent of the managers reported that they wanted all or almost all of the different commodities received. However, about 10 percent reported that they would prefer not to receive about half or more of the different commodities they were sent. (See fig. 18.) We provided copies of a draft of this report to the Department’s Food and Consumer Service for its review and comment. We met with agency officials, including the Deputy Administrator, Special Nutrition Programs. Agency officials questioned why our survey results generalize to 80 percent, rather than 100 percent, of all the public schools that participated in the NSLP in the 1993-94 school year. Relatedly, agency officials asked if we had analyzed the characteristics of nonrespondents. We generalized our results to 80 percent of the public schools because we used a conservative statistical approach that required us to generalize our results only to the overall level reflected by our response rate, in this case 80 percent. We did not analyze the characteristics of nonrespondents because we believe that such an analysis alone would not allow us to generalize our survey results to 100 percent of the public schools that participated in the NSLP in the 1993-94 school year. 
To generalize to 100 percent of the public schools, we believe it would also be necessary to analyze information about perceptions of plate waste from a subsample of cafeteria managers who did not respond to our survey. This analysis would allow us to assess whether the opinions of these managers differed significantly from those of the managers who completed and returned a survey. Further, the Department commented that our survey’s list of possible reasons for plate waste did not permit cafeteria managers to select other possible reasons, including meal quality and palatability. We agree that these reasons may affect plate waste. However, we included two related reasons for plate waste—“they do not like that food” and “they do not like the way the food looks or tastes.” We believe these two reasons address, in part, meal quality and palatability. In addition, respondents had the opportunity to identify other reasons contributing to plate waste. Less than 5 percent of the respondents specified other reasons that they considered to be at least a minor reason for plate waste. The Department also commented that we did not solicit the views of children or their parents/caretakers. We agree that the views of cafeteria managers present only one perspective on the extent of, and reasons for, plate waste and that valuable information could be obtained from a comprehensive, nationwide study of the views of children and their parents/caretakers. The time and resources associated with such a study could be substantial. In addition, the Department commented that our study did not address whether there was more or less plate waste in the NSLP than in other lunch settings—such as at home or in restaurants. 
While identifying the amount of waste in different lunch settings was not an objective of our study, our survey asked cafeteria managers if they perceived the amount of waste from school lunches as more, less, or about the same as the amount of waste from lunches brought from home. Our survey results found that, of those cafeteria managers who were able to assess differences in the amount of plate waste, 79 percent believed that the amount from school lunches was less than or the same as the amount from lunches brought from home. Finally, agency officials provided some technical and clarifying comments that we incorporated into the report as appropriate. To develop the questions used in our survey of cafeteria managers, we reviewed the NSLP’s regulations and research addressing the issue of waste in the program. Furthermore, we spoke with representatives from school food authorities, the American School Food Service Association, and the Department’s Food and Consumer Service. We refined our questions by pretesting our survey with the cafeteria managers of 18 schools in Illinois, Pennsylvania, South Carolina, Texas, Virginia, West Virginia, and the District of Columbia. We mailed our survey to a random sample of 2,450 cafeteria managers in public schools in the 50 states and the District of Columbia. We selected our sample from the 87,100 schools listed in the National Center for Education Statistics’ Common Core of Data Public School Universe, 1993-94, the latest year for which a comprehensive list of public schools was available. This document did not identify whether a school participated in the NSLP. Eighty percent (1,967) of those surveyed returned a survey. Of these, about 4 percent (80) reported that their school did not participate in the NSLP, while the remainder (1,887) reported that their school participated in the program. 
Our survey results generalize to 65,743 of the 81,911 public schools nationwide that participated in the NSLP in the 1993-94 school year. This number may vary for individual questions, depending on the response rate to the question. As with all sample surveys, our results contain sampling error—potential error that arises from not collecting data from the cafeteria managers at all schools. Unless otherwise indicated in appendix I, the sampling error for the survey results presented in this report is plus or minus no more than 5 percentage points. Sampling error must be considered when interpreting differences between subgroups, such as urban and rural schools. All differences we report are statistically significant unless otherwise noted. Statistical significance means that the difference we observed between subgroups is too large to be attributed to chance. We conducted our review from July 1995 through June 1996 in accordance with generally accepted government auditing standards. We did not, however, independently verify the accuracy of the cafeteria managers’ responses to our survey. Appendix II contains a more detailed description of our survey methodology. Appendix III contains a copy of our survey and summarizes the responses. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days from the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees, interested Members of Congress, the Secretary of Agriculture, and other interested parties. We will also make copies available to others on request. If you have any questions, please call me at (202) 512-5138. Major contributors to this report are listed in appendix IV.

Appendix I covers the sampling errors for the following survey estimates:
- Middle and high school cafeteria managers reporting students “do not like that food” as a reason for plate waste at their school (fig. 9)
- Middle and high school cafeteria managers reporting students “take more than they can eat” as a reason for plate waste at their school (fig. 9)
- Middle and high school cafeteria managers reporting “amount served is too much for age or gender” as a reason for plate waste at their school (fig. 9)
- Urban and suburban school cafeteria managers reporting “not hungry” as a reason for plate waste at their school (fig. 10)
- Urban school cafeteria managers reporting “take more than they can eat” as a reason for plate waste at their school (fig. 10)
- Cafeteria managers at schools serving over 70 percent free and reduced-price lunches reporting students “take more than they can eat” as a reason for plate waste at their school (fig. 11)
- Middle and high school cafeteria managers reporting “reduce federally required portion sizes” as a way to reduce plate waste (fig. 15)
- Middle and high school cafeteria managers reporting “replace federal commodities with cash” as a way to reduce plate waste (fig. 15)
- Urban and suburban school cafeteria managers reporting “replace federal commodities with cash” as a way to reduce plate waste (fig. 16)

The Chairman of the House Committee on Economic and Educational Opportunities asked us to study plate waste in the National School Lunch Program (NSLP). Specifically, we agreed to survey cafeteria managers in public schools nationwide that participate in the NSLP to obtain their perceptions on the (1) extent to which plate waste is a problem, (2) amount of plate waste by type of food, and (3) reasons for and ways to reduce plate waste. We agreed to determine whether the perceptions of managers differed by their school’s level (elementary, middle, or high school), their school’s location (urban, suburban, or rural), and the proportion of their school’s lunches served free and at a reduced price (under 30 percent free and reduced price, 30 to 70 percent free and reduced price, or over 70 percent free and reduced price). In addition, we agreed to ask cafeteria managers about their level of satisfaction with federal commodities used in the program. To develop the questions used in our survey of cafeteria managers, we reviewed the NSLP’s regulations and research addressing the issue of waste in the program. Furthermore, we spoke with representatives from school food authorities, the American School Food Service Association, and the U.S. Department of Agriculture’s Food and Consumer Service. We refined our questions by pretesting our survey with the cafeteria managers of 18 schools in Illinois, Pennsylvania, South Carolina, Texas, Virginia, West Virginia, and the District of Columbia. Generally, the questions on our survey concerned the 1995-96 school year. We mailed our survey to a random sample of 2,450 cafeteria managers in public schools in the 50 states and the District of Columbia.
We selected our sample from the 87,100 schools listed in the National Center for Education Statistics’ Common Core of Data Public School Universe, 1993-94, the latest year for which a comprehensive list of public schools was available from the National Center for Education Statistics. This document did not identify whether a school participated in the NSLP. We sent as many as two followup mailings to each cafeteria manager to encourage response. Eighty percent (1,967) of those surveyed returned a survey. Of these, about 4 percent (80) reported that their school did not participate in the NSLP, while the remainder (1,887) reported that their school participated in the program. Our survey results generalize to 65,743 of the 81,911 public schools nationwide that participated in the NSLP in the 1993-94 school year. This number may be lower for individual questions, depending on the response rate for the question. The results of our survey of cafeteria managers cannot be generalized to schools that opened after school year 1993-94; to private schools; to most residential child care institutions; to schools in the U.S. territories; and to schools represented by the survey nonrespondents. We matched the 1,887 survey responses to information about each school in the Common Core of Data. We used the Common Core of Data to identify school location and to validate survey responses on student enrollment and school level. From this validation, we determined that a number of the surveys were completed for the surveyed school’s district rather than for the individual school. In those cases, we used information from the Common Core of Data to determine the surveyed school’s level (e.g., elementary) and student enrollment. We assumed that the school served the same proportion of free and reduced-price lunches as the district. Unless otherwise stated in the survey response, we also assumed that districtwide opinions about plate waste applied to the surveyed school. 
Table II.1 shows the number of cafeteria managers responding to our survey, by school level. Table II.2 shows the number of cafeteria managers responding, by school location. Table II.3 shows the number of cafeteria managers responding, by schools serving different proportions of free and reduced-price lunches. As with all sample surveys, our results contain sampling error—potential error that arises from not collecting data from cafeteria managers at all schools. We calculated the sampling error for each statistical estimate at the 95-percent confidence level. This means, for example, that if we repeatedly sampled schools from the same universe (i.e., Common Core of Data) and performed our analyses again, 95 percent of the samples would yield results within the ranges specified by our statistical estimates, plus or minus the sampling errors. In calculating the sampling errors, we used a conservative formula that did not correct for sampling from a finite population. The sampling error for most of the survey results presented in this report is plus or minus no more than 5 percentage points. Sampling error must be considered when interpreting differences between subgroups, such as urban and rural schools. For each comparison of subgroups that we report, we calculated the statistical significance of any observed differences. Statistical significance means that the difference we observed between two subgroups is larger than would be expected from the sampling error. When this occurs, some phenomenon other than chance is likely to have caused the difference. Statistical significance is absent when an observed difference between two subgroups, plus or minus the sampling error, results in an interval that contains zero. The absence of a statistically significant difference does not mean that a difference does not exist. The sample size or the number of respondents to a question may not have been sufficient to allow us to detect a difference. 
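The two calculations described above—the conservative margin of error with no finite-population correction, and the interval test for whether a subgroup difference is significant—can be sketched in a few lines of Python. This is an illustrative sketch only; the subgroup proportions and sizes in the example are hypothetical, not figures from the survey.

```python
import math

Z_95 = 1.96  # z-value for the 95-percent confidence level

def sampling_error(p, n):
    """Sampling error (margin of error) for an estimated proportion p based on
    n responses, using the conservative simple-random-sample formula with no
    finite-population correction."""
    return Z_95 * math.sqrt(p * (1 - p) / n)

def difference_is_significant(p1, n1, p2, n2):
    """A subgroup difference is statistically significant when the interval
    (p1 - p2) plus or minus its sampling error does not contain zero."""
    error_of_difference = Z_95 * math.sqrt(
        p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p1 - p2) > error_of_difference

# For the full sample of 1,887 responding schools, the worst-case (p = 0.5)
# sampling error is about 2.3 percentage points; smaller subgroups have
# larger errors, which is why some appendix I estimates exceed 5 points.
print(round(100 * sampling_error(0.5, 1887), 1))  # prints 2.3

# Hypothetical subgroups of 500 managers each: a 10-point gap is significant,
# a 2-point gap is not.
print(difference_is_significant(0.30, 500, 0.20, 500))  # True
print(difference_is_significant(0.30, 500, 0.28, 500))  # False
```

The absence of significance under this test does not prove the subgroups are identical; as noted above, the sample may simply be too small to detect a real difference.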
We used the chi-square test of association to test the significance of differences in percentages between two subgroups and the t-test for differences in means. We conducted our review from July 1995 through June 1996 in accordance with generally accepted government auditing standards. We did not, however, independently verify the accuracy of the cafeteria managers’ responses to our survey.

Major contributors to this report: Thomas Slomba, Assistant Director; Rosellen McCarthy, Project Leader; Sonja Bensen; Carolyn Boyce; Jay Scott; Carol Herrnstadt Shulman.

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (301) 258-4066, or by TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

Pursuant to a congressional request, GAO provided information on food waste from school lunches provided to school children under the National School Lunch Program.
GAO found that: (1) school cafeteria managers had varying perceptions about the degree to which food waste was a problem; (2) elementary school cafeteria managers were more likely than managers at other school levels to perceive food waste as a serious problem; (3) the amount of food wasted varied by the type of food, with cooked vegetables wasted more often than other foods; (4) many cafeteria managers believed that students’ attention to recess or free time, rather than to lunch, contributed to waste; (5) many cafeteria managers believed that allowing students to select what they wanted to eat would reduce waste; and (6) most cafeteria managers were satisfied with the federal commodities they received for use in the School Lunch Program, but about 10 percent reported that they would rather not receive at least half of the different types of commodities they received under the program.
The federal Food Stamp Program is intended to help low-income individuals and families obtain a more nutritious diet by supplementing their income with benefits to purchase food. FNS pays the full cost of food stamp benefits and shares the states’ administrative costs—with FNS paying approximately 50 percent—and is responsible for promulgating program regulations and ensuring that state officials administer the program in compliance with program rules. The states usually administer the program out of local assistance offices that determine whether households meet the program’s eligibility requirements, calculate monthly benefits for qualified households, and issue benefits to participants, almost always on an Electronic Benefits Transfer (EBT) card. The local assistance offices often administer other benefit programs as well, including TANF, Medicaid, and child care assistance. In fiscal year 2004, the Food Stamp Program issued almost $25 billion in benefits, and in September 2004, almost 25 million individuals participated in the program. As shown in figure 1, the increase in the average monthly participation of food stamp recipients in 2004 continues a recent upward trend in the number of people receiving benefits, with caseloads increasing over 40 percent since 2001, but still below the level in 1996. Eligibility for participation in the Food Stamp Program is based on the Department of Health and Human Services’ poverty measures for households. The caseworker must first determine the household’s gross income, which cannot exceed 130 percent of the poverty level for that year (or about $1,654 per month for a family of three living in the contiguous United States in 2003). Then the caseworker must determine the household’s net income, which cannot exceed 100 percent of the poverty level (or about $1,272 per month for a family of three living in the contiguous United States). 
Net income is determined by deducting from gross income expenses such as dependent care costs, medical expenses, utilities costs, and shelter expenses. In addition, there is a limit of $2,000 in household assets, and basic program rules limit the value of vehicles an applicant can own and still be eligible for the program. If the household owns a vehicle worth more than $4,650, the excess value is included in calculating the household’s assets. After eligibility is established, households are certified to receive food stamps for periods ranging from 1 to 24 months, depending upon household circumstances. The average certification period is 10 months. Once the certification period ends, households must reapply for benefits, at which time eligibility and benefit levels are redetermined. Between certification periods, households must report changes in their circumstances—such as household composition, income, and certain expenses—that food stamp agencies must consider to determine whether the change affects their eligibility or benefit amounts. States have the option of requiring food stamp participants to report on their financial circumstances at various intervals and in various ways. States can institute a type of periodic reporting system, or they can rely on households to report changes in their household circumstances within 10 days of occurrence. Under periodic reporting, participants report monthly, quarterly, or under a simplified system. The simplified reporting system, available since early 2001, requires households with earned income to report changes between certifications only when their income rises above 130 percent of the poverty level. This easing of program requirements was designed to further an FNS goal of increasing program access and participation among eligible working families, and to reduce the administrative burden on local food stamp offices.
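The gross-income, net-income, and asset screens described above can be sketched in code. This is a simplified illustration that uses the 2003 thresholds for a family of three in the contiguous United States cited in the text; the household figures in the example are hypothetical, and the sketch omits the program's many special rules.

```python
GROSS_LIMIT = 1654       # 130% of poverty, family of three, 2003 ($/month)
NET_LIMIT = 1272         # 100% of poverty, family of three, 2003 ($/month)
ASSET_LIMIT = 2000       # household asset limit ($)
VEHICLE_EXEMPTION = 4650 # vehicle value counted only above this ($)

def countable_assets(other_assets, vehicle_value):
    # Only the vehicle's value above $4,650 counts toward the asset limit.
    return other_assets + max(0, vehicle_value - VEHICLE_EXEMPTION)

def is_eligible(gross_income, deductions, other_assets, vehicle_value):
    """Apply the three screens in order: gross income, net income, assets."""
    if gross_income > GROSS_LIMIT:
        return False
    # Deductions cover expenses such as dependent care, medical costs,
    # utilities, and shelter.
    net_income = gross_income - deductions
    if net_income > NET_LIMIT:
        return False
    return countable_assets(other_assets, vehicle_value) <= ASSET_LIMIT

# Hypothetical household: $1,500 gross income, $400 in deductions,
# $500 in other assets, and a $5,000 car ($350 of it countable)
print(is_eligible(1500, 400, 500, 5000))  # True
```

The order of the checks mirrors the caseworker's sequence in the text: gross income first, then net income after deductions, then assets.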
To ensure the accuracy of food stamp payments, FNS and the states have an extensive quality control system. In fiscal year 2003, the states spent an estimated $80 million to administer the system, and FNS spent an estimated $9 million. According to FNS officials, each month a state’s food stamp QC staff selects a representative sample of the open food stamp cases for review. The QC staff reviews each sample case to verify whether the recipient’s eligibility and benefit amount were determined correctly. If the reviewer finds the benefit amount off by more than $25, the case is counted as an error. The statewide sample produces a valid statewide error rate, although in most cases it does not include sufficient cases to generate error rates for local offices. FNS plays a significant role in monitoring and validating the states’ reviews. The FNS regional offices approve the states’ sampling plans; validate the states’ samples, totaling 56,557 cases in fiscal year 2003; and review one-third of these sample cases to ensure accuracy. They also handle informal arbitration of disputes arising from differences between the state and FNS review outcomes. Disputes that are not resolved at the regional office can be appealed to FNS headquarters for formal arbitration. In fiscal year 2003, regional reviews found 151 cases in which the regional office’s finding or error amount differed from the state’s. According to FNS officials, this constitutes less than 1 percent of the cases reviewed by the regions, and each year between 20 and 30 of these unresolved disputes between a state and its regional office are appealed to FNS headquarters for formal arbitration. According to FNS officials, upon the completion of the regional office’s review and error disagreement processes, the regional office adjusts error rates to reflect the final results.
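The reviewer's $25 error tolerance described above can be sketched simply. This illustrates only the decision rule on a single sampled case, not the actual QC review procedures.

```python
ERROR_TOLERANCE = 25  # dollars; differences at or below this are not errors

def qc_review(paid_benefit, correct_benefit):
    """Classify a sampled case the way the tolerance rule implies:
    a difference of $25 or less is not counted as an error."""
    diff = paid_benefit - correct_benefit
    if abs(diff) <= ERROR_TOLERANCE:
        return "no error"
    return "overpayment" if diff > 0 else "underpayment"

print(qc_review(300, 280))  # no error -- within the $25 tolerance
print(qc_review(300, 250))  # overpayment
print(qc_review(200, 260))  # underpayment
```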
Once the error rates are final, FNS is required to compare each state’s performance with the national error rate and to impose penalties or provide incentives according to specifications in law. Prior to fiscal year 2003, penalties were levied each year a state’s payment error rate was above the national average. In addition, states with error rates above 6 percent, other than for good cause, were required to develop corrective action plans that are monitored by the FNS regional offices. FNS can negotiate with the states the amount of the penalty that will be paid to FNS, the amount that will be reinvested into the program, and the amount that will be collected if the state does not improve its error rate to an agreed-upon level. To encourage program improvement, FNS also provided enhanced funding to states with a payment error rate less than or equal to 5.90 percent, according to a formula set in law. During this period, the states were held accountable only for their error rate and no other performance measure. The Farm Security and Rural Investment Act of 2002 (the 2002 Farm Bill) made significant changes to the way penalties and incentives are calculated and awarded. States are not penalized until their error rate exceeds the national error rate threshold for 2 years in a row. The error rate threshold changed so that states are not penalized unless there is a 95 percent statistical probability that their error rate exceeds 105 percent of the national average for 2 consecutive years. If a state’s error rate exceeds the threshold for 2 years in a row, a penalty is established equal to 10 percent of the cost of errors above 6 percent.
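The 2002 Farm Bill penalty rules described above can be sketched as follows. This is a simplified illustration: it applies the 105-percent threshold directly and omits the 95-percent statistical-probability test, which depends on each state's sampling variance, and all rates and dollar figures in the example are hypothetical.

```python
def liable_for_penalty(state_rates, national_rates):
    """A state faces a penalty only after exceeding the threshold
    (105% of the national average) for 2 consecutive years."""
    over = [s > 1.05 * n for s, n in zip(state_rates, national_rates)]
    return any(a and b for a, b in zip(over, over[1:]))

def penalty_amount(error_rate, benefits_issued):
    """Penalty equals 10% of the cost of errors above 6 percent."""
    excess = max(0.0, error_rate - 0.06)
    return 0.10 * excess * benefits_issued

# Hypothetical state: 9% then 8.5% error rates against a 6.6% national
# average, issuing $500 million in benefits in the second year
print(liable_for_penalty([0.09, 0.085], [0.066, 0.066]))  # True
print(round(penalty_amount(0.085, 500_000_000)))          # 1250000
```

In the example, only the 2.5 percentage points of error above the 6-percent floor generate liability, and the penalty is one-tenth of that cost.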
In addition to establishing the new penalty system, the 2002 Farm Bill instructed FNS to create new criteria for performance bonuses that reward states for high or most improved performance in correcting errors, reducing error rates, improving eligibility determinations, and other indicators of effective program operations. FNS and the states also conduct fraud prevention activities to detect and prosecute food stamp fraud by retailers and participants. In fiscal year 2002, the states spent $229 million on their fraud control activities and reported that they completed 834,000 client investigations resulting in 12,000 state prosecutions and 61,000 ineligibility rulings. As a result of these fraud control activities, and of following up on overpayments identified through the QC process and during regular case processing, the states established almost $26 million in fraud claims, $176 million in household error claims, and $59 million in agency error claims. States also reported that they collected $209 million on previously established claims. FNS’s payment error statistics do not account for the states’ results in recovering overpayments. Payment errors can typically be traced to a lack of, or a breakdown in, internal controls, which are an integral component of an organization’s management. Internal control is not one event but a series of actions and activities that occur throughout an organization on an ongoing basis. Therefore, to guide our review of FNS and state actions taken to reduce payment errors, we used the key components of internal control as our framework. These components include creating a work environment that promotes accountability and the reduction of payment error, analyzing program operations to identify areas that present the risk of payment error, making policy and program changes to address the identified risks, and monitoring the results and communicating the lessons learned to support further improvement.
The national Food Stamp Program payment error rate combines overpayments and underpayments to participants and has declined by about one-third in recent years, from 9.86 percent in 1999 to a record low of 6.63 percent in 2003. In dollar terms, if the 1999 error rate had been in effect in 2003, the program would have made payment errors totaling over $2.1 billion rather than the $1.4 billion it experienced. Most states have seen a recent reduction in payment error, with error rates falling in 41 states and the District of Columbia. However, some states continue to struggle with relatively high payment error rates. In addition, beyond measuring the accuracy of benefits paid, FNS found that about 8 percent of the decisions to deny, suspend, or terminate benefits were also made in error. However, the amount of benefits these households would have received is unknown and is not part of a state’s payment error rate. The national food stamp payment error rate combines overpayments and underpayments made to benefit recipients in all states. Of the total $1.4 billion in payment errors in fiscal year 2003, $1.1 billion, or about 76 percent, was overpayments, which represent a financial loss to the federal government. Overpayments occur when eligible persons are provided more than they are entitled to receive or when ineligible persons are provided benefits. Underpayments, which occur when eligible persons are paid less than they are entitled to receive, totaled $340 million, or about 24 percent of dollars paid in error, in fiscal year 2003. Underpayments represent unintentional financial savings to the federal government. Studies have reviewed the effects of payment errors on household income. An analysis of fiscal year 2003 QC data conducted by Mathematica Policy Research, Inc., for FNS found that typical overpaid eligible households received an average of $97 too much in monthly benefits and underpaid eligible households received an average of $78 too little in monthly benefits.
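The dollar figures above can be checked with simple arithmetic. Total benefit issuance is inferred here from the published error rate and error dollars, so it is an approximation rather than a figure from the report.

```python
overpayments = 1.1e9      # fiscal year 2003 overpayments
underpayments = 0.34e9    # fiscal year 2003 underpayments
error_rate_2003 = 0.0663  # record-low combined error rate
error_rate_1999 = 0.0986  # combined error rate 5 years earlier

error_dollars = overpayments + underpayments   # ~$1.4 billion
issuance = error_dollars / error_rate_2003     # implied issuance, ~$21.7B
counterfactual = error_rate_1999 * issuance    # errors at the 1999 rate

print(f"errors: ${error_dollars / 1e9:.2f}B")
print(f"implied issuance: ${issuance / 1e9:.1f}B")
print(f"at the 1999 rate: ${counterfactual / 1e9:.1f}B")
```

The counterfactual comes out at roughly $2.1 billion, consistent with the report's statement that the rate decline avoided about $0.7 billion in payment errors.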
As a result, overpaid households’ purchasing power, which includes household gross income and food stamp benefits, rose by 8 percentage points, from 94 percent of the federal poverty level to 102 percent of the federal poverty level. Underpaid households’ purchasing power decreased by 6 percentage points from 80 percent of the federal poverty level to 74 percent of the federal poverty level. More than 98 percent of households receiving food stamps were eligible for the program. Ineligible households receiving food stamp benefits saw their purchasing power rise from 118 percent of the federal poverty level to 132 percent of the federal poverty level. The national Food Stamp Program payment error rate has declined by about one-third over the last 5 years. The rate has declined each year, from 9.86 percent in 1999 to a record low of 6.63 percent in 2003, as shown in figure 2. If the 1999 error rate had been in effect in 2003, the program would have made payment errors totaling over $2.1 billion rather than the $1.4 billion it experienced. In addition, the state-reported error rates for fiscal year 2004 suggest that the overall error rate has continued to decline. These error rates have not yet been validated by FNS, which usually produces slight adjustments to these state-reported rates. Error rates fell in 41 states and the District of Columbia, and 18 states reduced their error rates by one-third or more, as shown in figure 3. See appendix II for more information on individual states’ error rates over time. Further, the 5 states that issue the most food stamp benefits reduced their error rates by an average of 36 percent during this period, as shown in table 1. The changes in these states have a large effect on the national error rate because of the way the rate is calculated. In addition to contributing to the downward trend in the payment error rate, an increasing number of states had error rates below 6 percent in 2003. 
However, payment error rates vary among states. For example, 21 states had error rates below 6 percent in 2003 (see fig. 4 for states’ error rate performance); this is an improvement from 1999, when 7 states had error rates below 6 percent. Despite the decrease in many states’ error rates over the past few years, some states continue to have high payment error rates. For example, 7 states had payment error rates of 10 percent or higher in 2003. These states are also making progress, however, and are expected to have reduced their error rates in 2004. In addition to monitoring the payment error rate, FNS estimates the rate at which eligible households are improperly denied benefits, which is called the negative error rate. According to an FNS QC official, this rate is not included in the national food stamp payment error rate because it counts the number of cases affected rather than the number of dollars given in error. In fiscal year 2003, FNS reported that about 8 percent of the decisions to deny, suspend, or terminate benefits were made in error. However, the amount of benefits these households would have received had this error not occurred is unknown. Almost two-thirds of the payment errors in the Food Stamp Program are caused by caseworkers, usually when they fail to act on new information or make mistakes when applying program rules, and one-third are caused by participants, when they unintentionally or intentionally do not report needed information or provide incomplete or incorrect information (see fig. 5). Program complexity and other factors, such as the lack of resources and staff turnover, can contribute to caseworker mistakes. Despite the decrease in the error rate in recent years, these factors have remained the key causes of payment error over the last 5 years. (Fig. 5: caseworker-caused error, 65 percent; participant-caused error, 35 percent.) Almost two-thirds of all payment errors are made by state food stamp caseworkers, according to our analysis of FNS QC data.
Errors can occur when caseworkers have difficulty keeping up with reported changes in household circumstances, according to officials from all of the states we reviewed. Caseworkers are required to review reported changes and assess their effect on a household’s eligibility and benefit levels. In addition, caseworkers regularly receive information from data matches and other sources that should be assessed and verified, and the failure to do so is another important cause of error. In previous work, we have found that the risk of improper payments increases in programs with a significant volume of transactions. When caseworkers fail to keep up with changes, the errors usually are reflected as incorrect household income or deductible expenses, as shown in table 2. Food stamp officials in 8 of the 9 states told us that increasing caseloads have contributed to payment errors, making it more difficult for caseworkers to attend to all of the reported changes. In recent years, FNS and several states have made it a priority to reach out to likely eligible households that are not yet participating in the program, in addition to focusing on minimizing payment error. At the same time, the nation experienced an economic downturn, which contributed to an increase in the number of families who had a need for food assistance. As a result of these and other factors, nationally, the number of food stamp participants has increased by more than 30 percent since February of 2001. Moreover, as states across the country have faced fiscal challenges due to the overall slowdown in the economy, some responded by reducing their staff, offering early retirements, or imposing hiring freezes. This also has contributed to rising caseloads per worker. 
For example, food stamp officials in Michigan said that state fiscal problems led to staff reductions, increased caseloads per worker, and competing demands on workers, making it difficult for caseworkers to act on all reported changes. Oregon state officials also attribute their difficulties with payment accuracy to a 40 percent increase in the number of food stamp cases in the state between 2001 and 2003, as well as to state financial problems that led to staff cuts and a hiring freeze. FNS officials informed us that there is no central collection of comparable data on caseloads per worker among states. Further, the recent outreach efforts included a focus on increasing participation among working families. State and local officials from 8 of the 9 states we interviewed said managing cases with earnings contributes to payment error, in part because caseworkers may find it difficult to keep up with the frequent changes reported to them. For example, Michigan food stamp officials told us that they experienced an increase in overpayment errors because caseworkers were failing to act on the frequent wage and salary changes reported by working participants. The complexity of the eligibility criteria for the Food Stamp Program contributes to caseworker errors. In previous work, we found that the risk of improper payments increases in programs with complex criteria for computing eligibility and payments. Caseworkers may miscalculate a household’s eligibility and benefits, in part because of the program’s complex rules for determining eligible household members and for calculating the household’s financial status. Our analysis of QC data found that caseworker mistakes most often involve incorrectly determining household income, followed by mistakes related to income deductions and nonfinancial issues, such as determining household composition.
Although the error rate has declined in recent years, these three types of mistakes have remained the major sources of error over the last 5 years. To determine household gross income, caseworkers must decide which types of income to include. Households may have income from a number of different sources, and rules require that some of this income be counted and some not. Further, the fluctuations in earnings for low-income working participants can increase the likelihood of error simply because they result in a higher volume of case reviews and adjustments. Payment errors also occur when caseworkers misapply one or more of six allowable deductions when determining net income. Caseworkers calculate and deduct expenses such as dependent care costs, medical expenses, utilities costs, and shelter expenses, each of which has its own set of eligibility criteria. For example, caseworkers can provide households an excess shelter expense deduction if their shelter expenses exceed 50 percent of monthly household income after applying other deductions. As part of that process, caseworkers must determine whether the household is entitled to a standard utility allowance. Other common caseworker errors involve nonfinancial factors, such as misapplying the program’s complex rules for determining the members of the household. Although individuals may be living in the same home, they may be treated as different households for eligibility and benefit purposes, depending on whether they customarily purchase food and prepare meals together. However, this is sometimes difficult to determine. Food stamp officials in Michigan told us that given the variety of household circumstances and arrangements caseworkers face, determining household composition can be confusing. For instance, officials said it can be difficult to determine how to treat a youth over age 22 who moves in and out of the parents’ home or households that contain multiple generations of family members.
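The excess shelter expense deduction described above can be illustrated as follows. This is a simplified sketch that shows only the 50-percent threshold rule and ignores any cap on the deduction; the figures in the example are hypothetical.

```python
def excess_shelter_deduction(income_after_other_deductions, shelter_expenses):
    """Shelter costs (rent, utilities, etc.) are deductible only to the
    extent they exceed half of the household's income after all other
    deductions have been applied."""
    threshold = 0.5 * income_after_other_deductions
    return max(0.0, shelter_expenses - threshold)

# Hypothetical household: $1,000 income after other deductions and
# $650 in shelter expenses -> $150 of the shelter cost is deductible
print(excess_shelter_deduction(1000, 650))  # 150.0
```

This kind of conditional, interacting deduction is one reason the text identifies net-income calculation as an error-prone step for caseworkers.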
In addition, officials from 5 of the 9 states we contacted told us that having caseloads with legal noncitizens was a challenge to reducing payment error, in part because of the numerous policy changes in recent years that affect the eligibility of various segments of this population. Correctly determining food stamp eligibility and benefits can be complicated by differences between Food Stamp Program rules and the rules governing other assistance programs. Officials from 5 of the 9 states we interviewed told us that minimizing payment error is difficult for caseworkers when they are responsible for multiple programs, such as TANF, food stamps, and Medicaid, because the eligibility and reporting rules among the programs often differ. For example, local officials from Texas told us that because of the way the state chose to implement the simplified reporting option, caseworkers are held responsible for failing to act on a change when a birth is reported to the Medicaid program, even though participants are not required to report the change to the Food Stamp Program, according to a recently approved policy option. Oregon state and local officials also told us that it is challenging for caseworkers to attend to food stamp payment accuracy when they have to determine eligibility and recertify households for other assistance programs. Officials from all 9 of the states we interviewed stated that staff turnover contributes to incorrect application of program rules. Food stamp officials in Oregon said that half of the caseworkers in the Portland area have less than 1 year of work experience because of high staff turnover, which makes it difficult for the office to maintain a workforce trained in making accurate eligibility decisions. Officials also told us that lack of training can be a challenge in part because it is difficult for caseworkers to learn the complex program rules and policies. 
Similar factors also affect errors where benefits are improperly denied, suspended, or terminated, according to officials from states we interviewed. They cited caseworkers misapplying policies or miscalculating income. For example, Michigan food stamp officials told us that these errors sometimes occur when caseworkers temporarily suspend benefits because participants are not complying with certain rules but then do not review the case to complete it correctly. Mississippi officials told us that these errors can also occur when caseworkers misapply a policy or fail to add up wages correctly. About 35 percent of all payment errors occur because participants do not provide required, complete, or correct information to caseworkers, either unintentionally or deliberately (see table 3). Although applicants are required to provide a variety of personal information to the caseworker, failure to report income is the most common cause of participant food stamp errors. Program complexity may play a role in participants’ failure to report needed information because the participants may not understand the reporting requirements, according to officials from 2 states we interviewed. For example, California state food stamp officials told us they believe that some participants do not report information because they are unfamiliar with the reporting requirements or because of language barriers. In addition, when participants receive assistance from multiple programs, they may be confused about what to report to whom because the requirements differ among the programs, including those for Medicaid and TANF. When participants fail to report information, the result is usually an incorrect determination of household income. Further, participants may not report information to caseworkers because of the perceived burden associated with reporting changes. 
For example, a food stamp official in Wisconsin told us that because of the lack of staff at the call center, participants calling to report changes may wait on the line for up to 20 minutes, and as a result, some participants hang up. Errors may also occur when the participant intentionally does not report needed information or unintentionally or intentionally provides the caseworker with false or incomplete information. Although the percentage of payment errors that involve participants intentionally withholding information is not known, food stamp workers from all of the states we interviewed refer cases for investigation when they suspect fraud. For example, Oregon food stamp officials explained that cases are referred for suspected fraud when a participant consistently reports no income yet seems to have the resources needed to live self-sufficiently. In 2003, about 5 percent of all payment errors were referred for fraud investigation. Data are not available, however, to determine what percentage of these error cases resulted in disqualifying participants because of fraud. Despite the recent decrease in error rates, the program continues to face the same causes of error over time. Over the last 5 years, caseworkers’ failure to act on reported information, caseworkers’ misapplication of program policies and requirements, and participants’ failure to report key information have remained the three largest causes of error. Moreover, errors involving incorrect household income or deductions for expenses have continued to be the most common types of errors over the same period. FNS and the states we reviewed have taken many approaches to increasing food stamp payment accuracy, most of which parallel internal control practices known to reduce improper payments. These include practices to improve accountability, perform risk assessments, implement changes based on such assessments, and monitor program performance.
Often, several practices are tried simultaneously, making it difficult to determine which have been the most effective. Because payment errors can typically be traced to problems with internal controls, we used the key components of internal control as our framework to categorize the approaches taken to reduce payment errors. In doing so, we found that both FNS and the states we reviewed were employing many of the same practices recognized as being effective in reducing payment errors. Both FNS and states have taken steps to ensure that program officials recognize their responsibility for payment accuracy. FNS has long focused its attention on states’ accountability for error rates through its QC system by assessing penalties and providing financial incentives. The administration of the QC process and its system of performance bonuses and sanctions is credited or faulted by many as being the single largest motivator of program behavior, and most of the states in our review believe the QC system has helped increase payment accuracy. From fiscal year 1998 to fiscal year 2002, FNS has assessed $327 million in penalties. Of these penalties, FNS waived $93 million, approved $92 million for reinvestment into state food stamp programs, collected almost $24 million, and designated $118 million at risk for payment if the states did not improve their error rates to agreed-upon targets. During this same period, FNS awarded states almost $251 million of enhanced funding because of their low error rates. In fiscal year 2003, the first year under the 2002 Farm Bill changes to the QC system, 11 states were found to be in jeopardy of being penalized if their fiscal year 2004 error rates did not improve. This was a higher number than was originally expected by some analysts because the error rate had fallen much faster than in previous years, leaving more states above the new error rate threshold. 
Some states have expressed concern that they may improve their error rates and yet still be penalized because the national rate continues to drop around them. In addition, under its new performance bonus system, FNS awarded a total of $48 million to states, including $24 million to states with the lowest and most improved error rates and $6 million to states with the lowest and most improved negative error rates. In addition to using the tools available under its QC system, FNS’s leadership has actively communicated the importance of accountability. Establishing payment accuracy as a program priority is considered by many to be the most important strategy for achieving program improvement. Since the arrival of the current Undersecretary for Food, Nutrition, and Consumer Services in 2001, FNS has put increased pressure on states to reduce error rates. For example, the undersecretary and other FNS officials visited states with particularly high error rates to discuss payment accuracy. FNS also began to collect a higher percentage of penalties. From fiscal year 1992 to 2000, FNS collected about $800,000 in penalties. Since fiscal year 2000, FNS has collected more than $20 million in penalties. Officials from one advocacy group active in food stamp issues credit this official’s active role as one reason for the drop in the error rates in the larger states. The FNS regional administrators also visit high error rate states and emphasize payment accuracy as a major management priority at regional meetings of state commissioners. All the states we reviewed also reported taking steps to increase the awareness of, and the accountability for, errors in their programs. Often, this coincided with a change in state leadership and responded to accumulating program penalties, bad publicity, or both.
For example, Michigan state officials said that after their new governor took office in 2003, error reduction became an issue for the governor and the legislature because the state had paid more than $5 million in penalties in 2003 and 2004. In response, the Food Stamp Program began producing weekly internal reports and issuing regular reports to the governor and the legislature. The state’s error rate has dropped from 14.1 percent in fiscal year 2002 to a state-reported error rate of 6.73 percent in fiscal year 2004. As a result of the state’s progress in reducing its error rate, the governor has publicly recognized the program’s efforts. Wisconsin’s turnaround began in 2002 when state officials, with the support of the governor, made it clear to local food stamp offices that double-digit error rates and the penalties that go along with them were no longer acceptable. Wisconsin had been assessed penalties totaling over $8 million for 2000, 2001, and 2002. The state’s error rate has dropped from 13.14 percent in fiscal year 2001 to a state-reported error rate of 6.57 percent in fiscal year 2004. Penalties totaling over $5 million for 1998, 1999, and 2000, also spurred New Jersey’s human services director to appoint a special assistant to focus on reducing the state’s error rate. The state’s error rate has dropped from 12.93 percent in fiscal year 1999 to a state-reported error rate of 2.62 percent in fiscal year 2004. In addition, states we reviewed understood the need to communicate the importance of payment accuracy to individuals working at all levels of the program. Of the states we studied, California, Michigan, New Jersey, and Oregon have begun to set error rate targets for their local offices and have supplemental quality assurance processes in place to produce local error rates or error rates for their largest offices. 
Oregon and Texas also include payment accuracy goals in the expectations for their managers and workers, making payment accuracy one of the bases for their evaluations. California, New York, and Wisconsin have shared the accountability for poor performance by passing on a portion of their state’s financial penalties to their largest counties. New Jersey, South Dakota, and Texas, on the other hand, have shared the enhanced funding they have received for good performance with their local food stamp offices. Both FNS and states have taken steps to analyze program operations to identify where risks exist. For example, through its QC system, FNS determined that working families receiving benefits were error prone because of frequent changes in their income and deductions. In addition, officials from the nine states we reviewed said they analyze the QC data to identify the sources and causes of food stamp payment error in their states. New Jersey officials used the QC data to identify salaries and wages as the largest sources of error in their state. In most cases, however, the QC samples are not large enough to produce valid error rates or to identify specific problem areas for most counties or local offices. To obtain this information, California, Michigan, New Jersey, New York, and Oregon have developed their own quality assurance systems to produce monthly error rates for their counties or local offices. For example, in January 2003, Oregon instituted a targeted case review process that requires officials in local offices to review between 35 and 100 cases per month to identify errors. State officials say the reviews provide better information to local-level officials on the causes and sources of payment error at their site so they can plan corrective action. Oregon’s payment error rate dropped from 13 percent in fiscal year 2003 to a state-reported error rate of 7.81 percent in fiscal year 2004.
California, New York, Wisconsin, and Michigan targeted their largest and most error-prone offices for special risk assessments. In Wisconsin, for example, the state focused its approaches on Milwaukee because it is the largest metropolitan area in the state, accounting for 47 percent of the state food stamp caseload. Because it had the highest error rate, it had the most significant influence on the state’s error rate. The state brought in a contractor that conducted an assessment of payment accuracy and the service delivery model used in Milwaukee. The contractor recommended that Milwaukee adopt a number of policy, program, and case review changes. In response, Wisconsin and the city of Milwaukee conducted a one-time find-and-fix case sweep between March and September 2004. State and county case readers reviewed 14,000, or almost 25 percent, of their food stamp cases to identify and correct potential errors. The information gained from this exercise identified certain risks and error-prone cases that county officials have used to implement other changes. As a result, Milwaukee County officials said their error rate dropped from 12.2 percent in March 2004 to 7.7 percent in June 2004. Once the QC review process is completed, penalties are assessed by law against high error rate states, and FNS works with the states to correct the problems. Staff from the FNS regional offices work with the states on the development and implementation of reinvestment and corrective action plans that address specific threats and risks identified in risk assessments. These plans can vary depending upon the state’s systems and characteristics. Activities included in the plans include training to address errors identified from QC and quality assurance reviews, developing online training curricula, and correcting errors generated by automated systems.
States have also adopted practices to prevent, minimize, and address payment accuracy problems in response to the sources of error identified in risk assessments. States chose their varied practices in response to their unique characteristics, resources, and risks.

Automated system changes. Michigan implemented changes in its automated system to help deal with problems resulting from failure to collect complete case information, particularly household income, during the application and recertification processes. The state’s automated system now prompts workers to obtain complete income documentation for cases with earned income.

Specialized change units. In June 2002, Los Angeles established 30 specialized change units for its 30 district offices to address their failure to act on reported information, which was one of their largest sources of errors. FNS supports the adoption of change centers such as these based upon their reported outcomes in other states. Los Angeles County officials said the change unit workers now act upon reported case changes that previously had not been acted upon by caseworkers because of their large caseloads.

Outreach to a more stable food stamp population. New York has implemented a program to automatically certify eligible nonparticipating elderly Supplemental Security Income recipients for food stamps for 4 years. In addition to reaching an underserved population without adding undue administrative burden on the local offices, officials believe that increasing the participation of these recipients could help reduce the state’s error rate because this group is less error prone owing to its stable income and circumstances.

States also adopted various case review practices that would help them address a wide range of risks and problems.

Supervisory review of cases.
Several states have begun to require local supervisory reviews of cases to detect and correct errors caused by misapplication of food stamp policies or by workers failing to act on reported information. Some states require that all cases be reviewed, while others target error-prone cases or a certain number of cases per worker.

Targeted local office reviews. Some states have used contractors or have established their own teams to target high error rate offices for improvement. Michigan recently started using technical assistance teams to observe local office processes and make recommendations for improvement.

Error review panels. Some of our review states have also established panels to review errors discovered through the QC process. New Jersey established such a panel, consisting of systems, policy, and QC staff. This panel reviews all errors, challenges those it believes have been inaccurately classified, and develops corrective actions to address the root causes of the errors. The results of the reviews can then be communicated to all local offices. For example, as a result of the panel’s finding that computing utility bill deductions was a source of payment errors, the state implemented a mandatory standard utility allowance policy to reduce this type of error.

Many of the error reduction practices employed by the states in our review focused primarily on agency-caused rather than client-caused errors. Many state officials we spoke with believe that states should not be held accountable for participant-caused errors, such as failure to report information, because the state cannot control participants’ behavior. However, FNS officials believe that states can reduce participant-caused errors by better using computer matching of state data sources and other outside sources of data, improving interviewing techniques to collect all relevant information and identify discrepancies, and educating clients about their responsibilities.
In addition to taking the above steps focused specifically on decreasing the error rate, FNS has made and advocated for a number of program and policy changes designed primarily to address other issues, such as program participation, which have also helped reduce payment errors. FNS believes that serving eligible low-income families, particularly working poor families, is imperative to the success of welfare reform and the nutritional well-being of eligible persons. However, because the income and deductions for working poor families tend to be volatile, these households are more error prone, and their participation could increase the error rates of states trying hardest to serve them and thus discourage states from reaching out to these families. In response, FNS raised the error tolerance level in fiscal year 2000 from $5 to $25 for monthly food stamp payments for all cases. This change exempted smaller errors that had been counted in the past. FNS estimated that this change would have reduced the nationwide error rate by 0.66 percentage points if it had been implemented in the previous fiscal year. In addition, FNS and Congress have made several options available to the states to simplify the application and reporting process. These simplification measures are designed, in part, to reduce the administrative burden on both caseworkers and participants and thus promote higher participation in the program. One option in particular reduces the frequency with which households with earned income must report changes. Prior to this simplified reporting option, participants were required to frequently report changes in their circumstances. Under the simplified reporting rule issued in November 2000, most households need only report changes between certification periods if their new household income exceeds 130 percent of the federal poverty level. This simplified reporting option can reduce a state’s error rate as well. 
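The simplified reporting threshold just described can be expressed as a simple check. This is an illustrative sketch only: the function name and the poverty-guideline figures below are hypothetical stand-ins, not the official values for any particular year.

```python
# Hypothetical sketch of the simplified reporting rule described above:
# between certifications, most households need only report a change if new
# gross monthly income exceeds 130 percent of the federal poverty level.
# The guideline figures below are illustrative, not official values.

ILLUSTRATIVE_MONTHLY_POVERTY_LEVEL = {  # keyed by household size
    1: 776, 2: 1041, 3: 1306, 4: 1571,
}

def must_report_change(new_monthly_income, household_size):
    """Return True if the household must report the income change."""
    fpl = ILLUSTRATIVE_MONTHLY_POVERTY_LEVEL[household_size]
    return new_monthly_income > 1.30 * fpl

# A three-person household whose income rises to $1,500/month stays under
# 130 percent of the illustrative guideline, so no report is required:
print(must_report_change(1500, 3))
```

Under monthly reporting, by contrast, every change would be reported and acted on regardless of this threshold, which is why that approach is more labor intensive.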
Absent simplified reporting, certain unreported or undetected changes between certification periods would be considered an error. Minimizing the number of income changes that must be reported between certifications can help reduce errors associated with caseworker failure to act as well as participant failure to report changes, and income-related errors account for more than half of all payment errors. Essentially, this simplification option redefines the threshold for what is considered an error. This type of change can result in an increase in program benefits paid out, such as when participants experience an increase in income between certification periods that need not be reported until the next certification under the simplified requirements. In 2000, FNS estimated the additional cost to the program to be approximately $51 million in fiscal year 2004, affecting nearly 1.5 million households per month. An FNS official said that by expanding this option in the 2002 Farm Bill beyond earned income households to all households that can be asked to report periodically, Congress had endorsed the idea of making the program more user friendly to working families. Since the 2000 estimate, program participation has grown significantly, but FNS has not completed a more recent estimate of the additional cost. Moreover, the possible savings and efficiencies gained in program administration have not been quantified. Most of our review states have adopted some form of simplified reporting to help them better serve working families, permit greater program participation, and address the errors associated with frequent change reporting. Nationwide, FNS reported that as of September 2004, 41 states and the Virgin Islands had adopted some form of simplified reporting. FNS has taken many actions to track the success of improvement initiatives and to provide the information needed to facilitate program improvement.
FNS managers use data generated from the QC system as well as the results of their own monitoring activities to track the states’ performance over time. FNS regional offices annually review state agency operations to, among other things, confirm that problems in program operations are being identified, properly analyzed, and resolved. Where applicable, the regional office also monitors the states’ implementation of corrective action plans. FNS, in turn, requires states to perform management evaluations to monitor whether adequate corrective action plans are in place at local offices to address the causes of persistent errors and deficiencies. To monitor corrective actions identified through the management evaluations, FNS suggests that states review a sample of case records containing actions that are error prone. In addition, in November 2003, FNS created a Payment Accuracy Branch at the national level to work with FNS regions to suggest policy and program changes and to monitor state performance. The branch facilitates a National Payment Accuracy Workgroup with representatives from each FNS regional office and headquarters who use QC data to review and categorize state performance into one of three tiers. FNS has recommended a specific level of increasing intervention and monitoring approaches for each tier as error rates increase, and the FNS regional offices report to headquarters on both state actions and regional interventions quarterly. FNS also provides and facilitates the exchange of information gleaned from monitoring by publishing a periodic guide to highlight the practices states are using to reduce payment errors; sponsoring national and regional conferences and best practices seminars; training state QC staff; providing state policy training and policy interpretation and guidance; and supporting adoption of program simplification options.
Once promising state practices have been identified, FNS also provides funding to state and local food stamp officials to promote knowledge sharing of good practices. Oregon officials said FNS provided state exchange funds for them to visit Kentucky, Indiana, and Arizona—three states that had effective systems for monitoring performance at the local management and worker level. FNS also provided state exchange funds for Oregon officials to meet several times with officials from Idaho and Alaska to discuss common problems they faced trying to reduce payment errors and to generate solutions. In fiscal year 2004, FNS provided $612,000 for states to conduct state exchange visits. Officials from most of our review states found this program to be particularly helpful to their efforts to improve program performance. States are also using information generated by the QC system to track the results of their policy and program changes over time and communicate timely operational information to local offices. Information gleaned from monitoring can help inform their ongoing risk assessments. States are also promoting knowledge sharing of promising practices. These practices include preparing reports detailing causes and sources of errors for the local offices and publishing and distributing monthly error rates for all local offices; transmitting the results of statewide error review panels on the source and causes of errors to local offices, along with suggested corrective actions; sponsoring statewide QC meetings and state best practices conferences for local offices to discuss error rate actions taken and common problems; and sponsoring local office participation in FNS regional conferences. Despite FNS and state mechanisms used to track the initiatives and share promising practices, there are no data available on which initiatives are most cost-effective. 
FNS’s primary focus has been on monitoring progress in reducing error rates, which can help ensure eligible households receive the correct benefits and maintain public support for the program. Even so, from fiscal year 2001 to 2004, the annual administrative cost per participant has fallen from $129 to $99 while program participation has increased. It is possible that some states gained efficiencies from simplified reporting. However, FNS has not studied the cost-effectiveness of this or other measures and thus cannot share this type of information with the states. Every state we surveyed has put into place a combination of approaches to address the key components of internal control, and the practices states adopted under each approach varied among them. For example, in California, state and local officials employed a combination of practices under each internal control component over the last several years to bring about their improved error rate (see fig. 6). Because many states have adopted multiple error reduction practices, officials we spoke with said it is difficult to isolate the results of individual practices, particularly when other program and economic changes are occurring simultaneously. State officials point to their low or dropping error rates as evidence that, collectively, their new practices are having a positive impact. However, they have little data to determine which practices have been most successful or cost-effective. Despite the lack of data, state officials cited various practices that they believe have worked well in their state. For example, officials in Michigan and New York believe new automated processes are their most effective practices. Michigan food stamp officials cited targeted local office reviews as another effective strategy for error reduction. Mississippi food stamp officials believe their required supervisory review of cases has been the most effective practice.
California, South Dakota, and Texas also cited supervisory reviews as one of their most effective practices. As a result of unique circumstances in each state, some practices that may prove effective in one state would not be effective or feasible in another. For example, New Jersey food stamp officials credit their 2001 implementation of the simplified reporting option for earned income cases with being the most significant reason for the decline in their error rates. However, officials in South Dakota continue to require monthly reporting because they have been able to keep up with the reported changes. They believe this requirement is primarily responsible for the state’s error rate, which is the lowest in the nation. Monthly reporting requires participants to report, and caseworkers to act, on case changes once per month, rather than relying on participants to report key changes and workers to react to the reported change. Monthly reporting requires significantly more work for both the caseworkers and participants, and other states with larger caseloads have said they do not have adequate resources to sustain this more labor-intensive approach. The success of new practices, however, can be undermined if the changes do not receive adequate management attention or are not effectively implemented. For example, Los Angeles established 30 specialized change units. County officials said these units helped reduce one of their largest sources of errors, caseworkers’ failure to act. On the other hand, Milwaukee’s change units have not been as effective in reducing the error rate as officials hoped because they have not been able to staff the center appropriately, according to county officials. They designed their change units after a model implemented in Atlanta, Georgia. The Atlanta model calls for 10 staff per 10,000 calls, and Milwaukee has about 7 staff per 20,000 calls.
As a result, clients wait on the phone for up to 20 minutes, and some hang up before their changes can be reported. Similarly, Wisconsin state and Milwaukee food stamp officials said their find-and-fix case sweep program conducted between March and September 2004 was a particularly effective practice for reducing payment errors. Milwaukee officials believe the case sweep was largely responsible for their error rate dropping from 12.2 percent in March 2004 to 7.7 percent in June 2004, and they expect to see long-term effects as a result of their workers learning from the errors identified using this practice. However, Michigan tried a similar program but did not have comparable results. State officials said using this method did not reduce their error rate because the state and counties did not have enough staff to conduct a sufficient number of reviews. Los Angeles County officials said they also tried and abandoned a similar approach in 2001 because they did not have sufficient staff to correct the errors that were identified. The Food Stamp Program has seen a significant decline in the national error rate to a record low in 2003. If the 1999 error rate had still been in effect in 2003, the program would have made payment errors totaling over $2.1 billion rather than the $1.4 billion it experienced. Despite the many challenges states identified, a number of them have significantly lowered their error rates even while caseloads have continued to rise. However, some states are having more difficulty lowering their rates, and improper food stamp payments continue to account for a large amount of money—$1.4 billion in 2003. It is not completely clear why some states have been more successful at lowering their error rates than others. Rather than implementing one specific strategy, the nine states we reviewed have each implemented a package of changes in response to the unique circumstances in the state.
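The dollar comparison above follows from applying each year's national error rate to the total benefits issued. A rough back-of-the-envelope check, assuming approximately $21.4 billion in fiscal year 2003 benefits and a fiscal year 1999 national error rate of roughly 9.9 percent (both figures are assumptions for illustration, not stated in this passage):

```python
# Back-of-the-envelope check of the $2.1 billion vs. $1.4 billion figures.
# Assumptions: ~$21.4 billion in FY2003 benefits issued, and an approximate
# FY1999 national payment error rate of 9.9 percent.

benefits_2003 = 21.4e9   # total FY2003 benefits, dollars (assumed)
rate_2003 = 0.0663       # FY2003 national payment error rate
rate_1999 = 0.099        # FY1999 rate (approximate assumption)

errors_at_2003_rate = rate_2003 * benefits_2003   # actual-rate error dollars
errors_at_1999_rate = rate_1999 * benefits_2003   # what the 1999 rate implies

print(f"${errors_at_2003_rate / 1e9:.1f} billion vs "
      f"${errors_at_1999_rate / 1e9:.1f} billion")
# prints: $1.4 billion vs $2.1 billion
```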
Even those states we selected because of consistently high error rates have implemented multiple strategies and expect to see error rate decreases this year. However, although it is difficult to determine which actions are most likely to succeed in particular circumstances, we found examples of strategies that did not succeed because they lacked adequate management attention or were not effectively implemented. Future similar error rate reductions may prove challenging. The three major causes of errors have remained the same over time and are closely linked to the complexity of program rules and reporting requirements. As long as eligibility requirements remain so detailed and complex, certain caseworker decisions will be at risk of error. Moreover, participant-caused errors, which constitute one-third of the overall national errors, are difficult to prevent and identify. Attention from top USDA management as well as continued support and assistance from FNS will likely continue to be important factors in further reductions. In addition, if error rates continue to decrease, this trend will continue to put pressure on states to improve because penalties are assessed using the state’s error rate as compared with the national average. However, given the size of the Food Stamp Program, the costs to administer it, and the current federal budget deficit, achieving program goals more cost-effectively may become more important. FNS and the states will continue to face a challenge in balancing the goals of payment accuracy, increasing program participation rates, and the need to contain program costs. We provided a draft of this report to the U.S. Department of Agriculture for review and comment. On April 7, 2005, we met with FNS officials to get their comments. The officials said they agreed with our findings and conclusions. FNS also provided us with technical comments, which we incorporated where appropriate. 
We are sending copies of this report to the Secretary of Agriculture, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you have any questions about this report. Other major contributors to this report are listed in appendix III. To determine the causes of food stamp payment errors for fiscal years 1999 through 2003, we analyzed the Food and Nutrition Service’s (FNS) quality control (QC) system data of active cases used in error rate calculations. State officials draw monthly samples of cases—which are at the household level—and review them to determine the extent to which the households received benefits to which they were entitled. The results of these reviews are included in FNS’s QC database, and weighted analyses of these data produce nationally representative results. We constructed a database for each year from 1999 through 2003 that contained a subset of the QC variables relevant to our analysis. For the 1999-2002 databases, we included the reason for error and type of error variables from the database we obtained directly from FNS and the review finding, amount of error, and weight variables from an FNS QC database maintained by Mathematica Policy Research, Inc., and made available to the public via Mathematica’s Web site. For the 2003 data, we used only the FNS QC database maintained by Mathematica and made available via its Web site because it contained all the variables we needed. In addition, for each data set, we created a new variable categorizing the numerous reasons for error in the agency-or-client (1) variable for the most significant error to reflect, on a very general level, whether the error was agency- or client-caused.
Likewise, we created a variable categorizing the numerous types of error in the element (1) code variable as nonfinancial, resources, income, deductions, or other for the most significant error. We generated weighted frequencies for the reason, type, and review finding variables for active cases that were used in calculating the error rate. Sampling errors for these weighted tabulations were estimated using the methodology provided in Appendix E of Characteristics of Food Stamp Households: Fiscal Year 2003, FNS Report Number FSP-04-CHAR. We also created weighted average dollar amounts of error by case review finding (e.g., overissuance or underissuance) and weighted frequencies for the intersection of reason for error and type of error. To assess the reliability of the data we used, we worked with FNS staff to obtain and understand the QC data and relied on FNS and Mathematica documentation on the datasets, and FNS and Mathematica reports based on these data. We ensured that we reliably downloaded the Mathematica QC data from the Web and correctly read in the raw QC data that FNS provided to us by comparing the number of records in each database with the number of records reported in FNS and Mathematica documentation. In addition, to ensure the accuracy of the computer programs we used to create and process the data, a second GAO analyst reviewed them. Through our assessment of the reliability of these data, we found that some variability exists in how states interpret and code the reason for error variable (i.e., whether error was client- or agency-caused). FNS stated that no quantitative analysis of the differences across states has been made. In 2003, FNS implemented guidelines to ensure greater consistency in state interpretations of the reasons for error (i.e., whether the reason for error was client- or agency-caused). Prior to 2003, interstate variation is believed to be greater than intrastate variation in these interpretations.
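The variable creation and weighted tabulation steps described in this appendix can be sketched as follows. The column names, reason codes, and values below are hypothetical stand-ins, not the actual QC database field names or data.

```python
# Minimal sketch of the QC tabulation approach: each sampled case carries a
# sampling weight; detailed reason codes are collapsed into agency- vs.
# client-caused before computing weighted shares and weighted average error
# amounts. All field names and values here are hypothetical.
import pandas as pd

cases = pd.DataFrame({
    "reason_code": ["A1", "A2", "C1", "A1", "C2"],   # hypothetical codes
    "error_amount": [42.0, 15.0, 30.0, 8.0, 55.0],   # dollars in error
    "weight": [1200.0, 900.0, 1500.0, 1100.0, 800.0] # sampling weights
})

# Collapse detailed reason codes into a general agency/client category.
cases["cause"] = cases["reason_code"].str[0].map({"A": "agency", "C": "client"})

# Weighted frequency: each cause's share of the total weighted caseload.
shares = cases.groupby("cause")["weight"].sum() / cases["weight"].sum()

# Weighted average dollar amount of error by cause.
w_err = cases["error_amount"] * cases["weight"]
avg_error = (w_err.groupby(cases["cause"]).sum()
             / cases["weight"].groupby(cases["cause"]).sum())

print(shares.round(3).to_dict())
# prints: {'agency': 0.582, 'client': 0.418}
```

In the actual analysis, the weights make these shares nationally representative; the toy data here merely show the mechanics.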
Consistency in the error amount is expected to be a lesser problem since it is based on an established formula. We also reviewed reports, including previous GAO efforts that studied QC processes and statistical properties. On the basis of the collective information and findings of our reliability assessment, we determined the data are sufficiently reliable for our analysis of the causes of food stamp payment errors. Cathy Roark and Luana Espana also made significant contributions to this report. In addition, Carl Barden, Evan Gilman, and Kevin Jackson produced our estimates of the causes of payment error, and Corinna Nicolaou assisted in the message and report development. Food Stamp Program: Farm Bill Options Ease Administrative Burden, but Opportunities Exist to Streamline Participant Reporting Rules among Programs. GAO-04-916. Washington, D.C.: September 16, 2004. Food Stamp Program: Steps Have Been Taken to Increase Participation of Working Families, but Better Tracking of Efforts Is Needed. GAO-04-346. Washington, D.C.: March 5, 2004. Welfare Reform: Information on Changing Labor Market and State Fiscal Conditions. GAO-03-977. Washington, D.C.: July 15, 2003. Food Stamp Employment and Training Program: Better Data Needed to Understand Who Is Served and What the Program Achieves. GAO-03-388. Washington, D.C.: March 12, 2003. Financial Management: Coordinated Approach Needed to Address the Government’s Improper Payments Problems. GAO-02-749. Washington, D.C.: August 9, 2002. Food Stamp Program: States’ Use of Options and Waivers to Improve Program Administration and Promote Access. GAO-02-409. Washington, D.C.: February 22, 2002. Means-Tested Programs: Determining Financial Eligibility Is Cumbersome and Can Be Simplified. GAO-02-58. Washington, D.C.: November 2, 2001. Executive Guide: Strategies to Manage Improper Payments: Learning From Public and Private Sector Organizations. GAO-02-69G.
Washington, D.C.: October 2001. Food Stamp Program: States Seek to Reduce Payment Errors and Program Complexity. GAO-01-272. Washington, D.C.: January 19, 2001. Internal Control: Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1999. Food Stamp Program: States Face Reduced Federal Reimbursements for Administrative Costs. GAO/RCED/AIMD-99-231. Washington, D.C.: July 23, 1999.
Information security is a critical consideration for any organization that depends on information systems and computer networks to carry out its mission or business and is especially important for government agencies, where maintaining the public’s trust is essential. While the dramatic expansion in computer interconnectivity and the rapid increase in the use of the Internet have enabled agencies such as SEC to better accomplish their missions and provide information to the public, agencies’ reliance on this technology also exposes federal networks and systems and the information stored on them to various threats. Cyber threats can be unintentional or intentional. Unintentional or nonadversarial threat sources include failures in equipment, environmental controls, or software due to aging, resource depletion, or other circumstances that exceed expected operating parameters. They also include natural disasters and failures of critical infrastructure on which the organization depends but that are outside of the control of the organization. Intentional or adversarial threat sources include threats originating from foreign nation states, criminals, hackers, and disgruntled employees. Concerns about these threats are well-founded because of the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, and advances in the sophistication and effectiveness of cyberattack technology, among other reasons. Without proper safeguards, systems are vulnerable to individuals and groups with malicious intent who can intrude and use their access to obtain or manipulate sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. We and federal inspectors general have reported on persistent information security deficiencies that place federal agencies at risk of disruption, fraud, or inappropriate disclosure of sensitive information.
Accordingly, since 1997, we have designated federal information security as a government-wide high-risk area. This area was expanded to include the protection of critical cyber infrastructure in 2003 and the protection of the privacy of personally identifiable information in 2015. The Federal Information Security Modernization Act (FISMA) of 2014 is intended to provide a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. FISMA requires each agency to develop, document, and implement an agency-wide security program. The program is to provide security for the information and systems that support the operations and assets of the agency, including information and information systems provided or managed by another agency, contractor, or other source. Additionally, FISMA assigns responsibility to the National Institute of Standards and Technology (NIST) to provide standards and guidelines to agencies on information security. Accordingly, NIST has issued related standards and guidelines, including Recommended Security Controls for Federal Information Systems and Organizations, NIST Special Publication (NIST SP) 800-53, and Contingency Planning Guide for Federal Information Systems, NIST SP 800-34.

To support its financial operations and store the sensitive information it collects, SEC relies extensively on computerized systems interconnected by local- and wide-area networks. For example, to process and track financial transactions, such as filing fees paid by corporations or disgorgements and penalties paid from enforcement activities, and for financial reporting, SEC relies on numerous enterprise applications, including the following:

Delphi-Prism, the financial accounting and reporting system operated by the Federal Aviation Administration’s Enterprise Service Center (ESC). SEC uses various modules of this system for financial accounting, analyses, and reporting, and Delphi-Prism also produces the SEC financial statements.

Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system, which performs the automated collection, validation, indexing, acceptance, and forwarding of submissions by companies and others that are required to file certain information with SEC. Its purpose is to accelerate the receipt, acceptance, dissemination, and analysis of time-sensitive corporate information filed with the commission.

EDGAR/Fee Momentum, a subsystem of EDGAR, which maintains accounting information pertaining to fees received from registrants.

FedInvest, which invests funds related to disgorgements and penalties.

Federal Personnel and Payroll System/Quicktime (FPPS/Quicktime), which processes personnel and payroll transactions.

General Support System (GSS), which provides (1) business application services to internal and external customers and (2) security services necessary to support these applications. SEC’s GSS is a combination of infrastructure that includes the Windows-based local area network that authorizes SEC employees and contractors to use the underlying network environment, and various perimeter security devices such as routers, firewalls, and switches.

Under FISMA, the SEC Chairman has responsibility for, among other things, (1) providing information security protections commensurate with the risk and magnitude of harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of the agency’s information systems and information; (2) ensuring that senior agency officials provide security for the information and systems that support the operations and assets under their control; and (3) delegating to the agency chief information officer (CIO) the authority to ensure compliance with the requirements imposed on the agency. FISMA also requires the CIO to designate a senior agency information security officer to carry out the information security-related responsibilities.
During GAO’s fiscal year 2016 audit, SEC had demonstrated considerable progress in improving information security by implementing 47 of the 58 recommendations we had made in prior audits that had not been implemented by the conclusion of the fiscal year 2015 audit. Nevertheless, although SEC submitted evidence of taking action to resolve all 58 previously reported recommendations, its actions were not sufficient to fully resolve 11 recommendations. In addition, 15 deficiencies identified during the fiscal year 2016 audit limited the effectiveness of SEC’s controls for protecting the confidentiality, integrity, and availability of its information systems. For example, the commission did not consistently control logical access to its financial and general support systems. It also used unsupported software to process financial data. Further, while SEC generally implemented separation of duties, it allowed incompatible duties for one person. These deficiencies existed, in part, because the commission did not fully implement key elements of its information security program. The newly identified deficiencies resulted in 2 recommendations to SEC to more fully implement aspects of its information security program and 13 recommendations to enhance access controls and other security controls over its financial systems. Table 1 summarizes SEC’s progress toward addressing the prior and newly identified information security recommendations. Cumulatively, the deficiencies decreased assurance about the reliability of the data processed by key SEC financial systems. While not individually or collectively constituting a material weakness or significant deficiency, these deficiencies warrant SEC management’s attention. Until SEC mitigates these deficiencies, its financial and support systems and the information they contain will continue to be at unnecessary risk of compromise. 
SEC resolved 47 of the 58 previously reported information system control deficiencies in the areas of security management, access controls, configuration management, and separation of duties. For example, the commission offered physical security awareness training to its employees; enforced password expiration on the key financial application server; set access permissions for sensitive files; and operated a fully functioning contingency operations site that would be used in the event of a disaster. Nevertheless, SEC had not fully mitigated 11 of the 58 previously reported deficiencies affecting its financial and general support systems. For example, SEC had not maintained and monitored configuration baseline rules for its firewalls, and it had not documented a comprehensive physical inventory of the systems and applications in the production environment. As of September 2016, SEC was still at risk because it did not have the baselines needed to define and monitor changes to its systems, applications, and inventory. A basic management objective for any organization is to protect the resources that support its critical operations and assets from unauthorized access. Organizations accomplish this by designing and implementing controls that are intended to prevent, limit, and detect unauthorized access to computer resources (e.g., data, programs, equipment, and facilities), thereby protecting them from unauthorized disclosure, modification, and loss. Specific access controls include (1) boundary protection, (2) identification and authentication of users, (3) authorization restrictions, (4) cryptography, (5) audit and monitoring procedures, and (6) physical security. Without adequate access controls, unauthorized individuals, including intruders and former employees, can surreptitiously read and copy sensitive data and make undetected changes or deletions for malicious purposes or for personal gain.
In addition, authorized users could intentionally or unintentionally modify or delete data or execute changes that are outside of their authority. Although SEC had issued policies and implemented controls based on those policies, it did not consistently: (1) protect its network boundaries from possible intrusions; (2) identify and authenticate users; (3) authorize access to resources; (4) audit and monitor actions taken on the commission’s systems and network; and (5) encrypt sensitive information while in transmission. Boundary protection controls govern logical connectivity into and out of networks as well as connectivity to and from network-connected devices. Implementing multiple layers of security to protect an information system’s internal and external boundaries provides defense in depth. By using a defense-in-depth strategy, entities can reduce the risk of a successful cyberattack. For example, multiple firewalls can be deployed to prevent both outsiders and trusted insiders from gaining unauthorized access to systems. At the host or device level, logical boundaries can be controlled through inbound and outbound filtering provided by access control lists (ACL) and host-based firewalls. At the system level, any connections to the Internet, or to other external and internal networks or information systems, should occur through controlled interfaces. To be effective, remote access controls should be properly implemented in accordance with authorizations that have been granted. For one key financial system, SEC consolidated all internal firewalls in order to better manage its boundary protection controls; however, it configured the ACLs on the host-based firewalls supporting the key financial system’s servers to allow excessive inbound and outbound traffic. As a result, SEC introduced a vulnerability that could allow unauthorized access to the system.
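As an illustration of the least-privilege filtering described above, the sketch below models a host-based firewall’s access control list in a few lines of Python. It is a conceptual sketch, not SEC’s configuration: the rules, ports, and the deny-by-default policy shown are assumptions for illustration only.

```python
# Sketch of a host-based firewall ACL: deny by default, permit only
# traffic that matches an explicit (direction, protocol, port) rule.
# All rules here are hypothetical examples, not SEC's actual settings.
ACL = {
    ("inbound", "tcp", 443),    # HTTPS to the application
    ("inbound", "tcp", 22),     # SSH from the management network
    ("outbound", "tcp", 1521),  # connection to the database tier
}

def permitted(direction: str, protocol: str, port: int) -> bool:
    """Return True only if the traffic matches an explicit allow rule."""
    return (direction, protocol, port) in ACL
```

Auditing such an ACL amounts to confirming that every allow rule maps to a documented business need; entries that admit broad ranges of inbound or outbound traffic, like those found on the servers discussed above, are the red flags.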
Information systems need to be managed to effectively control user accounts and identify and authenticate users. Users and devices should be appropriately identified and authenticated through the implementation of adequate logical access controls. Users can be authenticated using mechanisms such as a password and user identification combination. SEC policy requires default passwords in operating systems, databases, and web servers to be changed upon installation. Also, the policy states that information system owners should review user accounts and associated access privileges to ensure appropriate access and that terminated or transferred employees do not retain improper information system access. However, SEC did not fully implement controls for identifying and authenticating users. For example, it did not always enforce individual accountability: 13 of the 42 user accounts we reviewed on the servers of three key financial systems had the same default password. Also, SEC did not disable these 13 active user accounts even though they had never been used. As a result, increased risk exists that the accounts could be compromised and used by unauthorized individuals to access sensitive financial data. Authorization encompasses access privileges granted to a user, program, or process. It involves allowing or preventing actions by that user based on predefined rules. Authorization includes the principles of legitimate use and “least privilege.” Access rights and privileges are used to implement security policies that determine what a user can do after being allowed into the system. Maintaining access rights, permissions, and privileges is one of the most important aspects of administering system security.
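The identification and authentication hygiene described earlier — default passwords changed at installation, never-used accounts disabled — lends itself to a simple automated check. The sketch below is hypothetical: the account data and default-password list are invented, and real systems would compare salted hashes rather than the bare digests shown.

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

# Hypothetical account-hygiene check: flag accounts still carrying a
# known default password, or never used since creation. Real systems
# store salted hashes; bare SHA-256 is used here only for brevity.
DEFAULT_PASSWORD_HASHES = {
    hashlib.sha256(b"changeme").hexdigest(),
    hashlib.sha256(b"password1").hexdigest(),
}

@dataclass
class Account:
    name: str
    password_hash: str
    last_login: Optional[str]  # None means the account was never used

def flag_risky(accounts):
    """Return names of accounts violating either hygiene rule."""
    return [
        a.name for a in accounts
        if a.password_hash in DEFAULT_PASSWORD_HASHES or a.last_login is None
    ]
```

A check like this, run on each account-review cycle, would surface both the shared default passwords and the active-but-never-used accounts of the kind noted in the finding above.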
SEC policy states that system owners shall explicitly authorize access to file permissions and privileges, including approving, authorizing, and documenting system account actions (create, modify, disable, remove) for the specified resources for which the users have primary responsibility, as well as reviewing access authorizations and granting or denying access to SEC information and information systems. SEC policy also states that information systems must prevent nonprivileged users from executing privileged functions, including disabling, circumventing, or altering implemented security safeguards or countermeasures. However, SEC did not always adequately restrict access privileges to ensure that only authorized individuals were granted access to its systems. In addition, SEC did not consistently monitor the role-based access privileges assigned to user groups for an externally managed financial system. The Enterprise Service Center (ESC) assigned SEC users to user groups with access privileges in the ESC Prism application that were not always consistent with the privileges authorized by SEC policy or access request forms. For example, ESC assigned 16 of 24 ESC Prism users to groups that were not used by SEC. As a result, users had excessive levels of access that were not required to perform their jobs. This could allow insiders, or attackers who penetrate SEC networks, to inadvertently or deliberately modify financial data or other sensitive information. Cryptographic controls can help protect the confidentiality of data and computer programs by rendering data unintelligible to unauthorized users and can protect the integrity of transmitted or stored data. NIST guidance states that the use of encryption by organizations can reduce the probability of unauthorized disclosure of information. NIST also recommends that organizations employ cryptographic mechanisms to prevent unauthorized disclosure of information stored on agency networks.
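A periodic access review of the kind SEC policy requires can be automated by diffing assigned privileges against authorized ones. The sketch below is illustrative only; the user and group names are invented stand-ins, not the actual ESC Prism roles.

```python
# Compare each user's assigned groups against the groups authorized on
# that user's access request form; anything assigned but not authorized
# is excess privilege. Names are hypothetical, not actual Prism roles.
def excessive_privileges(authorized: dict, assigned: dict) -> dict:
    """Map each user to the set of groups assigned but never authorized."""
    result = {}
    for user, groups in assigned.items():
        excess = groups - authorized.get(user, set())
        if excess:
            result[user] = excess
    return result
```

Running such a diff on every review cycle would surface assignments like the 16 of 24 Prism users placed in groups SEC did not use.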
However, SEC did not fully encrypt sensitive information stored on servers supporting a key financial system. Without proper encryption, increased risk exists that unauthorized users could identify and use the information to gain inappropriate access to system resources. Audit and monitoring involves the regular collection, review, and analysis of auditable events for indications of inappropriate or unusual activity, and the appropriate investigation and reporting of such activity. These controls can help security professionals routinely assess computer security, perform investigations during and after an attack, and recognize an ongoing attack. Audit and monitoring technologies include network and host-based intrusion detection systems, audit logging, security event correlation tools, and computer forensics. Using automated mechanisms can help integrate audit monitoring, analysis, and reporting into an overall process for investigating and responding to suspicious activities. SEC policy states that intrusion detection parameters should be explicitly set. However, SEC did not fully implement an intrusion detection capability for key financial systems. As a result, SEC may not be able to detect or investigate some unauthorized system activity. Configuration management controls provide reasonable assurance that systems are configured securely and operating as intended. As part of its configuration management efforts, SEC policy requires protection from malicious code, including detection and eradication. In addition, patch management, a component of configuration management, is an important element in mitigating the risks associated with known vulnerabilities. When a vulnerability is discovered, the vendor may release a patch to mitigate the risk.
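One simple instance of the audit-and-monitoring rules discussed above is detecting repeated failed logins from a single source. The sketch below assumes a hypothetical event format and threshold; a production intrusion detection capability would correlate many such rules across hosts and event types.

```python
from collections import Counter

# Minimal audit-log rule: flag any source with repeated authentication
# failures. The event dictionaries and threshold are hypothetical.
FAILURE_THRESHOLD = 3

def suspicious_sources(events, threshold=FAILURE_THRESHOLD):
    """Return sources whose failed-login count reaches the threshold."""
    failures = Counter(
        e["source"]
        for e in events
        if e["action"] == "login" and not e["success"]
    )
    return {src for src, count in failures.items() if count >= threshold}
```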
If a patch is not applied in a timely manner or if a vendor no longer supports the system and does not prepare a patch, an attacker can exploit a known vulnerability not yet mitigated, enabling unauthorized access to the system or enabling users to have access to greater privileges than authorized. SEC improved several configuration management controls for its financial information systems. For example, it conducted malicious code reviews and ensured only approved software changes were made. In addition, SEC enhanced its patch management process by scheduling and deploying patches for its two operating system platforms on its financial application servers. However, SEC also used software that was no longer supported by the software’s vendor. Specifically, the commission continued to use an outdated version of an operating system on its key financial systems although the operating system’s vendor stopped supporting this version of the software over a decade ago and no longer develops or releases patches for the software. As a result, increased risk exists that an attacker could exploit newly discovered vulnerabilities associated with the outdated operating system. To reduce the risk of error or fraud, duties and responsibilities for authorizing, processing, recording, and reviewing transactions should be separated to ensure that one individual does not control all critical stages of a process. Effective separation of duties starts with effective entity-wide policies and procedures that are implemented at the system and application levels. Often, separation of incompatible duties is achieved by dividing responsibilities among two or more organizational groups, which diminishes the likelihood that errors and wrongful acts will go undetected because the activities of one individual or group will serve as a check on the activities of the other. 
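Returning to the patch-management risk described above — software past its vendor’s support date receiving no fixes for new vulnerabilities — that condition is straightforward to surface from an inventory. In this sketch the product names and end-of-support dates are invented for illustration; a real program would draw them from vendor lifecycle announcements.

```python
from datetime import date

# Hypothetical end-of-support catalog mapping (product, version) pairs
# to the date the vendor stopped releasing patches. Dates are invented.
END_OF_SUPPORT = {
    ("LegacyOS", "5.1"): date(2006, 3, 31),
    ("LegacyOS", "7.0"): date(2020, 1, 14),
    ("AppServer", "9.2"): date(2018, 6, 1),
}

def unsupported(inventory, as_of):
    """Return (product, version) pairs whose support ended before as_of."""
    return [
        item for item in inventory
        if item in END_OF_SUPPORT and END_OF_SUPPORT[item] < as_of
    ]
```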
Inadequate separation of duties increases the risk that erroneous or fraudulent transactions could be processed, improper program changes implemented, and computer resources damaged or destroyed. SEC policy states that information system owners must separate duties of individuals as necessary to provide appropriate management and security oversight and define information system access authorizations to support the separation of duties. SEC was generally successful in implementing separation of duties controls, with one exception. Of the 217 ESC Prism users, the commission assigned one user to two roles that violated the separation of duties principle. Although the violation involved only one person, it was significant because of the importance of the roles involved. The user was assigned to both the “contracting officer’s security group” and the “requisitioner’s security group with requisition approval.” According to an SEC official, users assigned to the contracting officer’s security group have the access permissions to approve and obligate awards, and users assigned to the requisitioner’s security group can, with approval, commit funds. As a result of being in both security groups, this person had the ability to both approve and obligate awards and then commit funds. An information security program should establish a framework and continuous cycle of activity for assessing risk, developing and implementing effective security procedures, and monitoring the effectiveness of these procedures. An underlying reason for the information security control deficiencies in SEC’s financial systems was that, although the agency developed and documented an information security program, it did not fully implement aspects of the program. In particular, SEC did not always update system security plans or fully implement its continuous monitoring capability.
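Separation-of-duties conflicts like the one found in ESC Prism can be caught mechanically by declaring incompatible role pairs and scanning assignments against them. The role names below paraphrase the finding and are not the application’s actual group names.

```python
# Declare pairs of role sets that one person must not hold together,
# then scan user-role assignments for violations. Names are illustrative.
INCOMPATIBLE_PAIRS = [
    ({"contracting_officer"}, {"requisitioner_with_approval"}),
]

def sod_violations(user_roles: dict) -> list:
    """Return users whose roles span both halves of an incompatible pair."""
    violations = []
    for user, roles in user_roles.items():
        for first, second in INCOMPATIBLE_PAIRS:
            if roles & first and roles & second:
                violations.append(user)
    return violations
```

Encoding the incompatible pairs once, in policy-as-data form, lets the same check run automatically whenever group membership changes rather than only at periodic reviews.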
In addition, SEC made significant progress resolving previously reported deficiencies, but several deficiencies remained partially unresolved. FISMA requires each federal agency to have policies and procedures that ensure compliance with minimally acceptable system configuration requirements, including subordinate plans for providing adequate information security for networks, facilities, and systems or groups of systems, as appropriate. Consistent with this requirement, SEC policy states that information system owners of the GSS and major applications should be responsible for developing, documenting, and maintaining an inventory of information system components that: accurately reflects the current system; includes all components within the authorization boundary of the system; and provides the level of granularity deemed necessary for tracking and reporting within the system. In addition, SEC policy requires that the system component inventory be reviewed and updated when components are installed or removed and when system security plans are updated. Further, SEC policy states that the system security plan should be updated throughout the system life cycle. However, SEC did not update its system security plans to reflect the current operational environment. For example, it did not update network diagrams and asset inventories in the system security plans for GSS and a key financial system. Each of the several iterations of network diagrams and supporting schedules SEC provided to us during the audit reflected incomplete or inaccurate representations of the operating environment. To illustrate, inconsistencies existed among the network diagrams, reports from SEC’s automated asset tracking tool, and results from the automated scanning of the environment. Additionally, several previously decommissioned components remained installed, powered on, and accessible on its network.
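The inventory inconsistencies described above — network diagrams, the asset tracking tool, and scan results disagreeing — can be detected by reconciling the documented inventory against what scanning actually observes. The hostnames in this sketch are hypothetical.

```python
# Reconcile the documented asset inventory against hosts observed by
# network scanning. "Ghosts" are live but undocumented (for example,
# decommissioned components still powered on); "stale" entries are
# documented in the plan but never observed on the network.
def reconcile(documented, scanned):
    """Return undocumented live hosts and documented-but-unseen entries."""
    documented, scanned = set(documented), set(scanned)
    return {
        "ghosts": scanned - documented,
        "stale": documented - scanned,
    }
```

Either category signals that the system security plan no longer reflects the operating environment and that the baseline needs to be corrected.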
The system security plans were not current because SEC personnel did not update the plans, asset inventory, or network diagrams during the current modernization of the key financial system’s environment. The modernization effort, along with other routine maintenance, had increased the frequency of hardware added to or removed from the environment. The commission did not remove assets from the inventory or update the network diagram until the hardware had been physically removed from the data center, even though the hardware was not operational. Without up-to-date, complete, and accurate system inventories and network diagrams in the system security plans, SEC lacks the baseline configuration information needed to adequately secure its systems. An important element of risk management is ensuring that policies and controls intended to reduce risk are effective on an ongoing basis. To do this effectively, top management should understand the agency’s security risks and actively support and monitor the effectiveness of its security policies. NIST guidance and SEC policy state that the agency should develop a continuous monitoring strategy. SEC policy requires implementation of a continuous monitoring program that is to include (1) establishment of system-dependent monthly automated scans, (2) ongoing security control assessments, and (3) correlation and analysis of security related information generated by assessments. SEC did not fully implement and continuously monitor its secure configurations. While it made improvements to address prior-year GAO recommendations by developing and documenting approved secure configuration baselines based on NIST’s National Checklist Program, SEC had not fully implemented those secure configurations across the infrastructure present in the GSS and key financial systems.
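Continuous monitoring of secure configurations, as the policy above requires, reduces in its simplest form to diffing each device’s actual settings against the approved baseline. The settings below are illustrative, loosely modeled on common hardening checklists rather than SEC’s actual baselines.

```python
# Approved secure baseline (illustrative settings, not SEC's).
BASELINE = {
    "password_min_length": 12,
    "telnet_enabled": False,
    "audit_logging": True,
}

def deviations(actual: dict, baseline: dict = BASELINE) -> dict:
    """Return settings where a device differs from the approved baseline,
    mapped to (actual_value, expected_value)."""
    return {
        key: (actual.get(key), expected)
        for key, expected in baseline.items()
        if actual.get(key) != expected
    }
```

An automated compliance scanner applies exactly this kind of comparison across every operating system, database, and network device in scope, which is why gaps in scanning coverage leave misconfigurations undetected.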
Further, although the commission employed a technology to facilitate automated configuration compliance scanning throughout the GSS and the key financial systems, it determined this technology to be too inefficient and cumbersome to facilitate automated scanning of technical configuration compliance and, during the fiscal year 2016 audit, was in the process of replacing it with a new capability. Thus, it did not consistently perform compliance scanning on multiple operating systems, databases, and network devices. However, such scanning is important for identifying vulnerabilities existing in a network. Our scans of SEC IT resources identified vulnerabilities affecting operating systems, databases, and network devices. Although additional analysis and coordination by responsible SEC organizations may have determined that some of the potential vulnerabilities may have been mitigated by compensating controls or other factors, the lack of processes noted above increases the risk that known vulnerabilities or misconfigurations will not be identified and remediated in a timely manner. Without implementing an effective process for monitoring, evaluating, and remedying identified deficiencies, SEC would not be aware of potential deficiencies that could affect the integrity and availability of its information systems. Information security control deficiencies in the SEC computing environment may jeopardize the confidentiality, integrity, and availability of information residing in and processed by its systems. Specifically, SEC configured its internal firewalls to allow too many internal users without legitimate business needs to access a key financial system environment. SEC also did not enable host-based firewalls on all key financial system servers and a major operating system server, which made them vulnerable to unauthorized changes. In addition, SEC operated a financial system server with an unsupported operating system, risking exposure of financial data.
Further, deficiencies exist in part because SEC did not maintain up-to-date network diagrams and asset inventories in the system security plans for GSS and a key financial system to accurately and completely reflect the current operating environment, and it also did not fully implement and continuously monitor GSS and the key financial system’s secure configurations. Cumulatively, these deficiencies decreased assurance regarding the reliability of the data processed by key financial systems. Until SEC mitigates its control deficiencies, its financial and support systems and the information they contain will continue to be at unnecessary risk of compromise. We recommend that the Chairman of SEC take two actions to more effectively manage the commission’s information security program: Maintain up-to-date network diagrams and asset inventories in the system security plans for GSS and a key financial system to accurately and completely reflect the current operating environment. Perform continuous monitoring using automated configuration and vulnerability scanning on the operating systems, databases, and network devices. To address specific deficiencies in information security controls, we made 13 detailed recommendations in a separate limited official use only report. Those recommendations address access control, configuration management, and separation of duties. We received written comments on a draft of this report from SEC. In its comments, which are reprinted in appendix II, the commission concurred with the two recommendations addressing its information security program. If effectively implemented, these actions should enhance the effectiveness of SEC’s controls over its financial systems. In addition, SEC’s Chief Information Security Officer provided technical comments on the draft report via e-mail, which we considered and incorporated, as appropriate. We acknowledge and appreciate the cooperation and assistance provided by SEC management and staff during our audit.
If you have any questions about this report or need assistance in addressing these issues, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov or Nabajyoti Barkakati at (202) 512-4499 or barkakatin@gao.gov. GAO staff who made significant contributions to this report are listed in appendix III. Pursuant to statutory authority, GAO assesses the effectiveness of the Securities and Exchange Commission’s (SEC) internal control structure and procedures for financial reporting. Our objective was to determine the effectiveness of SEC’s information security controls for ensuring the confidentiality, integrity, and availability of its key financial systems and information. To assess information systems controls, we identified and reviewed SEC information systems control policies and procedures, conducted tests of controls, and held interviews with key security representatives and management officials concerning whether information security controls were in place, adequately designed, and operating effectively. This work was performed to support our opinion on SEC’s internal control over financial reporting as of September 30, 2016. We concentrated our evaluation primarily on the controls for systems and applications associated with financial processing. These systems were the (1) Delphi-Prism; (2) Electronic Data Gathering, Analysis, and Retrieval (EDGAR); (3) EDGAR/Fee Momentum; (4) FedInvest; (5) Federal Personnel and Payroll System/Quicktime and (6) general support systems. Our selection of the systems to evaluate was based on consideration of financial systems and service providers integral to SEC’s financial statements. 
We evaluated controls based on our Federal Information System Controls Audit Manual (FISCAM), which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information; National Institute of Standards and Technology standards and special publications; and SEC’s plans, policies, and standards. We assessed the effectiveness of both general and application controls by performing information system controls walkthroughs surrounding the initiation, authorization, processing, recording, and reporting of financial data (via interviews, inquiries, observations, and inspections); reviewing SEC policies and procedures; observing technical controls implemented on selected systems; testing specific controls; and scanning and manually assessing SEC systems and applications, including EDGAR/Fee Momentum, and related general support system network devices, and servers. We also evaluated the Statement on Standards for Attestation Engagements report and performed testing on key information technology controls on the following applications and systems: Delphi- Prism, FedInvest, and Federal Personnel and Payroll System. To determine the status of SEC’s actions to correct or mitigate previously reported information security deficiencies, we identified and reviewed its information security policies, procedures, practices, and guidance. We reviewed prior GAO reports to identify previously reported deficiencies and examined the commission’s corrective action plans to determine which deficiencies it had reported as corrected. For those instances where SEC reported that it had completed corrective actions, we assessed the effectiveness of those actions by reviewing appropriate documents, including SEC-documented corrective actions, and interviewing the appropriate staffs, including system administrators. 
To assess the reliability of the data we analyzed, such as information system control settings, specific control evaluations for each accounting cycle, and security policies and procedures, we corroborated them by interviewing SEC officials, including programmatic personnel and system administrators, to determine whether the data obtained were consistent with system configurations in place at the time of our review. In addition, we observed configuration of these settings in the network. Based on this assessment, we determined the data were reliable for the purposes of this report. We performed this work in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provided a reasonable basis for our findings and conclusions based on our audit objective. In addition to the contacts named above, GAO staff who made major contributions to this report are Michael Gilmore and Duc Ngo (Assistant Directors); Angela Bell; Monica Perez-Nelson; Priscilla Smith; Henry Sutanto (Analyst-in-Charge); and Adam Vodraska.

SEC enforces securities laws, issues rules and regulations that provide protection for investors, and helps to ensure that securities markets are fair and honest. SEC uses computerized information systems to collect, process, and store sensitive information, including financial data. Having effective information security controls in place is essential to protecting these systems and the information they contain. Pursuant to statutory authority, GAO assesses the effectiveness of SEC's internal control structure and procedures for financial reporting.
As part of its audit of SEC's fiscal years 2016 and 2015 financial statements, GAO assessed whether controls were effective in protecting the confidentiality, integrity, and availability of key financial systems and information. To do this, GAO examined SEC's information security policies and procedures, tested controls, and interviewed key officials on whether controls were in place, adequately designed, and operating effectively.

The Securities and Exchange Commission (SEC) improved the security controls over its key financial systems and information. In particular, as of September 2016, the commission had resolved 47 of the 58 recommendations GAO had previously made that had not been implemented by the conclusion of the FY 2015 audit. However, SEC had not fully implemented 11 recommendations, in areas that included consistently protecting its network boundaries from possible intrusions, identifying and authenticating users, authorizing access to resources, auditing and monitoring actions taken on its systems and network, and encrypting sensitive information in transmission. In addition, 15 newly identified control deficiencies limited the effectiveness of SEC's controls for protecting the confidentiality, integrity, and availability of its information systems. For example, the commission did not consistently control logical access to its financial and general support systems. In addition, although the commission enhanced its configuration management controls, it used unsupported software to process financial data. Further, SEC did not adequately segregate incompatible duties for one of its personnel. These weaknesses existed, in part, because SEC did not fully implement key elements of its information security program.
For example, SEC did not maintain up-to-date network diagrams and asset inventories in its system security plans for its general support system and its key financial system application to accurately and completely reflect the current operating environment. The commission also did not fully implement and continuously monitor those systems' security configurations. Twenty-six information security control recommendations related to 26 deficiencies found in SEC's financial and general support systems remained unresolved as of September 30, 2016. (See table.) Cumulatively, the deficiencies decreased assurance about the reliability of the data processed by key SEC financial systems. While not individually or collectively constituting a material weakness or significant deficiency, these deficiencies warrant SEC management's attention. Until SEC mitigates these deficiencies, its financial and support systems and the information they contain will continue to be at unnecessary risk of compromise. In addition to the 11 prior recommendations that have not been fully implemented, GAO recommends that SEC take 13 actions to address newly identified control deficiencies and 2 actions to more fully implement its information security program. In commenting on a draft of this report, SEC concurred with GAO's recommendations.
The missions of DOE’s 23 laboratories have evolved over the last 55 years. Originally created to design and build atomic bombs under the Manhattan Project, these laboratories have since expanded to conduct research in many disciplines—from high-energy physics to advanced computing at facilities throughout the nation. DOE’s goal is to use the laboratories for developing clean energy sources and pollution-prevention technologies, for ensuring enhanced security through reductions in the nuclear threat, and for continuing leadership in the acquisition of scientific knowledge. The Department considers the laboratories a key to a growing economy fueled by technological innovations that increase U.S. industrial competitiveness and create new high-skill jobs for American workers. Missions have expanded in the laboratories for many reasons, including changes in the world’s political environment. Nine of DOE’s 23 laboratories are multiprogram national laboratories; they account for about 70 percent of the total laboratory budget and about 80 percent of all laboratory personnel. Three of these multiprogram national laboratories (Lawrence Livermore, Los Alamos, and Sandia) conduct the majority of DOE’s nuclear weapons defense activities. Facing reduced funding for nuclear weapons as a result of the Cold War’s end and the signing of the comprehensive nuclear test ban treaty, these three laboratories have substantially diversified to maintain their preeminent talent and facilities. The remaining laboratories in DOE’s system are program- and mission-dedicated facilities. (See app. I for a list of all DOE laboratories.) DOE owns the laboratories and contracts with universities and private-sector organizations for the management and operation of 19, while providing federal staff for the remaining 4. The Congress is taking a growing interest in how the national laboratories are being managed. 
Recently introduced legislation would restructure the missions of the laboratories or manage them in new ways. Some previously proposed organizational options include converting the laboratories that are working closely with the private sector into independent entities or transferring the responsibility for one or more laboratories to other federal agencies whose missions are closely aligned with those of particular DOE laboratories. We have reported to the Congress that success in DOE’s efforts to sharpen the focus and improve the management of its laboratories has been elusive and that the challenges facing the Department raise concerns about how effectively it can manage reform initiatives. Over the past several years, many government advisory groups have raised concerns about how DOE manages its national laboratory system. Major concerns centered on three issues: the laboratories’ missions are unfocused; DOE micromanages the laboratories; and the laboratories are not operating as an integrated system. More recent advisory groups have reported similar weaknesses, prompting the Congress to take a close look at how the national laboratory system is meeting its objectives. We identified nearly 30 reports by a wide variety of advisory groups on various aspects of the national laboratories’ management and missions. (See app. II for a list of past reports.) Most of these reports have been prepared since the early 1980s.
The reports include the following: In 1982, DOE’s Energy Research Advisory Board reported that the national laboratories duplicate private-sector research and that while DOE could take better advantage of the national laboratories’ capabilities, it needed to address its own management and organizational inefficiencies, which hamper the achievement of a more effective laboratory system. In 1983, a White House Science Council Panel found that while DOE’s laboratories had well-defined missions for part of their work, most activities were fragmented and unrelated to the laboratories’ main responsibilities. In 1992, DOE’s Secretary of Energy Advisory Board found that the laboratories’ broad missions, coupled with rapidly changing world events, had “caused a loss of coherence and focus at the laboratories, thereby reducing their overall effectiveness in responding to their traditional missions as well as new national initiatives. . . .” A 1993 report by an internal DOE task force reported that missions “must be updated to support DOE’s new directions and to respond to new national imperatives. . . .” The most recent extensive review of DOE’s national laboratories was performed by a task force chaired by Robert Galvin, former Chairman of the Motorola Corporation. Consisting of distinguished leaders from government, academia, and industry, the Galvin Task Force was established to examine alternatives for directing the laboratories’ scientific and engineering resources to meet the economic, environmental, defense, scientific, and energy needs of the nation. Its 1995 report identified many of the problems noted in earlier studies and called for a more disciplined focus for the national laboratories, also reporting that the laboratories may be oversized for their role. The Galvin Task Force reported that the traditional government ownership and contractor operation of the laboratories has not worked well.
According to its report, increasing DOE’s administration and oversight transformed the laboratories from traditional contractor-operated systems into a virtual government-operated system. The report noted that many past studies of DOE’s laboratories had resulted in efforts to fine-tune the system but led to little fundamental improvement. Regarding the management structure of DOE’s non-weapons-oriented laboratories, the task force recommended a major change in the organization and governance of the laboratory system. The task force envisioned a not-for-profit corporation governed by a board of trustees, consisting primarily of distinguished scientists and engineers and experienced senior executives from U.S. industry. Such a change in governance, the task force reported, would improve the standards and quality of work and at the same time generate over 20 percent in cost savings. Other findings by the task force and subsequent reports by other advisory groups have focused on the need for DOE to integrate R&D programs across the Department and among the laboratories to increase management efficiencies, reduce administrative burdens, and better define the laboratories’ missions. In June 1995, DOE’s Task Force on Strategic Energy Research and Development, chaired by energy analyst Daniel Yergin, issued a report on DOE’s energy R&D programs. The report assessed the rationale for the federal government’s support of energy R&D, reviewed the priorities and management of the overall program, and recommended ways of making it more efficient and effective. The task force recommended that DOE streamline its R&D management, develop a strategic plan for energy R&D, eliminate duplicative laboratory programs and research projects, and reorganize and consolidate dispersed R&D programs at DOE laboratories. 
In August 1995, the National Science and Technology Council examined laboratories in DOE, the Department of Defense (DOD), and the National Aeronautics and Space Administration (NASA). The Council reported that DOE’s existing system of laboratory governance needs fundamental repair, stating that DOE’s laboratory system is bigger and more expensive than is needed to meet essential missions in energy, the environment, national security, and fundamental science. The Council recommended that DOE develop ways to eliminate apparent overlap and unnecessary redundancy between its laboratory system and DOD’s and NASA’s. DOE’s Laboratory Operations Board was created in 1995 to focus the laboratories’ missions and reduce DOE’s micromanagement. Members serving on the Board from outside DOE have issued four different reports, which have noted the need to focus and define the laboratories’ missions in relation to the Department’s missions, integrate the laboratories’ programmatic work, and streamline operations, including the elimination or reduction of administrative burdens. In March 1997, the Office of Science and Technology Policy reported on laboratories managed by DOE, DOD, and NASA. The Office cited efforts by the three agencies to improve their laboratory management but found that DOE was still micromanaging its laboratories and had made little progress toward reducing the administrative burdens it imposes on its laboratories. The Office recommended a variety of improvements in performance measures, incentives, and productivity and urged more streamlined management. In March 1997, a report by the Institute for Defense Analyses (IDA) found that DOE’s processes for managing environment, safety, and health activities were impeding effective management. According to IDA, DOE’s onerous review processes undermined accountability and prevented timely decisions from being made and implemented throughout the entire nuclear weapons complex, including the national laboratories.
IDA specifically noted that DOE’s Defense Programs had confusing line and staff relationships, inadequately defined roles and responsibilities, and poorly integrated programs and functions. IDA concluded that DOE needed to strengthen its line accountability and reorganize its structure in several areas. At our request, DOE provided us with a listing of the actions it took in response to repeated calls for more focused laboratory missions and improved management. But while DOE has made progress—principally by reducing paperwork burdens on its laboratories—most of its actions are still in process or have unclear expectations and deadlines. Furthermore, the Department cannot demonstrate how its actions have resulted, or may result, in fundamental change. To analyze progress in laboratory management reform, we talked to DOE and laboratory officials and asked DOE to document the actions it has taken, is taking, or has planned to address the recommendations from several advisory groups. We used DOE’s responses, which are reprinted in appendix III, as a basis for discussions with laboratory and DOE officials and with 18 experts familiar with national laboratory issues. We asked these experts to examine DOE’s responses. Several of these experts had served on the Galvin Task Force and are currently serving on DOE’s Laboratory Operations Board (app. IV lists the experts we interviewed). 
The actions DOE said it is taking include:
- creating various internal working groups;
- strengthening the Energy R&D Council to facilitate more effective planning, budgeting, management, and evaluation of the Department’s R&D programs and to improve the linkage between research and technology development;
- increasing the use of private-sector management practices;
- adopting performance-based contracting and continuous improvement concepts;
- improving the oversight of efforts to enhance productivity and reduce overhead costs at the laboratories;
- expanding the laboratories’ work for other federal agencies;
- evaluating the proper balance between laboratories and universities for basic research;
- improving science and technology partnerships with industry;
- reducing unnecessary oversight burdens on laboratories;
- developing the Strategic Laboratory Missions Plan in July 1996, which identified laboratory activities in mission areas;
- creating the Laboratory Operations Board, which includes DOE officials and experts from industry and academia, to provide guidance and direction to the laboratories; and
- developing “technology roadmaps,” a strategic planning technique to focus the laboratories’ roles.

Most of the actions DOE reported to us are process oriented, incomplete, or only marginally related to past recommendations for change. For example, creating new task forces and strengthening old ones may be good for defining problems, but these measures cannot force decisions or effect change. DOE’s major effort to give more focus to laboratory missions was a Strategic Laboratory Missions Plan, published in July 1996. The plan describes the laboratories’ capabilities in the context of DOE’s missions and, according to the plan, will form the basis for defining the laboratories’ missions in the future. However, the plan is essentially a descriptive document that does not direct change. Nor does the plan tie DOE’s or the laboratories’ missions to the annual budget process.
When we asked laboratory officials about strategic planning, most discussed their own planning capabilities, and some laboratories provided us with their own self-generated strategic planning documents. None of the officials at the six laboratories we visited mentioned DOE’s Strategic Laboratory Missions Plan as an essential document for their strategic planning. A second action that DOE officials reported as a major step toward focusing the laboratories’ missions is the introduction of its “technology roadmaps.” These are described by DOE as planning tools that define the missions, goals, and requirements of research on a program-by-program basis. Officials told us that the roadmaps are used to connect larger departmental goals and are a way to institutionalize strategic planning within the Department. Roadmaps, according to DOE, will be an important instrument for melding the laboratories into a stronger and more integrated national system. DOE reports that roadmaps have already been developed in some areas, including nuclear science, high-energy physics, and the fusion program. Experts we interviewed agreed that creating roadmaps can be a way to gain consensus between DOE and the laboratories on a common set of objectives while also developing a process for reaching those objectives. However, some experts also stated that it is too soon to tell if this initiative will succeed. One expert indicated that the Department has not adequately analyzed its energy R&D problems on a national basis before beginning the roadmap effort. Another was uncertain about just how the roadmaps will work. According to a laboratory director who was recently asked to comment on the roadmap process, more emphasis needs to be placed on the results that are expected from the roadmaps, rather than on the process of creating them. Furthermore, roadmapping may be difficult in some areas, especially for activities involving heavy regulatory requirements. 
When we asked DOE officials about roadmapping, we were told that it is still a work in progress and will not be connected directly to the budget process for months or even years. Other DOE actions are also described as works in progress. For example, the use of performance-based contracts is relatively new, and the results from the strengthened R&D Council are still uncertain. The R&D Council includes the principal secretarial officers who oversee DOE’s R&D programs and is chaired by the Under Secretary. According to DOE, the Council has a new charter that will promote the integration and management of the Department’s R&D. One area in which DOE reports that it has made significant improvements is reducing the burden of its oversight on the national laboratories. Although some laboratory directors told DOE that their laboratories are still micromanaged, most officials and experts we interviewed credited DOE with reducing oversight as the major positive change since the Galvin Task Force issued its report in 1995. DOE’s major organizational action in response to recent advisory groups’ recommendations was to create the Laboratory Operations Board in April 1995. The purpose of the Board is to provide dedicated management attention to laboratory issues on a continuing basis. The Board includes 13 senior DOE officials and 9 external members drawn from the private sector, academia, and the public. The external members have staggered, 6-year terms and are required to assess DOE’s and the laboratories’ progress in meeting such goals as management initiatives, productivity improvement, mission focus, and programmatic accomplishments. The Board’s external members have issued four reports, the results of which largely mirror past findings by the many previous advisory groups. 
These reports have also concluded that DOE has made some progress in addressing the problems noted by the Galvin Task Force but that progress has been slow and many of the recommendations need further actions. Several experts we interviewed generally viewed the Board positively. Some, however, recognized that the Board’s limited advisory role is not a substitute for strong DOE leadership and organizational accountability. One expert commented that the effectiveness of the Board was diminished by the fact that it meets too infrequently (quarterly) and has had too many changes in membership to function as an effective adviser. Other experts agreed but indicated that the Board still has had a positive influence on reforming the laboratory system. One expert said that the Board’s membership is not properly balanced between internal and external members (although originally specifying 8 of each, the Board’s charter was recently changed to require 13 DOE members and only 9 external members). Another expert indicated that the Board could increase its effectiveness by more carefully setting an agenda for each year and then aggressively monitoring progress to improve its management of the laboratory system. Laboratory officials we interviewed also viewed the Board in generally positive terms; some commented that the Board’s presence gives the laboratories a much needed voice in headquarters. Others noted that the Board could eventually play a role in integrating the laboratories’ R&D work across program lines, thereby addressing a major concern about the laboratories’ lack of integration noted by past advisory groups. Although the Board can be an effective source of direction and guidance for the laboratories, it has no authority to carry out reform operations. One expert said that even though the Board monitors the progress of reform and makes recommendations, it is still advisory and cannot coordinate or direct specific actions. “ remains in the future. 
We have seen nothing yet.” “The response appears to sidestep the important need for lab-focused budgeting and strategic planning. The response discusses strategic planning in terms of DOE roadmaps for each program, not in terms of plans for each lab. Many labs continue to have a broad mission which crosses several . . . . While there may be an ongoing review by the , the labs have no evidence this is occurring and there have been no actions to address this.” “The wanted one clear lead lab in each mission or program, and DOE did not do that; there are 2 to 4 “principal” labs for each major business. Even for major program areas, 12 of the 15 programs listed in the department’s laboratory mission plan have more than one laboratory listed as primary performer.” “. . . it is not clear that DOE has made any significant progress as the response implies. . . .” “ tone of the response in [DOE’s response] is a bit more optimistic than actual experience in the field justifies. . . . Only modest improvements have occurred to this point. . . .” “No reorganization has occurred . . . no integration has occurred.” “the examples provided to substantiate the labs working together as a system are not all new, some were in place when wrote report. Also, there have been a number of meetings between the multi-program labs but that is the extent of any progress in this area (little change has been made).” “The labs have largely been held at arm’s length rather than included as part of the team. There have been recent efforts to correct this but there is no plan or action in place to correct it.” Additionally, when we asked several laboratory officials for examples of their progress in responding to past advisory groups, most spoke of actions they have taken on their own initiative. Few could cite an example of a step taken in direct response to a DOE action. 
For example, several laboratory officials cited an increased level of cooperation and coordination among the laboratories involved with similar R&D activities. They also mentioned adopting “best business practices” to increase productivity, reduce overhead costs, and measure progress by improved metrics. However, many laboratory officials told us that many of their actions were taken to meet other demands, such as legislative and regulatory mandates, rather than as direct responses to the studies’ recommendations or to DOE’s policies. Despite its efforts to respond to the advisory groups’ recommendations, DOE has not established either a comprehensive plan with goals, objectives, and performance measures or a system for tracking results and measuring accountability. As a result, DOE is unable to document its progress and cannot show how its actions address the major issues raised by the advisory groups. Experts we contacted noted that while DOE is establishing performance measures for gauging how well its contractors manage the laboratories, DOE itself lacks any such measurement system for ensuring that the objectives based on the advisory groups’ recommendations are met. “lack of clarity, inconsistency, and variability in the relationship between headquarters management and field organizations has been a longstanding criticism of DOE operations. This is particularly true in situations when several headquarters programs fund activities at laboratories. . . .” DOE’s Laboratory Operations Board also reported in 1997 on DOE’s organizational problems, noting that there were inefficiencies due to DOE’s complicated management structure. The Board recommended that DOE undertake a major effort to rationalize and simplify its headquarters and field management structure to clarify roles and responsibilities. Similarly, the 1997 IDA report cited serious flaws in DOE’s organizational structure. 
Noting long-standing concerns in DOE about how best to define the relationships between field offices and the headquarters program offices that sponsor work, the Institute concluded that “the overall picture that emerges is one of considerable confusion over vertical relationships and the roles of line and staff officials.” DOE’s complex organization stems from the multiple levels of reporting that exist between the laboratories, field offices (called operations offices), and headquarters program offices. DOE’s laboratories are funded and directed by program offices—the nine largest laboratories are funded by many different DOE program offices. The program office that usually provides the dominant funding serves as the laboratory’s “landlord.” The landlord program office is responsible for sitewide management at the laboratory and coordinates crosscutting issues, such as compliance with environment, safety, and health requirements at the laboratories. DOE’s Energy Research is landlord to several laboratories, including the Brookhaven and Lawrence Berkeley laboratories. Defense Programs is the landlord for the Los Alamos and Lawrence Livermore national laboratories. The program offices, in turn, report to either the Deputy Secretary or the Under Secretary. Further complicating reporting, DOE assigns each laboratory to a field operations office, whose director serves as the contract manager and also prepares the laboratory’s annual appraisal. The operations office, however, reports to a separate headquarters office under the Deputy Secretary, not to the program office that supplies the funding. Thus, while the Los Alamos National Laboratory is primarily funded by Defense Programs, it reports to a field manager who reports to another part of the agency.
As a consequence of DOE’s complex structure, IDA reported that unclear chains of command led to the weak integration of programs and functions across the Department, wide variations among field activities and relationships and processes, and confusion over the difference between line and staff roles. Weaknesses in DOE’s ability to manage the laboratories as an integrated system of R&D facilities are among the most persistent findings from past advisory groups, as well as from our 1995 management review of laboratory issues. We concluded that DOE had not coordinated the laboratories’ efforts as part of a diversified research system to solve national problems. Instead, DOE was managing the laboratories on a program-by-program basis. We recommended that DOE evaluate alternatives for managing the laboratories that would more fully support the achievement of clear and coordinated missions. To help achieve this goal, we said that DOE should strengthen the Office of Laboratory Management to facilitate the laboratories’ cooperation and resolve management issues across all DOE program areas. DOE did not strengthen this office. DOE’s primary response to our recommendations and those made by the Galvin Task Force was creating the Laboratory Operations Board. “DOE’s organization is a mess. You cannot tell who is the boss. DOE would be much more effective if layers were removed.” “DOE has not been responsive to recommendations for organizational changes and improvements in relationships.” Experts we consulted noted that DOE’s organizational weaknesses prevent reform. According to experts, DOE’s establishment of working groups to implement recommendations can be helpful for guiding reform, but these groups often lack the authority to make critical decisions or to enforce needed reforms. One expert commented that “the current DOE organizational structure is outdated . . .
there is no DOE leadership to implement changes.” We believe these organizational weaknesses are a major reason why DOE has been unable to develop long-term solutions to the recurring problems reported by advisory groups. The absence of a senior official in the Department with program and administrative authority over the operations of all the laboratories prevents effective management of the laboratories on an ongoing basis. As far back as 1982, an advisory group recognized the need for a strong central focus to manage the laboratories’ activities. In its 1982 report, DOE’s Energy Research Advisory Board noted “layering and fractionation of managerial and research and development responsibilities in DOE on an excessive number of horizontal and vertical levels. . . .” The Board recommended that DOE designate a high level official, such as a Deputy Under Secretary, whose sole function would be to act as DOE’s chief laboratory executive. Although DOE did not make this change, the Under Secretary has assumed responsibility for ensuring that laboratory reforms are accomplished. Despite many studies identifying similar deficiencies in the management of DOE’s national laboratories, fundamental change remains an elusive goal. While the Department has many steps in process to improve its management of the laboratories—such as new strategic planning tools and the Laboratory Operations Board—the results of these efforts may be long in coming and may fall short of expectations. Other actions DOE is taking are focused more on process than on results, and most are still incomplete, making it difficult to show how DOE intends to direct the laboratories’ missions and manage them more effectively as an integrated system—a major recommendation of past advisory groups. The Department has not developed a way to show how its actions will result in practical and permanent laboratory reform. 
We believe that without a strategy for ensuring that reforms actually take place, DOE will make only limited progress in achieving meaningful reforms. Establishing accountability for ensuring that its actions will take place in a timely manner is a challenge for DOE. The Department’s complex organizational structure creates unclear lines of authority that dilute accountability and make reforms difficult to achieve. In our 1995 management review of DOE’s laboratories, we reported that if DOE is unable to refocus the laboratories’ missions and develop a management approach consistent with these new missions, the Congress may wish to consider alternatives to the present relationships between DOE and the laboratories. Such alternatives might include placing the laboratories under the control of different agencies or creating a separate structure for the sole purpose of developing a consensus on the laboratories’ missions. Because of DOE’s uncertain progress in reforming the laboratories’ management, we continue to believe that the Congress may wish to consider such alternatives. To ensure the timely and effective implementation of recommendations from the many past laboratory advisory groups, we recommend that the Secretary of Energy develop a comprehensive strategy with objectives, milestones, DOE offices and laboratories responsible for implementation actions, performance measures that will be used to assess success in meeting implementation objectives, a tracking system to monitor progress, and regular progress reports on the status of implementation. We provided a draft of this report to DOE for review and comment. Although DOE did not comment directly on our conclusions and recommendation, the Department said that we did not take into account the full range of changes that it has undertaken. 
Changes discussed by DOE include a series of initiatives implemented to strengthen management, streamline the strategic planning processes, and enhance interactions between DOE and the laboratories. The Department also said that the cumulative effect of these changes reflects significant progress in implementing the recommendations of past advisory groups. While stating that much has been accomplished to improve the management of the national laboratories, DOE also acknowledges that more needs to be done to ensure a fully integrated management system, including better focusing the laboratories’ missions and tying them to the annual budget process. DOE anticipates that these actions will take at least 2 more years to accomplish. In preparing our report, we considered the actions the Department reports it has taken to implement past recommendations from laboratory advisory groups. While the types of reported actions are positive, progress made toward the goals and objectives of reform cannot be determined without a plan for measuring progress. As we state in our report, some laboratory directors have reported to DOE that they have not seen the results of some of these actions at their level. We continue to believe that DOE needs to monitor, measure, and evaluate its progress in accomplishing reforms. If it does not do so, it will have difficulty holding its managers accountable for making the needed changes and determining if funds are being spent wisely on the reform process. Appendix VI includes DOE’s comments and our response. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Secretary of Energy and the Director, Office of Management and Budget. We will make copies available to other interested parties on request. 
Our review was performed from December 1997 through August 1998 in accordance with generally accepted government auditing standards. See appendix V for a description of our scope and methodology. If you or your staff have any questions about this report, please call me at (202) 512-3841. Major contributors to this report are listed in appendix VII.

Lockheed Martin Idaho Technologies Co.
Sandia Corp. (Lockheed Martin)
University Research Assoc., Inc.
Southeastern Univ. Research Assoc., Inc.
Westinghouse Electric Corp.
KAPL, Inc. (Lockheed Martin)
Westinghouse Savannah River Co.

Department of Energy: Clearer Missions and Better Management Are Needed at the National Laboratories (GAO/T-RCED-98-25, Oct. 9, 1997).
External Members of the Laboratory Operations Board Analysis of Headquarter and Field Structure Issues, Secretary of Energy Advisory Board, DOE (Sept. 30, 1997).
Third Report of the External Members of the Department of Energy Laboratory Operations Board, Secretary of Energy Advisory Board, DOE (Sept. 1997).
DOE Action Plan for Improved Management of Brookhaven National Laboratory, DOE (July 1997).
The Organization and Management of the Nuclear Weapons Program, Institute for Defense Analyses (Mar. 1997).
Status of Federal Laboratory Reforms. The Report of the Executive Office of the President Working Group on the Implementation of Presidential Decision Directive PDD/NSTC-5, Office of Science and Technology Policy, Executive Office of the President (Mar. 1997).
Roles and Responsibilities of the DOE Nuclear Weapons Laboratories in the Stockpile Stewardship and Management Program (DOE/DP-97000280, Dec. 1996).
Second Report of the External Members of the Department of Energy Laboratory Operations Board, Secretary of Energy Advisory Board, DOE (Sept. 10, 1996).
First Report of the External Members of the Department of Energy Laboratory Operations Board, Secretary of Energy Advisory Board, DOE (Oct. 26, 1995).
Future of Major Federal Laboratories, National Science and Technology Council (Aug. 1995).
Energy R&D: Shaping Our Nation’s Future in a Competitive World, Final Report of the Task Force on Strategic Energy Research and Development, Secretary of Energy Advisory Board, DOE (June 1995).
Interagency Federal Laboratory Review Final Report, Office of Science and Technology Policy, Executive Office of the President (May 15, 1995).
Department of Energy: Alternatives for Clearer Missions and Better Management at the National Laboratories (GAO/T-RCED-95-128, Mar. 9, 1995).
Report of the Department of Energy for the Interagency Federal Laboratory Review in Response to Presidential Review Directive/NSTC-1 (Mar. 1995).
Alternative Futures for the Department of Energy National Laboratories, Secretary of Energy Advisory Board Task Force on Alternative Futures for the Department of Energy National Laboratories, DOE (Feb. 1995).
Department of Energy: National Laboratories Need Clearer Missions and Better Management (GAO/RCED-95-10, Jan. 27, 1995).
DOE’s National Laboratories: Adopting New Missions and Managing Effectively Pose Significant Challenges (GAO/T-RCED-94-113, Feb. 3, 1994).
Changes and Challenges at the Department of Energy Laboratories: Final Draft Report of the Missions of the Laboratories Priority Team, DOE (1993).
Final Report, Secretary of Energy Advisory Board (1992).
U.S. Economic Competitiveness: A New Mission for the DOE Defense Programs’ Laboratories, Roger Werne, Associate Director for Engineering, Lawrence Livermore National Laboratory (Nov. 1992).
A Report to the Secretary on the Department of Energy National Laboratories, Secretary of Energy Advisory Board Task Force on the Department of Energy National Laboratories, DOE (July 30, 1992).
Progress Report on Implementing the Recommendations of the White House Science Council’s Federal Laboratory Review Panel, Office of Science and Technology Policy, Executive Office of the President (July 1984).
The Management of Research Institutions: A Look at Government Laboratories, Hans Mark and Arnold Levine, Scientific and Technical Information Branch, National Aeronautics and Space Administration (1984).
Report of the White House Science Council Federal Laboratory Review Panel, Office of Science and Technology Policy, Executive Office of the President (May 20, 1983).
President’s Private Sector Survey on Cost Control Report on the Department of Energy, the Federal Energy Regulatory Commission, and the Nuclear Regulatory Commission (1983).
The Department of Energy Multiprogram Laboratories: A Report of the Energy Research Advisory Board to the United States Department of Energy (Sept. 1982).
Final Report of the Multiprogram Laboratory Panel, Volume II: Support Studies, Oak Ridge National Laboratory (Sept. 1982).
The Multiprogram Laboratories: A National Resource for Nonnuclear Energy Research, Development and Demonstration (GAO/EMD-78-62, Mar. 22, 1978).

Robert Galvin (Chairman), Chairman, Executive Committee, Motorola, Inc.

Pursuant to a congressional request, GAO reviewed the Department of Energy's (DOE) progress in making needed management reforms in its national laboratories, focusing on: (1) the recommendations made by various advisory groups for addressing management weaknesses at DOE and the laboratories; and (2) how DOE and its laboratories have responded to these recommendations.
GAO noted that: (1) for nearly 20 years, many advisory groups have found that while DOE's national laboratories do impressive research and development, they are unfocused, micromanaged by DOE, and do not function as an integrated national research and development system; (2) weaknesses in DOE's leadership and accountability are often cited as factors hindering fundamental reform of the laboratories' management; (3) as a result, advisory groups have made dozens of recommendations ranging from improving strategic planning to streamlining internal processes; (4) several past advisory groups have also suggested major organizational changes in the way the laboratories are directed; (5) to address past recommendations by advisory groups, DOE, at GAO's request, documented the actions it has taken, from creating new task forces to developing strategic laboratory plans; (6) while DOE has made some progress--principally by reducing paperwork burdens on its laboratories--most of its actions are still under way or have unclear outcomes; (7) furthermore, these actions lack the objectives, performance measures, and milestones needed to effectively track progress and account for results; (8) consequently, the Department cannot show how its actions have resulted, or may result, in fundamental change; (9) for example, its Strategic Laboratory Missions Plan, which was developed to give more focus and direction to the national laboratories, does not set priorities and is not tied to the annual budget process; (10) few experts and officials GAO consulted could show how the plan is used to focus missions or integrate the laboratory system; (11) DOE's latest technique for focusing the laboratories' missions is the technology roadmap; (12) roadmaps are plans that show how specific DOE activities relate to missions, goals, and performers; (13) roadmaps are a promising step but have been used in only a few mission areas and are not directly tied to DOE's budget process; (14) moreover, 
several laboratory directors questioned both the accuracy of the actions DOE has reported taking and their applicability at the laboratory level; (15) DOE's organizational weaknesses, which include unclear lines of authority, are a major reason why the Department has been unable to develop long-term solutions to the recurring problems reported by advisory groups; and (16) although DOE created the Laboratory Operations Board to help oversee laboratory management reform, it is only an advisory body within DOE's complex organizational structure and lacks the authority to direct change.
The Total Army Analysis process has evolved from one that determined the requirements for and allocated authorized personnel to units involved in war-fighting to one that does this for the entire Army. Although Total Army Analysis 2005 included some analysis of requirements for the “institutional Army,” the current version is the Army’s first attempt to identify requirements for the total Army. This analysis includes units required to fight two major-theater wars, forces needed to meet treaty requirements, and the institutional forces needed to augment and support these operations. The Army’s expanded analysis is an acknowledgment that its entire force structure supports its war-fighting element in one way or another. To quantify and communicate these requirements, Total Army Analysis 2007 determined the forces it needs by summing its requirements in five categories: War-fighting — This category includes combat and support forces that would deploy to fight two nearly simultaneous major theater wars. The Army starts with the combat forces specified in the Department of Defense (DOD) guidance and then determines the support forces needed to support its combat troops through quantitative analysis using computer modeling. For the first time, the Army also determined the requirements for a post-hostilities phase of the war in addition to the actual conflict stage. Subject matter experts were used to determine these post-hostilities requirements by analyzing the forces needed to perform an agreed-upon list of mission tasks. Small Scale Contingencies —This category includes those forces needed to meet certain treaty commitments since these missions would need to continue even in wartime. The Army assumes that all other forces engaged in contingencies would be re-deployed to war-fighting if a conflict arose and therefore does not calculate additional requirements for such contingencies as part of its Total Army Analysis. 
Strategic Reserve, Domestic Support, and Homeland Defense Operations —These are the forces needed to augment the major theater war requirements, conduct post-hostility operations, perform jobs left vacant by deploying forces, provide national missile defense, respond to incidents involving weapons of mass destruction, protect critical infrastructure, and provide military assistance to civilian authorities. Base Generating Force —This category includes those U.S.-based institutional force positions whose personnel provide for, access, organize, train, equip, maintain, project, redeploy, and restore Army forces. Military, civilian, and contractor personnel fill these positions. Base Engagement Force —This category includes those positions needed to meet the continuous/long-term forward presence that shapes the theater in support of U.S. interests. It includes all overseas institutional force positions currently filled by military, civilian, and contractor personnel. Once the Army sums up its force structure requirements from these five categories, it then compares its currently authorized force with these requirements to identify shortfalls. The Army then prepares a plan for reallocating forces to fill some unmet requirements in a manner that is expected to reduce war-fighting risk. This plan may include converting some types of forces into other types where critical shortfalls are projected. These reallocations and conversions will be made from fiscal year 2002 through fiscal 2007. Table 1 shows the results of Total Army Analysis 2007, including the distribution of the Army’s requirements among the five categories, the Army’s allocation of forces to meet these requirements, and the specific shortfalls that were identified. The Army has made significant progress toward making the Total Army Analysis a more credible and comprehensive process for determining requirements and identifying shortfalls in planned force structure. 
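The summation-and-netting step the analysis performs can be expressed as simple arithmetic. In the sketch below, the war-fighting, contingency, strategic reserve, and Base Generating Force figures are the approximate numbers reported in this analysis; the Base Engagement Force figure is inferred as the residual of the reported 1.717 million total and is an assumption, not a reported number:

```python
# Sketch of the Total Army Analysis summation step: sum requirements
# across the five categories, then net against authorized resources to
# find the overall shortfall. The Base Engagement Force figure below is
# an inferred residual (an assumption); the others are reported figures.

requirements = {
    "War-fighting": 725_000,
    "Small Scale Contingencies": 17_000,
    "Strategic Reserve/Domestic Support/Homeland Defense": 88_000,
    "Base Generating Force": 800_000,
    "Base Engagement Force": 87_000,  # inferred residual, not reported
}
authorized_resources = 1_530_000  # total resources reported in the analysis

total_requirement = sum(requirements.values())
shortfall = total_requirement - authorized_resources

print(f"Total requirement: {total_requirement:,}")  # 1,717,000
print(f"Overall shortfall: {shortfall:,}")          # 187,000
```

The 187,000-position result is consistent with the 45,000-position war-fighting shortfall and 142,000-position institutional shortfall discussed in this report.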
In the most recent analysis, the Army made the scenarios in its models for war-fighting forces more realistic, revised some assumptions to reflect more current data, and integrated the latest Army plans and innovations for reorganizing forces and modernizing logistics. To make the analysis more comprehensive, the Army calculated requirements for the entire Army to include civilian personnel and contractors—not just the military personnel associated with war-fighting. However, the Army is still refining the process and will need to address certain shortcomings before it has a sound process in place for determining all requirements. Over time, the Army has enhanced its analysis to provide a sounder basis for its war-fighting requirements. It has done this by incrementally incorporating more realistic and stringent assumptions and planning factors. During the most recent analysis, the Army included several changes that made Total Army Analysis 2007 more realistic and complete, some of which are related to our past recommendations. The major changes are as follows: In our review of Total Army Analysis 2005, we recommended that the Army develop more realistic scenarios to use in assessing its ability to win the two major-theater wars and in calculating the required force structure. Total Army Analysis 2007 uses more realistic scenarios, taking into account, for example, the effects of the enemy’s use of chemical and biological weapons, including those delivered by theater ballistic missiles. As a result, the Army identified the need for about 5,000 more medical personnel to treat casualties caused by chemical and biological weapons. In addition, the analysis allowed the Army to gauge the impact of these weapons on the ability of the United States to move personnel and cargo through seaports and airfields. 
In our reviews of the Army’s 2003 and 2005 analyses, we noted that the Army had not assessed how war-fighting might be affected by DOD guidance to redeploy forces from contingency operations to the war-fight. Thus, it did not know if disengaging units from ongoing contingency operations would present an obstacle to carrying out the National Military Strategy or if its force structure contained the numbers and types of units needed for the contingency operations. We found that the Army addressed both questions in Total Army Analysis 2007. We also recommended in our review of Total Army Analysis 2005 that the Army include in its analysis all phases of the wars. In Total Army Analysis 2007, the Army added a requirement for the post-hostilities phase of the wars. This phase was needed to recognize that, once the war was over, there would be a continuing need for forces to provide security, handle prisoners of war, and exercise control over the local population. In its 2007 analysis, the Army assessed the requirements for this phase and added about 12,000 personnel to its war-fighting requirements. The analysis has also been modified to integrate more current Army plans and initiatives. For example, advances in digital technology under the Army’s Force XXI initiative improved the lethality of Army tank units and allowed the Army to reduce the number of tanks per unit. Fewer crews, along with fewer vehicles to maintain, reduced the number of personnel required for an armored division. Also, the Army is currently pursuing a major initiative to transform the Army into a force that is more strategically responsive to the complete spectrum of operations. Although this transformation is still in its early stages and operational and logistical plans have not been fully developed, the analysis did include the known characteristics of the transformed force. 
The Army has also incorporated a number of logistics planning factors and improvement initiatives that together have reduced requirements for military support personnel by about 7 percent, or 17,000 personnel. These factors and improvements include the following: Revised medical planning factors specify that 80 percent of patients will be evacuated directly to the United States or other out-of-theater medical facilities, thereby reducing the number of medical personnel required in the theater. The logistics community is fielding digitized control systems, satellite-based movement tracking systems, and improved cargo-handling equipment that Army officials estimate will allow a 15 percent reduction in theater stockage levels and the personnel required to manage them. Improved vehicle engines are expected to reduce fuel consumption in theater by about 25 percent, thus requiring fewer people to transport, dispense, and guard fuel stocks. Total Army Analysis 2007 determined that 725,000 personnel were required to fight the two major theater wars, down from the 747,000 total reported in Total Army Analysis 2005. The 45,000-position shortfall in the war-fighting element of its force structure is also less than the 72,000-position shortfall identified in Total Army Analysis 2005. Army officials believe that this represents a reduction in war-fighting risk. Previous Army planning analyses did not include a requirement specifically to meet the needs of contingency operations because Army officials believed that DOD guidance did not allow the Army to create new units for such purposes. This is because it was presumed that these forces would disengage and redeploy to conflicts if they arose and therefore did not represent additive requirements. During Total Army Analysis 2007, however, the Army determined that two contingency operations would need to continue even if conflicts arose, since they represented U.S. treaty commitments.
These commitments are for operations in the Sinai to satisfy agreements under the 1979 Middle East Peace Treaty and for a rapid reaction force in Europe to satisfy Article 5 of the North Atlantic Treaty Organization Treaty. Accordingly, 17,000 personnel needed to satisfy these two treaty obligations were included in the Army’s total requirements. Also included in the Army’s determination of total requirements was a requirement for Strategic Reserve/Domestic Support/Homeland Defense forces, but the Army had not yet developed criteria for determining these requirements. DOD guidance allows force structure for these purposes but does not specify how the size of the force should be determined. Lacking criteria, the Army made the requirements for these missions equal to six National Guard divisions (about 88,000 personnel), which had not been given a specific mission in the war-fighting element. These National Guard forces have historically been treated as a hedge against larger-than-expected major conflicts. However, the appropriate size of the Strategic Reserve, and the National Guard divisions themselves, have been debated by DOD and others. DOD’s 1997 Quadrennial Defense Review and a subsequent congressionally mandated review panel found that the need for a large strategic reserve had declined. The Quadrennial Defense Review identified other missions for the National Guard divisions, such as supporting the mobilization of early deploying units and performing crisis response for floods, hurricanes, or civil disturbances. Later, DOD assigned the Army National Guard a role in responding to attacks using weapons of mass destruction. However, without appropriate criteria for determining the size of the forces needed to carry out these additional missions, the Army has no assurance that its requirement for these missions is valid or that the forces assigned could not be better used elsewhere. 
In Total Army Analysis 2007, the Army made its first attempt to include its institutional force requirement as part of the Army’s overall requirement. However, the Army’s process for determining these requirements is still evolving and, as a result, does not yet provide a sound basis for these requirements. Because the Army used questionable data to develop some requirements, we believe that the overall requirement for the institutional force is, at a minimum, substantially overstated. In general, the institutional force performs a broad range of functions for the Army, enabling combat and support units to deploy to and fight the theater wars. These forces support Army activities such as training, doctrine development, base operations, supply, and maintenance. In Total Army Analysis 2007, the institutional force requirements are in two separate categories: (1) the Base Engagement Force for overseas requirements and (2) the Base Generating Force for U.S.-based requirements. Both of these forces include military, civilian, and contractor personnel. Base Generating Force requirements were overstated because of questionable data provided by the major commands, which are responsible for determining their own requirements. To aggregate these requirements, the Army convened a series of panels composed of representatives of each command to provide their respective requirements. This process yielded a total requirement of about 800,000 institutional positions, which was entered into the Total Army Analysis 2007 process. Army officials told us that the panels reviewed the requirements and brought about some limited changes to the requirements. However, the panels generally accepted the requirements as submitted by the major commands, relying on the methodologies and processes used by each of the major commands to ensure their validity. Historically, the Army has had difficulty arriving at valid institutional requirements. 
In DOD’s fiscal year 1997 Annual Statement of Assurance to the Congress, provided pursuant to the Federal Managers’ Financial Integrity Act of 1982 (Pub. L. 97-255, Sept. 8, 1982), the Army reported a material weakness in its ability to properly identify institutional force requirements. The report said that the current system lacks the ability to link workload to manpower requirements and is not capable of determining institutional requirements based on workload. To address the material weakness, the Army’s Manpower Analysis Agency in April 1998 initiated a program to certify the methodologies that major Army commands use to determine their manpower requirements. To date, the agency has endorsed the manpower assessment methodologies used by each command, and it is currently assessing the accuracy of the commands’ institutional manpower requirements by conducting on-site reviews. It does this by applying an Army-approved requirement determination process to activities within the commands. The agency is reviewing 100 percent of the institutional requirements at each major command headquarters and a random sample of the commands’ subordinate field activities. Where problems are found at major command headquarters, the agency’s findings are binding and requirements must be adjusted. Recommended changes to the requirements of each command’s field activities are advisory. We used the results of the Manpower Analysis Agency’s reviews to obtain an indication of the accuracy of major commands’ requirements. These results indicate that some of the institutional requirements used in Total Army Analysis 2007 were overstated. As of January 2001, the agency had assessed three major command headquarters and two of the commands’ field activities. These results show that one activity understated its requirements by about 9.5 percent, while the other activities overstated their requirements by percentages ranging from about 5 to 22 percent. 
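The two-tier review design described above — a 100-percent review at each major command headquarters plus a random sample of subordinate field activities — supports a straightforward combination of census and sample-based estimates. The sketch below illustrates that arithmetic with hypothetical figures chosen only to reproduce a roughly 16,000-position result; they are not the agency's actual data:

```python
# Hedged sketch of combining a census-style headquarters review with a
# sample-based projection for field activities. All input figures are
# hypothetical illustrations, not actual Manpower Analysis Agency data.

def project_overstatement(hq_overstated, sampled_overstated,
                          sampled_requirements, field_population):
    """Total estimated overstatement: the census result for headquarters
    plus the sampled overstatement rate applied to the field population."""
    sample_rate = sampled_overstated / sampled_requirements
    return hq_overstated + sample_rate * field_population

# Hypothetical inputs: 4,000 overstated positions found at headquarters;
# sampled field activities overstated 1,500 of 10,000 reviewed positions;
# the field-activity population totals 80,000 positions.
estimate = project_overstatement(4_000, 1_500, 10_000, 80_000)
print(f"Estimated overstatement: {estimate:,.0f} positions")  # 16,000
```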
Table 2 shows the activities reviewed and the results of the Manpower Analysis Agency’s assessments. We projected the results from the sample of field activities in table 2 to the modified population of field activities in the two commands reviewed by the Manpower Analysis Agency. We then combined these projections with the agency’s findings related to its 100-percent review of headquarters requirements. In this way, we determined that the three commands reviewed had overstated their overall institutional force requirements by about 16,000 personnel positions, or about 20 percent. The Manpower Analysis Agency’s on-site analyses varied from the commands’ own requirements determination for various reasons. For one activity, the agency reported that manpower standards had not been updated in a timely manner, the activity had not applied the standards in several years, workloads had increased or decreased since the last standards application, and work center missions had changed since the standards had been developed. In another instance, the agency noted that manpower standards had not been updated in 10 years. Another reason why the Manpower Analysis Agency’s results varied from the commands’ results is that the agency assessed whether realignments or more efficient work procedures would save positions. For example, in one study report, the agency recommended a realignment of two activities on the grounds that like-type functions should not be separated if the result is additional overhead positions. Given these known overstated requirements and the Army’s acknowledged weakness in determining these requirements, we assessed the potential effect of such inaccuracies on the reported 142,000-position shortfall in institutional forces. Recognizing that the results of the Manpower Analysis Agency’s reviews could not be statistically projected to the remaining commands not yet reviewed, we used three hypothetical levels of overstated requirements to estimate the effect.
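The three hypothetical levels can be worked through directly. In the sketch below, the 142,000-position shortfall and the 16,000 positions already found in error are reported figures, while the size of the not-yet-reviewed requirements base (about 712,000 positions) is an approximation we infer for illustration, not a figure stated in the report:

```python
# Working the three overstatement scenarios against the reported
# 142,000-position institutional shortfall. The remaining requirements
# base is an inferred approximation, not a figure stated in the report.

shortfall = 142_000          # reported institutional-force shortfall
found_overstated = 16_000    # overstatement already identified by reviews
remaining_base = 712_000     # approximate not-yet-reviewed requirements

def adjusted_shortfall(overstatement_rate):
    """Shortfall left after removing projected overstated requirements."""
    projected = remaining_base * overstatement_rate
    return shortfall - found_overstated - projected

print(adjusted_shortfall(0.20))  # about -16,000 (a 16,000-position excess)
print(adjusted_shortfall(0.10))  # about 55,000 positions still short
breakeven = (shortfall - found_overstated) / remaining_base
print(f"Break-even overstatement: {breakeven:.1%}")  # about 17.7%
```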
As shown by the first column of table 3, if the 20 percent overstatement that the Manpower Analysis Agency found in five activities were applied to the remaining Base Generating Force, the remaining commands may have overstated their requirements by about 143,000 personnel. Together with the 16,000 positions already found to be in error, these latter adjustments would be more than enough to totally eliminate the shortfall and actually result in a 16,000-position excess. The second column shows this same comparison if one assumes that the institutional force requirements were overstated by only 10 percent (one-half the percentage of overstatement found to date). It results in a shortfall of only 55,000 positions. Finally, the third column shows a breakeven point. That is, we calculated that if the remaining commands’ estimates turned out to be overstated by 17.7 percent, the shortfall would be completely eliminated. In general, the requirements data resulting from the Manpower Analysis Agency’s assessments were not available in time to be included in Total Army Analysis 2007. Army force planners agreed that there were inaccuracies in the institutional requirements used in Total Army Analysis 2007, but the data were used because they were the best available. Army planners told us that the requirements may be reduced in future analyses as the Manpower Analysis Agency completes additional reviews of the major commands’ requirements determination processes. Although these officials expected these reviews to result in better data from the major commands in time for use in Total Army Analysis 2009, the Army has no firm plans for adjusting requirements on the basis of these results. Furthermore, the Manpower Analysis Agency has made limited progress in reviewing the major commands. The Army’s original plan said it would complete all actions necessary to ensure valid institutional requirements by March 2000.
Army officials determined that this goal was ambitious, and in the 1999 Annual Assurance Statement, the Army revised the completion date for all manpower studies to March 2002. However, as of January 2001, the Manpower Analysis Agency had completed reviews of only two major commands, and Army officials told us that because of staffing limitations and the volume of workload, they do not expect to complete their work by the scheduled date. In our 1998 report on the Army’s institutional forces, we noted that a lack of staff could delay the completion of the Manpower Analysis Agency’s quality assurance reviews. The total requirements (1.717 million positions) and total resources (1.530 million) reported in Total Army Analysis 2007 do not accurately reflect the actual number of personnel needed by the Army. For example, a military technician employed by a National Guard unit fills a requirement for a civilian employee in that unit. However, the technician is also required to be a member of the Guard unit, and thus also fills a military requirement in that unit. Thus, when requirements are totaled, they include both requirements, even though only one person fills both positions. As a result of this methodology, Total Army Analysis 2007 showed that the Army needed about 30,000 more personnel (the approximate number of military technicians employed by the reserve components) than the actual number of people required for the Base Generating Force. A similar situation exists in the Strategic Reserve/Domestic Support/Homeland Defense category, where about 47,000 National Guard personnel are “dual tasked” to meet requirements in that category as well as in one of the other categories. These special situations were not fully discussed in the Army’s presentation of requirements and resources, potentially leading to misunderstandings as to the number of personnel the Army needs to fully meet its requirements.
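The double counting described above — one military technician satisfying both a civilian requirement and a military requirement — can be illustrated with a minimal sketch. The 30,000 figure is the report's approximate technician count; the structure of the calculation is illustrative:

```python
# A military technician fills a civilian requirement in a Guard unit and,
# because the same person must also be a member of that unit, a military
# requirement as well. Summing requirement lines therefore counts each
# technician twice. The 30,000 figure is the report's approximation.

technicians = 30_000
civilian_requirements = technicians  # technician positions counted as civilian
military_requirements = technicians  # the same people counted again as military

summed_requirements = civilian_requirements + military_requirements
people_needed = summed_requirements - technicians  # each fills two positions

print(f"Requirements as summed: {summed_requirements:,}")  # 60,000
print(f"People actually needed: {people_needed:,}")        # 30,000
```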
However, this methodology does not affect the reported 142,000-position shortfall, because the Army also allocated these resources twice when matching available forces against requirements. In reviewing the Army’s analysis, we identified several actions that the Army could take to lessen the risk seemingly posed by the 45,000-position gap between requirements and resources in the war-fighting category. While this is the lowest shortfall the Army has identified in the last three cycles of Total Army Analysis, we believe there is even greater potential for reducing this gap or mitigating the risks it entails. These actions include (1) accelerating the Army’s plan to convert some Army National Guard combat forces to support forces; (2) converting about 12,000 military positions to civilian positions, as the Army has already identified; and (3) examining more fully how host nations could meet some of the unmet support requirements. Each of these actions would pose certain implementation and budgetary challenges, and the Army’s leadership would need to carefully weigh whether the risk reduction it achieves by reducing these shortfalls further is worth the extra resources required. Since the Army takes war-fighting risk into account when deciding what requirements should be filled, the Army may determine that it has already met its most critical needs and that driving down the remaining 45,000-position shortfall to even lower levels, via these options, is not the best investment the Army can make with its available resources. One action that the Army could take to fill some of the requirements represented by the war-fighting shortfall would be to accelerate its plan to convert some National Guard combat forces to support forces. The Army is in the process of implementing Phases I and II of the plan, which, together, will convert six National Guard combat brigades to support forces to help meet a chronic shortfall in certain types of forces.
These conversions are expected to be completed by fiscal 2007 and were included in Total Army Analysis 2007. As a result, the war-fighting shortfall was reduced by about 20,000 positions. Under current plans, the Army would not complete Phases III and IV of this program—representing a conversion of about 28,000 additional combat positions—until 2009. In order to accelerate its conversion schedule, the Army would need to budget additional funds as well as overcome some implementation challenges. Currently, the Army has not identified the units it intends to convert under Phases III and IV. Army officials said that three points must be addressed before additional units can be converted. First, the Army’s initiative to transform itself into a lighter, more mobile force makes it likely that the specific types of support units needed will change significantly in the near future, making it difficult to identify the types of conversions needed. Second, the National Guard is concerned that converting combat units to support units may decrease the rank structure (the number of senior vs. junior positions available in the units) and limit the potential for progression of its officers and enlisted personnel. The concern is that this might make it harder for the National Guard to attract and retain personnel. Third, the Army’s ability to convert combat units to support units hinges, to a large extent, on the willingness of state National Guard officials to accept conversion to the specific types of units the Army needs. With respect to funding, the Army would need to budget additional funds to carry out the variety of tasks related to these conversions, such as procurement of equipment and construction of facilities. While the Army has not estimated the total costs of all conversions, it budgeted about $2.4 billion to pay for conversions under Phases I and II. The costs for Phases III and IV would likely be of a similar magnitude. 
The Army may be able to reallocate some military end-strength to fill positions in the war-fighting element if it follows through in converting identified military positions in commercial-type activities to civilian or contractor positions. Defense guidance states that the services should reduce forces not required to support missions envisioned by the National Military Strategy and minimize the number of military personnel assigned to support organizations. The guidance further states that positions that do not meet military essential requirements will be eliminated or converted to civilian positions. In fiscal year 1998, the Deputy Secretary of Defense issued Department of Defense Reform Initiative Directive No. 20, which, among other things, directs the services to identify military positions that are candidates for conversion to civilian/contract employee jobs. During the Total Army Analysis 2007 process, the Army identified 11,757 active duty military positions at 15 major Army commands that were conversion candidates. Army officials told us that they had already converted about 582 of these positions, freeing this military end-strength to meet other Army needs. Officials said that more analysis might be needed before proceeding with more conversions, since varying degrees of risk are associated with the conversion candidates. Moreover, officials estimated that about $1.04 billion in additional funding would be necessary to hire the civilians and contractors needed to replace the military positions. Assessing the risks associated with the conversions is important; however, this much additional funding may not be required if further Manpower Analysis Agency reviews yield more overstated requirements in the Army’s institutional force, thereby allowing personnel to be reallocated. A final factor that could mitigate the Army’s reported shortfall is the potential for host nations to provide some unmet support requirements. 
While some positions could be filled only by U.S. personnel, Army and theater command officials agree that, in the event of war, host nations can provide some types of war-fighting support. Also, DOD guidance and Army regulations state that the Army should consider the availability of this support to reduce unmet requirements. However, only a small portion of the host nation support estimated to be available was included in Total Army Analysis 2007. Specifically, the Army concluded that anticipated host nation support would offset the need for about 1,300 positions in its war-fighting requirement and factored this into its analysis. This is a small proportion of the 30,000 positions that Army officials have estimated that host nations might be able to provide in the two most likely areas for war. The issue of how host nation support should be treated with respect to requirements is one of continuing debate within DOD. Regional commanders generally consider such support as potentially available to augment U.S. forces but do not believe it prudent to rely on host nation support as a substitute for Army units in case the support does not materialize. Army officials said that they would not consider host nation support as filling requirements without the concurrence of the regional commanders. Currently, at least one regional commander is attempting to produce validated lists of host nation support commodities and services available from host nations. The Army would need to fully weigh the risk that anticipated host nation support may not materialize in deciding whether to offset more positions. However, better information on these potential resources from all regional commanders would be useful in assessing risk during Total Army Analysis. The Army has made progress in developing a sound basis for its force structure requirements. 
It has improved the rigor of its analysis through more realistic scenarios and the integration of Army plans and initiatives, and made the analysis more comprehensive by expanding it to include requirements for the entire Army. However, the weaknesses we identified suggest that the Army still does not have a sound basis for its institutional force requirements or the forces needed for the Strategic Reserve, Domestic Support, and Homeland Defense. Our analysis of the institutional force requirements casts doubt on their accuracy, and, by extension, the accuracy of the shortfall that the Army identified in this element of the force. If the Army develops more accurate estimates of institutional forces, this shortfall might be entirely eliminated. The fact that the Manpower Analysis Agency has already identified an average overstatement of 20 percent in three commands is significant, as it suggests that inaccuracies remain in the institutional force requirements, which comprise over half of the Army’s total requirements. It is, therefore, important that the agency expeditiously complete its review of major commands and that the Army resolve its material weakness in requirements determination. Because the program to accomplish this lags well behind schedule, additional staff or contractors might be needed to complete these reviews by 2002 as planned. The sooner these reviews are completed, the sooner the Army will know whether it can reduce positions in the institutional forces and apply any savings to cover some of the shortfall in its war-fighting forces. Furthermore, this significant potential to improve the accuracy of requirements data can be realized only if the results of the Manpower Analysis Agency’s reviews are actually used in the Total Army Analysis process to adjust requirements. A stronger Army commitment to use these results in this way is needed if the Army is to overcome the material weakness it has identified in establishing institutional force requirements. 
A sound basis for requirements is also hampered by the lack of criteria for the Strategic Reserve, Domestic Support, and Homeland Defense element of the Army’s force structure. A clearer definition of the missions involved is needed to accurately estimate the forces needed for these missions. The risk of not setting criteria for this force is that the Army may not have enough of these forces or the right types. Conversely, if too many forces have been committed for this purpose, the Army may be unnecessarily diverting forces to this mission that could be better used elsewhere. The Army’s method of portraying the requirements for military technicians and some National Guard positions needs to more accurately reflect the actual number of personnel needed. Because one person fills more than one requirement, the actual number of personnel needed is misstated by about 77,000. The actions suggested in this report to mitigate the risks of the identified 45,000-position shortfall in war-fighting forces must be considered within the context of both cost and risk. A clear understanding of the risks that this shortfall represents is necessary to decide what actions should be taken or whether actions should be taken at all. Accelerating the conversion of National Guard combat forces to support forces may pose challenges for the Army and would require added funding. Similarly, converting additional military positions to civilian positions should be done only after a careful consideration of the risks. This action may or may not require added funding, depending on the Army’s success in achieving more accurate estimates for its institutional force requirements. Fully identifying and acknowledging all available resources, including potential support from host nations, would provide a more accurate portrayal of the risks associated with the shortfall and allow Army planners to be better equipped to decide the types of units to build or maintain. 
To improve the accuracy of the Army’s force structure requirements, we recommend that the Secretary of Defense direct the Secretary of the Army to incorporate the following changes into future versions of the Total Army Analysis process: Use the results of completed Manpower Analysis Agency reviews to adjust requirements for the Base Generating Force and Base Engagement Force. Furthermore, explore alternative means of expediting the completion of these studies at the remaining Army commands, whether by expanding the existing Manpower Analysis Agency team or through the use of contractor personnel. Establish mission criteria to provide a firmer basis for Strategic Reserve, Domestic Support, and Homeland Defense requirements. Establish a methodology for more accurately portraying requirements for military technicians and other National Guard positions where one person is filling more than one requirement, thereby precluding a potential misunderstanding of the personnel needed. We also recommend that the Secretary of Defense direct the Secretary of the Army to examine the options we outlined to address the 45,000-position shortfall in the Army’s war-fighting force within the context of costs and risks, and decide if mitigating actions should be taken. These actions include the accelerated conversion of National Guard forces to support forces, the conversion of military positions to civilian or contractor positions, and the consideration of how host nations could meet some unmet support needs. In written comments on a draft of this report, the Department of Defense concurred with our recommendations. Recognizing a need for improvement, the Department said it would advise the Army to strengthen the manpower determination process, regularly update manpower standards, review institutional requirements more frequently, and incorporate the re-sized requirements into Total Army Analysis. 
The Department commented, however, that we used the Army’s limited review findings to estimate the total number of requirements overstated, and that extending the results of the Army’s sample across the institutional force might be misleading. To clarify, we did not project the results of the Army’s two samples to the entire institutional force. Rather, we projected these results only to selected work centers within the two commands from which the sample was drawn. Concerning the lack of criteria for estimating its requirements for the Strategic Reserve, Homeland Defense, and Domestic Support missions, the Department of Defense said that it has an ongoing strategic review to establish such requirements and that the results will be incorporated into the Army’s planning process. In order to be of value to Total Army Analysis, we believe Defense’s study will need to provide enough specificity that the Army can project the number and types of units that will be needed to carry out these missions. To improve reporting of requirements, the Department will advise the Army to footnote the results of its planning process to acknowledge the dual-status nature of the manpower requirements associated with military technicians. We believe this footnote should clearly identify those instances where two requirements may be filled by one person. The Department also agreed to assess the options for mitigating the risk of shortfalls in war-fighting forces that were outlined in the report, stating that it will continue to optimize war-fighting capabilities within the limits of policy, end strength and budget. We believe these actions by Defense and the Army, once implemented, will improve the Army’s process for determining and reporting its force structure requirements and the allocation of resources against those requirements. Defense’s comments are reprinted in appendix III. 
To assess the basis for the Army’s projected force requirements and the validity of reported shortfalls, we reviewed pertinent documents related to the Total Army Analysis 2007 process, including the total requirements it identified, the forces available to meet those requirements, and the shortfall in forces reported by the Army. We also obtained data on the key assumptions and factors used in the analysis, and identified improvements in the process. We visited the Center for Army Analysis at Fort Belvoir, Virginia, to document the incorporation of these factors into the analysis. We also visited the Combined Arms Support Command at Fort Lee, Virginia, to discuss its input to the Army’s analysis. To assess the validity of the shortfall in institutional forces and explore alternatives for reducing it, we visited the Office of the Assistant Secretary of the Army, Manpower and Reserve Affairs to discuss efforts to resolve the material weakness previously reported in this area. We also visited the Army’s Manpower Analysis Agency at Fort Belvoir, Virginia, and obtained the results of manpower assessments they had completed. We analyzed the agency’s data and used it to assess the validity of the Army’s institutional force requirements. To identify factors that could mitigate the risk posed by shortfalls in war- fighting forces, we met with Army National Guard officials responsible for implementing the Army National Guard Division Redesign Study recommendations, and with the Army force planning officials who tracked decisions reached during the Total Army Analysis process. We conducted our review from March 2000 through February 2001 in accordance with generally accepted government auditing standards. For further information on our scope and methodology, see appendix I. We are sending copies of this report to the Honorable Donald H. Rumsfeld, Secretary of Defense; the Honorable Joseph W. 
Westphal, Acting Secretary of the Army; and the Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget. We will also make copies available to others upon request. Please contact me at (202) 512-5140 if you or your staff have any questions concerning this report. GAO contact and staff acknowledgements are listed in appendix IV. In fiscal year 1997, the Secretary of the Army declared that the Army’s manpower requirements determination for its institutional force was a material weakness under the Federal Managers’ Financial Integrity Act. As a result of the declared weakness, the Army is using its Manpower Analysis Agency to certify the requirements-determination process in all Army major commands. As part of the certification, the Manpower Analysis Agency is (1) examining all requirements at the headquarters and (2) examining all requirements in a randomly sampled 2 percent of the work centers in most major functional areas below the headquarters level. For each command the agency has reviewed, the Army provided the requirements originally stated by the major command and the subsequent requirements that the Manpower Analysis Agency recommended while certifying the major commands’ requirements-determination process. Such information is available for only the headquarters-and-below level of the Training and Doctrine Command and Forces Command, and the headquarters for the National Guard Bureau. The agency sampled all 1,460 requirements for the Training and Doctrine Command headquarters. Data gathered as part of the certification process showed that the agency recommended 1,598 requirements. That is, the agency recommended increasing the command’s requirements by 138, or 9.5 percent, from the level originally reported by that major command. Because all headquarters requirements were sampled, no sampling error is associated with the agency’s recommended 1,598 requirements. 
Table 4 shows the population and sample for the work centers below the Training and Doctrine Command headquarters level. Although the command reported 19 major areas with 6,474 work centers and 80,162 requirements, 7 major areas were not included in the certification process (indicated by the shaded areas in table 4). The largest number of work centers and requirements eliminated from the certification process were in base operations, an area that will be reviewed later because of concerns about some of the jobs possibly being privatized. After the 7 major areas were eliminated, there were 3,337 work centers and 49,123 requirements in the modified population. Training and Doctrine Command records show there were 1,551 requirements in the 90 sampled work centers. After completing its certification process, the Manpower Analysis Agency recommended staffing the 90 work centers with 1,207 requirements—a decrease of 22.2 percent. When the sample-based recommendations were weighted and projected to the modified population, we found that the Training and Doctrine Command needs 37,923 requirements (with a precision of ±3,562 requirements) for the subgroup of work centers in the modified population. No projection can be made to the 3,137 work centers and 31,039 requirements that were excluded from the Manpower Analysis Agency’s certification study. The agency sampled all 1,574 requirements for the Forces Command headquarters. The data gathered as part of the certification process showed that the agency recommended 1,499 requirements—a reduction of 75, or 4.8 percent, from the requirements originally reported by that major command. Because all headquarters requirements were sampled, no sampling error is associated with the agency’s recommended 1,499 requirements. Table 5 shows the population and sample for the work centers below the Forces Command headquarters level. 
The 2-percent sampling was performed somewhat differently for Forces Command than for the Training and Doctrine Command. All major functional areas except Training Support Brigade were included in the sample, but work centers subject to possible privatization were excluded from almost every functional area. As shown in table 5 (next to the last line), 2,107 of the 4,711 Forces Command work centers and 19,026 of its 42,222 requirements were excluded from the Manpower Analysis certification study. Forces Command records show there were 806 requirements in the 72 sampled work centers. After completing its certification process, the Manpower Analysis Agency recommended staffing the 72 work centers with 647 requirements—a decrease of 19.7 percent. When the sample-based recommendations were weighted and projected to the modified population, we found that the Forces Command needs 19,801 requirements (with a precision of ±1,538 requirements) for the subgroup of work centers in the modified population. No projection can be made to the 2,107 work centers and 19,026 requirements that were excluded from the Manpower Analysis Agency certification study. The agency sampled all 1,340 requirements for the National Guard Bureau headquarters. The data gathered as part of the certification process showed that the agency recommended 1,049 requirements—a reduction of 291, or 21.7 percent, from the requirements originally reported by that major command. Because all headquarters requirements were sampled, no sampling error is associated with the agency’s recommended 1,049 requirements. In progressing from its Total Army Analysis (TAA) 2003 through its TAA 2007 analyses, our reviews show that the Army has improved its process for determining its force structure requirements and for alleviating force shortfalls. 
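The weighted projections described above can be approximated with a simple ratio estimate: scale the modified population's stated requirements by the sample's recommended-to-stated ratio. This is only a sketch of the general technique, not the Manpower Analysis Agency's stratified weighting, so it approximates rather than reproduces the reported figures (for the Training and Doctrine Command, it yields about 38,228 versus the reported 37,923 ± 3,562):

```python
def ratio_projection(sample_stated, sample_recommended, population_stated):
    """Simple ratio estimate: project sample-based recommendations to
    the population by scaling the population's stated requirements by
    the sample's recommended/stated ratio."""
    return population_stated * (sample_recommended / sample_stated)

# Training and Doctrine Command below-headquarters figures from the report:
# the 90 sampled work centers had 1,551 stated requirements, the agency
# recommended 1,207, and the modified population had 49,123 requirements.
estimate = ratio_projection(1551, 1207, 49123)
print(round(estimate))  # ~38228, within the reported confidence interval
```

The agency's actual estimate weights each sampled work center by its stratum, which is why the simple ratio differs slightly from the published 37,923.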
Notwithstanding the problem areas identified in our report, the Army has taken a number of steps to more accurately reflect the Army forces needed to carry out the National Military Strategy of fighting and winning two major-theater wars. The Army has also found ways to make better use of existing resources to minimize war-fighting risks. Table 6 summarizes some of the actions the Army has taken. In addition to the name above, James Mahaffey, Leo Jessup, Ron Leporati, Tim Stone, Jack Edwards, and Susan Woodward made key contributions to this report. Force Structure: Army Support Forces Can Meet Two-Conflict Strategy With Some Risks (GAO/NSIAD-97-66, Feb. 28, 1997). Force Structure: Army’s Efforts to Improve Efficiency of Institutional Forces Have Produced Few Results (GAO/NSIAD-98-65, Feb. 26, 1998). Force Structure: Opportunities for the Army to Reduce Risk in Executing the Military Strategy (GAO/NSIAD-99-47, Mar. 15, 1999). Force Structure: Army Is Integrating Active and Reserve Combat Forces, but Challenges Remain (GAO/NSIAD-00-162, July 18, 2000). Force Structure: Army Lacks Units Needed for Extended Contingency Operations (GAO-01-198, Feb. 15, 2001).
The school lunch and breakfast programs are overseen and administered by USDA through FNS, state agencies, and local SFAs. FNS sets nationwide eligibility and program administration criteria and provides reimbursements to states for each meal served that meets federal menu planning and nutrition requirements and is served to an eligible student. FNS also provides states with commodities based on the number of reimbursable lunches served. States have written agreements with SFAs to administer the meal programs, provide federal reimbursements to SFAs, and oversee SFA compliance with program requirements. SFAs—nonprofit entities responsible for local administration of the school meal programs—plan, prepare, and serve meals to students in schools. SFAs determine the price they charge for school meals, but some children are eligible to receive free or reduced price meals. Specifically, children are eligible for free meals if their families have incomes at or below 130 percent of the federal poverty guidelines and reduced price meals if their families have incomes between 130 and 185 percent of the federal poverty guidelines. SFAs can charge a maximum of $0.40 for a reduced price lunch and $0.30 for a reduced price breakfast. Children who are not eligible for free or reduced price meals pay the entire price charged by the SFA for the meal. SFAs receive federal reimbursements for all meals served to eligible students that meet menu planning and nutritional requirements, regardless of whether children pay for the meals or receive them for free. To receive federal reimbursements for meals, SFAs coordinate with schools to process an individual household application for most children applying for the free and reduced price programs and verify eligibility for at least a sample of households that apply. SFAs also must keep daily track of meals provided. 
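The income thresholds above translate directly into an eligibility rule; a minimal sketch (treating the 130-percent boundary as free, per the text's "at or below 130 percent," and income up to 185 percent as reduced price):

```python
def meal_eligibility(income_pct_of_poverty):
    """Categorize a household by income as a percentage of the federal
    poverty guidelines: at or below 130% is free; above 130% up to
    185% is reduced price; above that, the child pays full price."""
    if income_pct_of_poverty <= 130:
        return "free"
    elif income_pct_of_poverty <= 185:
        return "reduced"
    return "paid"

print(meal_eligibility(120))  # free
print(meal_eligibility(150))  # reduced
print(meal_eligibility(200))  # paid
```

In practice this determination is made from household applications, with at least a sample verified, as the text describes.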
The amount of federal reimbursement that SFAs receive for each meal provided to a child is based on the eligibility category of the child and the meal program. (See table 1.) To be eligible for federal reimbursement, meals served by SFAs must adhere to the Dietary Guidelines for Americans, which include limits on total fat and saturated fat and call for diets moderate in sodium. The meals must also meet standards for the recommended daily allowances of calories, as well as nutrients such as protein, calcium, iron, and vitamins A and C. There are five federally approved food- or nutrient-based menu planning approaches for school meals. For example, under the traditional food-based menu planning approach, SFAs must offer five food items from four food components—meat/meat alternate, vegetables or fruits, grains/breads, and milk—for a lunch to qualify as reimbursable. SFAs choose the specific foods served and how they are prepared and presented. Under the nutrient standard menu planning approach, SFAs use a computer-based menu planning system that uses approved software to automatically analyze the specific nutrient content of planned menu items. USDA policies and regulations establish an oversight and monitoring framework for school meal programs to help ensure accurate meal counting and claiming. (See fig. 1.) Specifically, regulations require data on meals served that qualify for federal reimbursement to be recorded at the point of service in schools and reported from SFAs to states, and states to FNS. Both SFAs and states are required to regularly check meal counts to assess their reliability and reconcile any incorrect counts before submitting meal claim data for federal reimbursement. Federal regulations also require FNS, state agencies, and SFAs to conduct reviews of the school meal programs. FNS regions must conduct management evaluations of each state’s administration of the school meal programs and share evaluation findings with the state. 
Through the coordinated review effort, states are required to conduct reviews of each SFA’s administration of the lunch program at least once during each 5-year review cycle and share review findings with the SFA and FNS. At the local level, SFAs are required to conduct annual on-site reviews of the meal counting and claiming procedures in each school participating in the lunch program. The school meal programs’ oversight and monitoring requirements are part of their internal controls, which are an integral component of management. Internal control is not one event, but a series of actions and activities that occur on an ongoing basis. Effective internal controls include creating an organizational culture that promotes accountability and the reduction of error, analyzing program operations to identify areas that present the risk of error, making policy and program changes to address the identified risks, and monitoring the results and communicating the lessons learned to support further improvement. To comply with the Improper Payments Information Act of 2002, in November 2007 USDA released the “Access, Participation, Eligibility, and Certification Study” (APEC), which provided the first national measure of improper payments in the school meal programs. APEC estimated that approximately $860 million in improper payments occurred in the school lunch and breakfast programs due to meal counting and claiming errors during school year 2005-2006. Meal counting, or cashier, errors occur when a student’s specific meal selection does not meet the menu planning and nutritional requirements of a reimbursable meal, the SFA’s planned meal components do not meet the menu planning and nutritional requirements of a reimbursable meal, or a cashier incorrectly records the student’s categorical eligibility (i.e., free, reduced price, or paid). 
Meal claiming, or aggregation, errors generally occur because data on meals served are compiled and totaled by several different entities before they are submitted to FNS as a meal claim for reimbursement. Specifically, aggregation errors occur when the daily meal count totals from the school cafeteria cashiers or points of sale are not summed correctly, school meal count totals are incorrectly reported to or recorded by the SFA, or meal count totals are incorrectly reported from the SFA to the state. APEC found that a substantial source of meal counting and claiming errors were cashier errors, particularly for the breakfast program. Concerning aggregation, APEC found that school to SFA meal count reports were the most likely to be erroneous. (See table 2.) However, when this type of aggregation error occurred, APEC found it was typically the case that SFA-reported meal counts were larger than those reported by the school, which resulted in overpayments to the SFA. APEC found that both cashier errors and aggregation errors between the school and SFA were concentrated in a small number of schools that had high error rates. In addition to estimating meal counting and claiming errors, APEC also estimated that approximately $940 million in improper payments occurred in the school meal programs due to certification errors during school year 2005-2006. Certification error occurs when students are certified to receive a level of free or reduced price meal benefits for which they are not eligible or are erroneously denied benefits for which they are eligible. Prior to APEC, the USDA Inspector General’s office conducted reviews of the school meal programs in various school districts nationwide from 2002 to 2007. These reviews often found meal counting and claiming errors in the districts, which resulted in overpayments of federal funds. 
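One way a state or SFA could catch the school-to-SFA aggregation errors described above is an automated edit check that recomputes each school's claimed total from its daily point-of-service counts before the claim is submitted; a minimal sketch with hypothetical school names and counts:

```python
# Hedged sketch of an aggregation edit check: flag any school whose
# claimed total does not equal the sum of its daily meal counts.
# School names and counts are hypothetical.

def find_aggregation_errors(daily_counts, claimed_totals):
    """Return {school: (claimed, recomputed)} for each mismatch."""
    errors = {}
    for school, days in daily_counts.items():
        recomputed = sum(days)
        claimed = claimed_totals.get(school, 0)
        if claimed != recomputed:
            errors[school] = (claimed, recomputed)
    return errors

daily = {
    "Lincoln Elementary": [210, 198, 205],
    "Adams Middle":       [330, 341, 327],
}
claimed = {"Lincoln Elementary": 613, "Adams Middle": 1010}  # Adams overstated

print(find_aggregation_errors(daily, claimed))
# {'Adams Middle': (1010, 998)}
```

A check like this would flag the pattern APEC observed, in which SFA-reported counts exceeded school-reported counts and produced overpayments.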
The reviews also frequently cited deficiencies in internal controls, such as omitted edit checks on meal claims and missing records of meals served, as causes of erroneous meal claims. (For more information on the reports reviewed, see app. I.) Although states conduct program integrity reviews of the meal programs, oversight of the breakfast program is limited. Through the coordinated review effort, states are required to assess meal counting and claiming procedures used in schools when they review each SFA’s administration of the lunch program during the 5-year review cycle. However, only select schools in each SFA are reviewed. While all states reported through our survey that they conduct these reviews, 21 states reported that they do not include the breakfast program in reviews. Although APEC estimated that the percentage of errors in the breakfast program was more than double the percentage of errors in the lunch program, states are not required to review this program (see fig. 3). States, however, are required to review the School Breakfast Program during follow-up reviews. Further, states that include the breakfast program in their reviews do not always systematically review that program. For example, officials in one state reported that they review the breakfast program whenever the administrative review of the SFA will take more than 1 day. Some state officials also reported concerns about the extent to which required SFA on-site reviews effectively identify meal counting and claiming errors. Specifically, SFAs evaluate whether schools’ counting and claiming procedures comply with program requirements during their annual reviews of schools. However, like state reviews, SFAs are not required to review the breakfast program (see fig. 3). Nine states reported through our survey that SFA reviews were slightly or not at all effective in identifying and reducing meal counting and claiming errors. 
An additional 21 states reported that these reviews were moderately effective at achieving this goal. Although almost all states reported that they provide support to SFAs on completing annual on-site reviews, such as providing a form to document reviews, states also reported some factors that impede the quality of these reviews. For example, 20 states reported through our survey that some SFA reviewers lack the knowledge necessary to properly evaluate the program or consider on-site reviews to be a paperwork exercise instead of a monitoring tool. SFA on-site reviews are designed as self-assessments, and a few states reported through our survey that it is difficult for SFAs to review their own schools in an objective manner. At one large SFA we visited that serves over 100 schools, SFA reviews of schools were conducted, but the reviewers did not identify an issue causing erroneous meal claims that was identified in the state review completed shortly thereafter. The state determined that the resulting erroneous meal claims found during its reviews of the SFA totaled over $150,000. Officials from a small SFA we visited that serves six schools said they believe their on-site reviews are effective at identifying errors, but they also acknowledged that the problems identified through the most recent state administrative review had not been found during their on-site reviews. Specifically, the state review found that this SFA had submitted erroneous meal claims resulting in its receipt of a $6,200 overpayment of federal program funds. The “Standards for Internal Control in the Federal Government” states that key duties or responsibilities should be divided among different people to reduce the risk of error. However, the evidence obtained from some of the SFAs we visited suggests the self-assessment design of on-site reviews may be limiting their effectiveness. When state and SFA reviews identify meal counting and claiming errors, these problems are not always resolved. 
Several of the SFAs we visited had the same errors identified during consecutive state and SFA reviews. During successive reviews in two of the SFAs we visited, cashiers were counting meals that did not meet federal requirements to be reimbursable. For example, one SFA director found that three of the five cashiers in a school he was observing could not accurately identify the meal components that made up a reimbursable meal on the day of his on-site review, which was an error identified in the previous state administrative review. In two other SFAs, successive state reviews found claiming errors, which impacted the accuracy of meal claims. In all of these SFAs, after errors were found during the first review, corrective actions were prescribed that should have modified procedures to reduce errors. One SFA official told us that the repeat meal counting and claiming errors found in multiple state administrative reviews did not surprise him, as he had found similar errors during his annual on-site reviews, and the corrective actions his SFA took had been ineffective. States and SFAs identified several factors that hinder efforts to address meal counting and claiming errors. Staff turnover: Nineteen states reported through our survey that staff turnover affects whether corrective actions permanently resolve errors. In addition, some state and SFA officials we interviewed told us that the frequency of SFA staff turnover results in a continued need to retrain staff on accurate procedures. Competing demands: Over 40 percent of states reported through our survey that competing demands for cafeteria staff greatly or moderately hinder efforts to address meal counting and claiming errors. During our site visits, several state officials told us that cafeteria staff sometimes fulfill additional roles in schools, such as bus drivers or school secretaries, which can affect their ability to focus on fulfilling meal program requirements and modifying procedures to address errors. 
Inadequate training: Some state and SFA officials said that inadequate training of SFA staff affects whether corrective actions resolve errors. While nearly all the SFAs we interviewed conduct training, some officials acknowledged that certain aspects of the school meal programs are sufficiently complicated that more training may be needed. Specifically, state administrative reviews of almost half the SFAs we visited found cafeteria staff incorrectly identifying reimbursable meals, and some state officials we interviewed told us that different types of menu planning approaches can make this difficult for cafeteria staff. In addition, officials from four SFAs we visited told us that adding options to menus can make it more difficult for staff to identify a reimbursable meal. While the five menu planning approaches offer SFAs flexibility, and providing several menu options may appeal to students, both factors can complicate cashiers' efforts to accurately count reimbursable meals. Point of sale systems: Some state and SFA officials we interviewed told us that the lack of an automated point of sale system in schools, through which cafeteria staff count meals served to children each day, hinders SFA efforts to address errors. Specifically, most state officials we interviewed indicated that having an automated point of sale system, or computer, for cashiers to identify children receiving meals, their eligibility for free or reduced price meals, and the components on each child's tray reduces the likelihood of errors. However, some SFAs said that resource constraints had prevented them from purchasing these automated systems. While these systems may help reduce counting and claiming errors, half of the state officials we interviewed indicated that point of sale systems can contribute to errors when staff are not properly trained on how to use the system or the system software is not properly set up or tested.
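The point-of-sale logic that state officials described (identify the child, look up eligibility, and verify the tray qualifies as reimbursable) can be illustrated with a minimal sketch. The roster, the component list, and the "at least three components" rule below are hypothetical simplifications for illustration, not the federal menu planning requirements.

```python
# Minimal sketch of an automated point-of-sale check as described by state
# officials. The student roster, eligibility categories, and the rule that a
# reimbursable tray needs at least 3 of 5 components are illustrative only.

ELIGIBILITY = {  # hypothetical roster: student id -> claiming category
    1001: "free",
    1002: "reduced",
    1003: "paid",
}

LUNCH_COMPONENTS = {"meat/meat alternate", "grain", "fruit", "vegetable", "milk"}

def claim_category(student_id, tray):
    """Return the category to claim this meal under, or None if not claimable."""
    selected = LUNCH_COMPONENTS & set(tray)
    if len(selected) < 3:
        return None  # incomplete tray: not counted as a reimbursable meal
    return ELIGIBILITY.get(student_id)  # None if the student is not on the roster

# A complete tray for an enrolled student is claimed in the correct category.
assert claim_category(1001, ["grain", "fruit", "milk"]) == "free"
# An incomplete tray is flagged instead of counted, the cashier error
# that state reviews repeatedly found when counts were done by hand.
assert claim_category(1001, ["milk"]) is None
```

As the officials noted, a check like this only helps when the eligibility roster and component rules are set up correctly and staff are trained to use the system.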
Specific school policies: According to some SFA officials we interviewed, certain types of school policies can complicate cashier efforts to address meal counting problems. For example, some schools have policies that children will be served a meal regardless of their ability to pay. While such children receive a meal for free, they are not necessarily eligible for a free reimbursable meal based on family income. However, cashiers sometimes do not understand this distinction and count these as free reimbursable meals. Similarly, school policies that shorten school meal periods sometimes also contribute to cashier errors. A few state and SFA officials reported that while shorter meal periods increase academic instruction time, they also require cafeteria workers to provide meals to children more quickly, which can result in meal counting errors. In one review of an SFA we visited, the state reported that the rapid flow of students through the lunch lines was affecting the ability of cafeteria staff to assess whether all meals were complete. Ineffective school support: A lack of effective support from school staff was also reported by some SFAs as hindering efforts to permanently address meal counting and claiming errors. For example, one SFA official reported that a school official had changed the school’s counting and claiming system without consulting the SFA, which caused related errors. Officials from another SFA reported that they now employ most of the cafeteria staff in their schools because of the difficulty in getting changes made to meal service when school administrators employ these staff. During our site visits, we observed that the involvement of school staff, such as teachers, in meal service may affect errors. For example, in a few schools, meals served were counted by teachers in their classrooms instead of by staff in the cafeteria. In at least one of these schools, the counting procedure used by the teacher produced errors. 
In another school, a teacher provided all of her students’ identification cards to the cashier to indicate the students were eating lunch, but not all of those students were present that day—a procedural error that had been cited on a previous state review of this school. In addition, states’ infrequent use of certain program sanctions may also affect the priority SFAs give to addressing errors. While federal regulations require states to withhold meal program funds from SFAs for certain program violations, such as not completing prescribed corrective actions within agreed-upon time frames, administrative review data suggests that states withhold funds from few of the SFAs reviewed. Further, only four states reported through our survey that they had terminated an SFA from the school meal programs during the past 5 years because of meal counting or claiming errors. States likely consider multiple factors when deciding whether to use the federally allowed sanctions, such as the fiscal effect of withholding program funds on an SFA’s ability to provide meals to children, which may influence the frequency with which these are used. An official in one of the states we visited said that his state prefers to work with SFAs to correct problems rather than terminate their participation in the meal programs. Many state officials do not believe that meal counting and claiming errors are significant, and these views may also affect efforts to address errors. According to the “Standards for Internal Control in the Federal Government,” the attitude and philosophy of management toward monitoring can have a profound effect on internal control. Although the APEC study found that meal counting and claiming errors were a significant source of improper payments in the school meal programs, state officials reported through our survey that they are rare. 
Specifically, 34 states reported that meal claiming errors and 26 states reported that meal counting errors were seldom or never a problem within their SFAs. Further, one state official reported through our survey that this is a problem made up by USDA, as very few of these errors occur. However, state administrative review data suggests that meal counting and claiming errors have occurred in SFAs and schools nationwide. In 2008, USDA released an updated form for states to use when conducting state administrative reviews through the coordinated review effort. According to USDA officials, the form was updated to address recent legislative and regulatory changes. The updates to the form included new questions related to certification and food safety, as well as some minor revisions to existing questions. For example, some of these revisions added descriptive details related to the review of meal counting and claiming procedures. Also in 2008, USDA held related training that focused on the entire review process, including meal counting and claiming procedures, as well as particular areas that states had reported a need for additional training. USDA officials reported that reviewers from almost all states attended the training. Officials also reported that they are developing updated policy guidance for state administrative reviews, which will address both the new form and issues that have arisen since the last guidance was published in 1993. In another effort to strengthen the state administrative review process, USDA issued a memo in March 2008 that directed states to stop conducting practice reviews. Prior to issuing the memo, USDA officials became aware that some states were conducting practice reviews to reduce documented findings and required corrective actions. 
USDA’s memo stated that because practice reviews only temporarily reduce the likelihood of documented review findings, they undermine the integrity of the review process, diminish the importance of adhering to school meal program requirements, and are in direct conflict with federal review requirements. While the memo indicated that states should stop conducting practice reviews immediately, USDA officials said they do not know if all states have stopped this activity. Since fiscal year 2005, USDA has also provided annual grants to, in part, support state efforts to conduct additional reviews of meal counting and claiming and certification procedures in SFAs that have a high level of, or high risk for, administrative error in the school meal programs. In an effort to increase state use of these Administrative Reviews and Training Grants, USDA simplified the application process for the fiscal year 2009 cycle, through which $16 million in grant funds were available. Specifically, the streamlined application requirements allowed states to submit a 1-page form to apply for up to $3,500 per SFA review, as well as submit requests for multiple SFA reviews on one form. In May 2009, USDA awarded approximately $300,000 total in administrative review grants to the eight states that applied for them, a number equal to the greatest number of states that had received these grants in prior years. Other recent USDA efforts may also help identify and address meal counting and claiming errors in the school meal programs. In 2007 and 2008, USDA issued updated guidance on complying with federal menu planning and nutritional requirements for school meals, as part of the School Meals Initiative. Through this initiative, states are required to conduct reviews of SFAs to determine their compliance with these requirements. USDA officials reported that these reviews can be helpful in identifying and addressing meal counting errors, as reviewers observe children’s meals at the point of sale. 
In addition, USDA is currently working with the National Food Service Management Institute to develop additional technical assistance materials for SFAs related to planning and recognizing reimbursable meals. These materials are intended to help food service staff plan meals that make it easier for students to choose a reimbursable meal and cashiers to confirm that a reimbursable meal has been selected. USDA’s oversight efforts have not directly focused on identifying or addressing meal counting and claiming errors. While FNS regional offices conduct a management evaluation of each state’s oversight of the school meal programs, these evaluations do not directly focus on identifying and addressing meal counting and claiming errors. Although USDA’s annual guidance on management evaluations indicates that regions should examine findings from some state administrative reviews of SFAs, it does not specify meal counting and claiming procedures as an area to focus on. Officials we spoke to in six of the seven FNS regional offices stated that management evaluations are generally ineffective at providing information on meal counting and claiming errors, in part because they are structured to focus more generally on state administration of the programs. Officials in some of the regional offices could not provide us with information on the extent of meal counting and claiming errors in the states they oversee. In addition, while regional offices submit management evaluation reports to USDA headquarters when they are completed, headquarters officials said that they do not currently analyze these reports to develop national- or regional-level themes and trends. Finally, USDA has not updated its manual on meal counting and claiming procedures since it was originally published in 1991, though some states reported through our survey that an updated federal meal counting and claiming manual would assist their efforts. 
A USDA official reported that the manual was published during initial implementation of the coordinated review effort. While USDA recently updated the forms and instructions related to that effort, this manual has not been updated, nor was it available on USDA’s Web site at the time of our review. In contrast, USDA’s efforts have focused on addressing school meal program errors related to the certification of children as eligible for free and reduced price meals. Specifically, federal guidance for FNS regional offices’ management evaluations directs the regions to review state efforts to improve the accuracy of information used for certification. In January 2008, USDA also issued an updated manual on certification. In addition, USDA worked with Congress to ensure that the Child Nutrition and WIC Reauthorization Act of 2004 included multiple changes to school meal programs to help address certification problems. For example, the act simplified the certification process by requiring a single application for all eligible children in the household and eligibility determinations to be in effect for the entire school year. One FNS regional official suggested that the approach USDA took to address certification errors nationally may be a model to address meal counting and claiming errors. USDA headquarters’ officials acknowledged that, in the past, the agency considered certification to be the primary source of improper payments in the school meal programs, and a few officials in headquarters and the regions said that they were surprised by the APEC findings on the extent of meal counting and claiming errors. However, headquarters officials also said that the agency has recognized for many years that both erroneous meal counting and claiming and certification procedures cause improper payments. 
Before the APEC study findings on improper payments in the school meal programs were released, USDA’s Inspector General issued multiple reports on administration of these programs in selected school districts that found problems with meal counting and claiming procedures. For example, many of the reports issued from 2002 to 2007 found problems with SFA annual on-site reviews and edit checks performed on meal claims. Many of these reports also found that meal counting and claiming errors resulted in overpayments of federal funds. (For more information on the reports reviewed, see app. I.) USDA also collects annual data on findings from state administrative reviews of SFAs, but it does not use these data to assess meal counting and claiming errors. The “Standards for Internal Control in the Federal Government” states that agencies should monitor performance measures and indicators, which may be accomplished by assessing data, to determine appropriate actions to be taken. However, a USDA official said that the state review data are not used systematically for oversight purposes and are instead used periodically to provide information for agency publications and answer questions related to state reviews. While, in the past, USDA analyzed these data for trends and error-prone areas, officials said they have not done so for several years, in part due to resource constraints. These data include several pieces of information about meal counting and claiming errors in SFAs reviewed by states, such as the number of lunches observed that were erroneously counted as reimbursable because they did not meet federal menu planning and nutrition requirements and the value of over-claims resulting from meal counting and claiming errors. As a result, these data provide general information on the frequency with which meal counting and claiming errors are occurring in states. 
However, because states are not required to identify the SFAs reviewed in each year, and states are only required to review each SFA once during each 5-year review cycle, these data are also limited in their ability to provide information on specific SFAs with errors. Further, because states are not required to conduct administrative reviews of the School Breakfast Program, these data lack information about breakfast program errors. In the current economic environment, as increased numbers of families struggle to stay financially afloat and more children qualify for free or reduced-price meals, it is of even greater importance that federal dollars be effectively spent to meet the school meal programs’ goal of providing nutritious meals to children in schools. The APEC study’s estimate of $860 million in improper payments resulting from meal counting and claiming errors provided new information about weaknesses in the school meal programs and successfully pinpointed areas, such as the breakfast program, that are particularly vulnerable. However, this information has not yet been fully utilized to modify program oversight at the federal, state, and local levels in order to improve efforts to identify and address errors. Although the federally required oversight and monitoring processes for the school meal programs are designed to, in part, identify and address meal counting and claiming errors, gaps in these processes limit their strength as an internal control. The absence of a requirement to include the breakfast program in state and SFA reviews, as well as ineffective SFA annual reviews of schools, impede program monitoring efforts and leave the government vulnerable to continued erroneous payments. Further, outdated federal manuals and guidance may hinder SFA efforts to design or implement effective counting and claiming procedures, which also leaves the government vulnerable to continued erroneous payments. 
At the federal level, while the lack of data on both specific SFAs reviewed each year and errors in the breakfast program hinders USDA’s oversight ability, the agency is also missing an opportunity to use the data it already collects to identify states with significant counting and claiming errors and target assistance to areas of greatest risk. Finally, until officials who administer the school meal programs focus their attention on meal counting and claiming errors, it is unlikely that needed improvements will occur. While meal counting and claiming errors are often the result of basic human errors, such as the inaccurate addition of meal counts or incorrectly counting a meal as reimbursable because children are moving too quickly through the lunch line, holding states and SFAs accountable for implementing corrective actions can help minimize error frequency. To help states and SFAs improve their ability to identify and address meal counting and claiming errors, we recommend that the Secretary of Agriculture take the following actions: Require states to include the School Breakfast Program in their state administrative reviews of SFAs and require SFAs to include this program in their annual on-site reviews. Update the 1991 USDA manual on meal counting and claiming procedures to ensure that current guidance is reflected. Develop additional guidance and technical assistance for federally required SFA annual on-site reviews. For example, USDA, through its Web site, could provide a model form to be used for on-site reviews that indicates the aspects of meal counting and claiming procedures to review, or the Department could work through the National Food Service Management Institute or another organization to provide SFAs with technical assistance aimed at improving the quality of on-site reviews. Explore the feasibility of requiring SFAs to conduct third-party annual on-site reviews to ensure independence.
In addition, to assist federal efforts to target resources to states and SFAs at the greatest risk for these errors, we recommend that the Secretary of Agriculture: Develop procedures for using state administrative review data reported to FNS to assess risks and target oversight efforts associated with meal counting and claiming errors, and modify the FNS form on which states report the data so that it includes identification of which SFAs were reviewed each year and information from School Breakfast Program reviews. We provided a draft of this report to USDA for review and comment. In oral comments, USDA officials concurred with our recommendations. Officials also provided technical comments, which we incorporated into the report as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to relevant congressional committees, the Secretary of Agriculture, and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. To obtain information on federal, state, and local efforts to identify and reduce meal counting and claiming errors in the school meal programs, we used several methods. We reviewed pertinent federal laws and regulations, agency guidance, studies, and data, as well as interviewed U.S. Department of Agriculture (USDA), Food and Nutrition Service (FNS) officials in headquarters and all seven regional offices. We also conducted a Web survey of all states and site visits to six states and Washington, D.C. 
To obtain additional background information, we interviewed staff at the School Nutrition Association and the National Food Service Management Institute, and to obtain additional information on automated point of sale systems, we interviewed staff from two vendors of these systems—MealTime/CLM Group and School-Link Technologies. We conducted this performance audit from August 2008 to September 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To provide context for our analysis of actions taken to identify and address meal counting and claiming errors, we reviewed information on these errors from FNS’s “Access, Participation, Eligibility, and Certification Study,” which was published in 2007. This study provided the first national picture of improper payments in the National School Lunch Program and School Breakfast Program related to cashier/meal counting errors and aggregation/meal claiming errors. We determined that these data are sufficiently reliable for the purposes of our review. Reviews of the school meal programs conducted by the USDA Inspector General’s office also provided context on meal counting and claiming errors. Accordingly, we reviewed Inspector General reports issued from 2002 to 2007 that addressed administration of the school meal programs in selected school districts nationwide. Because of our interest in meal counting and claiming errors generally, we did not review Inspector General reports that specifically examined food service management companies’ involvement in administration of the school meal programs.
We reviewed reports addressing school districts in the following cities: Milwaukee, Wisconsin; Liberal, Kansas; Chicago, Illinois; Philadelphia, Pennsylvania; Kearney, Missouri; Platte City, Missouri; Leavenworth, Kansas; Bellwood, Illinois; Girard, Kansas; Effingham, Kansas; and New York City, New York. We also interviewed an official from the Inspector General’s office to gather background information on the objectives, scope, and methodology for these reviews. To gather additional information on meal counting and claiming errors nationwide, we reviewed USDA FNS headquarters’ data on state administrative review findings for school years 1998-1999 through 2002-2003. States report these data annually to FNS on the FNS-640 form, and FNS compiles datasets corresponding with the 5-year review cycles. We reviewed data from the most recently completed 5-year cycle for which full data were available at the time of our analysis and interviewed FNS officials to gather additional information about the data. Although these data have limitations, including that states do not always review all school food authorities (SFA) providing the school meal programs nationwide within each 5-year cycle, we determined that they are sufficiently reliable for the limited purposes of our review. To obtain national information on state efforts to identify and address counting and claiming errors in the school meal programs, we conducted a Web survey of state child nutrition program directors in all 50 states and the District of Columbia between February and March 2009. All of the state child nutrition program directors responded to the survey.
The survey included questions about the extent to which states have identified meal counting and claiming errors within their SFAs, state and SFA processes to identify and reduce errors, related state assistance provided to SFAs, challenges states and SFAs have experienced in identifying and reducing errors, and support provided by FNS to help states and SFAs address errors. The practical difficulties of conducting any survey may introduce errors, such as variations in how respondents interpret questions and their willingness to offer accurate responses. We took several steps to minimize these errors, including pretesting draft instruments and using a Web-based administration system. Specifically, we pretested draft instruments with state child nutrition program directors from four states (Maryland, Mississippi, New Mexico, and New York) in January 2009. We selected pretest states based on geographical disparity and the large percentage of students eligible for free and reduced price school meals in those states. We also considered recommendations from FNS regional offices. In the pretests we inquired about clarity, precision, and objectivity of the questions, in addition to flow and layout of the survey. We revised the final survey based on pretest results. Another step we took to minimize errors was using a Web-based survey. By allowing respondents to enter their responses directly into an electronic instrument, this method automatically created a record for each respondent in a data file and eliminated the need for and the errors associated with a manual data entry process. To further minimize errors, programs used to analyze the survey data and make estimations were independently verified to ensure accuracy. While we did not validate specific information that states reported through our survey, we took several steps to ensure that the information was sufficiently reliable for the purposes of this report. 
For example, during pretesting, we took steps to ensure definitions and terms used in the survey were clear and familiar to the respondents, categories provided in closed-ended questions were complete and exclusive, and the ordering of survey sections and the questions within each section were appropriate. Further, after states completed the survey, we reviewed the responses, identified those that required further clarification, and conducted follow-up interviews with respondents to ensure the information they provided was reasonable. In our review of the data, we also identified and logically fixed skip pattern errors—questions that respondents should have skipped but did not. On the basis of these checks, we believe our survey data are sufficiently reliable for the purposes of our work. To gather additional information on state and local efforts to identify and address counting and claiming errors in the school meal programs, we conducted site visits to states in six of the seven FNS regions from September 2008 to March 2009. We visited one state (California, Illinois, Massachusetts, Mississippi, and Texas) in each of the regions, except in the Mid Atlantic Region, where we visited both Washington, D.C., and Maryland. States selected provided geographic variation and had both high levels of school meal errors found during state administrative reviews and relatively high percentages of students eligible for free and reduced-price meals. We also considered recommendations of the relevant FNS regional offices when selecting states to visit. In addition, during site selection, we interviewed child nutrition program officials from Florida, Indiana, and New York to better understand state issues related to meal counting and claiming.
During each site visit, we interviewed state-level child nutrition program directors, as well as officials from two to three SFAs that either had experienced significant meal counting and claiming errors or had systems in place that were considered effective at identifying and reducing such errors. The 15 SFAs selected also provided some variation in school district type, including public, private, and charter. Through interviews with state and SFA officials, we collected information on actions taken and policies in place to identify and address meal counting and claiming errors and the types of challenges states and SFAs face in their efforts to do so. For each SFA, we also reviewed recent state review findings on meal counting and claiming errors and observed school meal procedures in one or more schools. We cannot generalize our findings beyond the states and SFAs we visited. Blake Ainsworth and Jeremy Cox, Assistant Directors; Rachel Frisk and Anjali Tekchandani, Analysts-in-Charge; Kris Trueblood; Joanie Lofgren; Walter Vance; Joanna Chan; Jim Rebbe; Kate van Gelder; and Mimi Nguyen also made significant contributions to this report. | In fiscal year 2008, the National School Lunch Program and School Breakfast Program provided meals to 30.9 million and 10.5 million children, respectively. Recently, the U.S. Department of Agriculture (USDA) issued the first estimate of improper payments due to meal counting and claiming errors in these programs, which was approximately $860 million (8.6 percent of federal program reimbursements) in school year 2005-2006. These errors include: (1) cashier errors, such as those made in determining if a meal meets the federal menu planning and nutrition requirements (meal counting), and (2) aggregation errors made when officials count and total meals for federal reimbursement (meal claiming). 
GAO was asked to review (1) actions taken by states and school food authorities (SFA) to identify and address meal counting and claiming errors; and (2) actions taken by USDA to help states and SFAs identify and address meal counting and claiming errors. GAO's steps included analyzing data on state administrative reviews of SFAs; surveying all states; conducting site visits; and interviewing federal, state, and SFA officials. Although states and SFAs conduct program integrity reviews of the school meal programs, gaps in federal requirements for these reviews limit their effectiveness at identifying meal counting and claiming errors. States and SFAs are generally not required to review the School Breakfast Program, and 21 states reported through GAO's survey that they do not review the breakfast program. However, USDA estimates that the percentage of meal counting and claiming errors is higher in the breakfast program than in the lunch program. Further, some states reported that SFA reviews of the meal programs are ineffective at identifying and reducing errors, which may be due, in part, to the self-assessment design of these reviews. Even when state and SFA reviews identify errors, meal counting and claiming errors persist. For example, in several SFAs that GAO visited, the same errors were identified during consecutive reviews. States and SFAs identified multiple factors that hinder efforts to address these errors, such as staff turnover, inadequate training, and school policies that complicate meal service. USDA has taken some actions to improve state reviews of SFAs, but it has not directly focused on oversight of meal counting and claiming. USDA recently provided new review forms and nationwide training to strengthen state reviews and also simplified the application process for state grants to conduct additional reviews of SFAs. However, USDA has not targeted its oversight efforts to identify or address meal counting and claiming errors.
For example, USDA regional offices' reviews of state administration of the school meal programs do not focus on these errors, and some regional officials could not provide information on the extent of these errors in the states they oversee. USDA also has not updated its meal counting and claiming manual since it was first issued in 1991. Further, while USDA collects annual data on findings from state reviews of SFAs, the agency has not used these data for oversight purposes or to assess risks associated with meal counting and claiming errors. |
The military has a system of pay grades and ranks that differs by military service. According to DOD, military rank is a badge of leadership, and responsibility for personnel, equipment, and mission grows with each increase in rank. Pay grades, such as E1 or O5, are administrative classifications used primarily to standardize compensation across the military services. For example, the “E” in E1 stands for “enlisted” while the “1” indicates the pay grade for that position. The other pay categories are “W” for warrant officers and “O” for commissioned officers. Some enlisted pay grades have two ranks. Figure 1 provides junior enlisted pay grades and ranks for each of the military services. DOD offers a wide range of benefits, many of which are directed at members living on installations, including junior enlisted servicemembers, and those with family obligations. These benefits include the availability of the following services and programs on most installations where servicemembers are stationed: Commissaries: DOD operates supermarket-type stores called commissaries that provide a noncash benefit for active-duty servicemembers by offering food and related household and health and beauty items that are similar to merchandise sold in commercial grocery stores. This merchandise is typically offered for sale at substantially reduced prices (including exemption from any sales taxes) when compared to retail prices at commercial grocery stores. DOD estimates that a family of four can save about $4,400 annually (or approximately 30 percent) by shopping at a commissary if all food purchases are made from the commissary. By law, commissaries sell items at cost plus a 5 percent surcharge, which is used to pay for the recapitalization of store-related infrastructure, including replacement, expansion, and improvement of existing commissaries and central product-processing facilities; maintenance and repair; and store-related information technology.
Exchanges: The military services’ exchange stores offer savings on shopping to servicemembers. The exchanges run department stores, uniform shops, gas stations, liquor stores, barber shops, fast-food restaurants, and many other retail operations on military installations, as well as online shopping. Active-duty servicemembers, National Guard and Reserve Component members, retirees, and eligible family members can shop at any exchange. A portion of exchange profits funds installation Morale, Welfare, and Recreation (MWR) activities, and exchanges may provide employment for military family members. Each service branch’s exchange system has its own name—Army and Air Force Exchange Service, Navy Exchange Service Command, and Marine Corps Exchange—and although each military exchange system has similar policies, services, and items for sale, each one is operated separately. Medical Care: DOD operates its own large, complex health system—the Military Health System—to provide a full range of medical care and services at no cost to active-duty military servicemembers and at either a reduced cost or no cost to other eligible beneficiaries—including dependents of servicemembers and some military retirees. Servicemembers obtain health care through the military services’ system of military treatment facilities, which is supplemented by participating civilian health-care providers, institutions, and pharmacies to facilitate access to health-care services when necessary. Active-duty servicemembers receive most of their care from military treatment facilities, where they are supposed to receive priority access over other beneficiaries, such as dependents and retirees. Food Services: DOD operates dining facilities—also called DFACs, mess halls, or galleys depending on the military service—on military installations to meet the feeding and sustenance needs of servicemembers who live or work on its installations.
These dining facilities may also provide a structured on-the-job training environment for food service personnel to meet the department’s warfighting mission. Servicemembers who live in on-base housing and do not receive basic allowance for subsistence are eligible to receive meals at the government’s expense. Morale, Welfare, and Recreation: The department’s MWR programs are intended to provide high-quality, consistent community support. These programs are classified into three categories that determine how they are funded: (1) Category A programs are mission-essential programs, funded almost entirely with appropriated funds, and include fitness, sports, libraries, single servicemember programs, and deployment support; (2) Category B programs are community-support programs, funded significantly with appropriated funds, and include outdoor recreation, recreation centers, leisure tours and travel, auto hobby, child and youth development programs, and skill development programs; and (3) Category C programs are revenue generators, funded almost entirely with nonappropriated funds, and include food, beverage, entertainment, military clubs, golf courses, bowling centers, marinas, and gaming machines. Unlimited use of MWR programs and services is authorized for active-duty servicemembers and their families, among others. DOD has policies and procedures at multiple levels—DOD, the military services, and selected installations we visited—that govern servicemember access to on-base services and programs, which includes access for junior enlisted servicemembers. More specifically, we found that these policies address eligibility, making the on-base services and programs available to all servicemembers. However, at selected installations, we found that budget considerations and utilization rates of the services and programs influenced decisions about implementation of these policies and procedures and affected all servicemembers, including junior enlisted servicemembers.
DOD has policies and procedures at the department, military service, and installation levels that govern on-base services and programs, including access, and these typically apply to all servicemembers, not just those in the junior enlisted ranks. Specifically, policy-making authority resides initially at the department level in the Office of the Secretary of Defense (OSD), with policies for specific services or programs primarily issued in the form of DOD Instructions or DOD Directives. The military services issue service-specific policy based on applicable DOD-level policy. Additional policy can be released at every level within a service, each of which provides greater granularity but is based on policy from a higher level. At the installation level, commanders make management decisions about the services and programs that are available on their installations, within the requirements framework derived from higher-level policies. At the installations we visited, we found that commanders who have decision-making authority for their respective installations further delegate responsibility for certain managerial and implementation decisions to program-level managers who possess more direct knowledge of the programs or services being provided. For example, we found that the installation commander at one installation delegated decisions about the hours of operation of the base’s recreation centers and other programs to the MWR Director because the installation commander felt that the Director could make more informed decisions based on utilization rates and other data gathered by the MWR staff. Our analysis of policies from multiple levels within the department—including OSD, the military services, and the installations we visited—found that these documents address servicemember eligibility and make the on-base services and programs available to all servicemembers.
In most cases, the policies we analyzed referenced either the entire enlisted population or the entire installation’s population and did not distinguish among specific groups—for instance, by rank or gender. For example, Defense Health Agency policy regarding health care includes provisions for all active-duty servicemembers, of which junior enlisted servicemembers are a subset, as part of a priority system for access to care. The policy specifies that active-duty servicemembers are in the first priority group for receiving care at military treatment facilities and clinics and sets time frames within which active-duty servicemembers should be able to make appointments and receive care. The policy does not single out or include any special provisions for providing access to care specifically for junior enlisted servicemembers. As another example, Army and Air Force policy on exchange-service operations lists eligible patrons who are authorized access to merchandise and services at exchange stores. According to the policy, uniformed or retired uniformed servicemembers, either on active duty or serving in any category of the Reserve Component, are entitled to unlimited exchange service benefits. This includes all members of the Army, the Navy, the Marine Corps, and the Air Force. Thus, the policy does not single out or include any special provisions for providing access specifically to junior enlisted servicemembers. Officials at the military service headquarters and installation levels stated that on-base services generally should be available to all active-duty servicemembers on an equal basis. We did find, however, that each of the individual military services has programs geared to single and unaccompanied active-duty servicemembers in the 18- to 25-year-old age range, which largely encompasses the junior enlisted ranks.
Specifically, the military services’ programs include (1) the Better Opportunities for Single Soldiers Program (Army), (2) the Liberty Program (Navy), (3) the Single Marine Program (Marine Corps), and (4) the Single Airman Program (Air Force). These programs are intended to address single servicemember quality-of-life issues and support commanders by providing a forum through which single servicemember quality-of-life concerns may be identified and recommendations for improvement may be made. Additionally, these programs are intended to connect single and unaccompanied servicemembers with opportunities for off-duty programs, activities, and special events designed to promote positive use of leisure time. Activities and events offered by the programs vary from installation to installation based on the participants’ interests, but they typically include holiday and special event activities, recreation and sports activities, trips and tours, concerts, life skills and career progression, and community involvement activities. Although the programs, policies, and procedures do not focus exclusively on junior enlisted servicemembers, according to program officials, participants at the activities and events offered by the programs are typically junior enlisted servicemembers. At the installations we visited, we found that policy decisions and implementation related to on-base services and programs directly affect access by all servicemembers—including junior enlisted—and were influenced by factors such as available budgetary resources and the utilization rates of services or programs. For example, officials at all four installations we visited reported that budget cuts and the effect of sequestration have diminished, for all servicemembers, either (1) the installation’s ability to provide services and programs or (2) its ability to provide services and programs at a level that meets the current need.
For example, at Joint Base San Antonio, officials said that civilian furloughs and sequestration significantly affected the delivery of medical services to all of its servicemembers at two of the three bases comprising Joint Base San Antonio. More specifically, according to a Joint Base San Antonio official, 30 of the 37 medical wing clinics furloughed civilians and experienced reductions in appointments or delays when patients sought treatment at the clinics during the furlough period. According to information provided by medical officials representing Joint Base San Antonio, the Joint Base San Antonio-Randolph Air Force Base clinic had to reduce the number of available appointments by 54 slots each week. In addition, wait times for pharmacy and mammography services increased by 20 percent. The officials further stated that, at the Joint Base San Antonio-Lackland Air Force Base Trainee Health Clinic, appointments were reduced by 30 slots a week, while appointments at the installation’s ambulatory surgery center were reduced by 540 a week (nearly 10 percent). The Joint Base San Antonio officials stated that these reductions and delays did not solely affect junior enlisted servicemembers, but rather all beneficiaries seeking care; the extent to which each population (active duty, dependents, and retirees) was affected is unknown because the delayed appointments were not tracked in this manner. Similarly, an official at Naval Station Norfolk stated that the combined effect of the continuing resolution in fiscal years 2013 and 2014 and sequestration resulted in reductions to the hours of operation for some of the installation’s MWR services and programs. For example, the official stated that the installation’s fitness center hours of operation were reduced by 3 hours on Saturday mornings and 19 group exercise classes were cancelled.
In addition, the installation’s Liberty Program recreation centers’ hours of operation were reduced from 84 to 72 hours a week at one center and from 66 to 57 hours a week at the other center—a reduction of 12 and 9 hours a week, respectively. Further, the official stated that other services and programs at Naval Station Norfolk were closed due to underutilization of the service or program. For example, the official stated that the combined Arts and Crafts and Wood Shop was closed in 2010 and the recreation pool was closed in 2013 due to low patronage. Similarly, officials at Camp Lejeune said the Wood Hobby Shop closed in 2014 due to low utilization. At Fort Campbell, the Army and Air Force Exchange Service General Manager stated that he makes management decisions, such as the hours of operation for the installation’s more than 32 exchange facilities and 32 service facilities—including the main exchange, express stores/gas stations, military clothing store, and the Class Six store—based on sales and troop deployments. The exchange manager further stated that Fort Campbell used to have an express store that was open 24 hours a day, but few patrons and low sales in the early morning hours did not warrant keeping it open around the clock. As a result, the store’s hours were reduced, and it now operates from 5:00 a.m. to midnight. Also, he said exchange stores may close earlier when troops are deployed, since there are fewer customers, and stay open later when troops return from deployment. Junior and senior enlisted servicemembers from selected installations we visited expressed a wide range of perceptions regarding access to on-base services and programs and indicated that, in some cases, access is a problem.
DOD and the military service data-collection mechanisms and resultant data—including surveys, utilization rates of services, and other means of providing feedback—do not fully capture potential access issues associated with on-base services and programs, including those identified in our discussion groups. DOD also has methods for collecting and sharing information on initiatives and other identified good practices across the department, but these efforts have a broader purpose and do not specifically focus on junior enlisted access issues. During our visits to four installations, the participants in our discussion groups provided a range of comments—some positive, but a majority negative—about the services and programs on their installations. For example, junior enlisted servicemembers in 1 of our 11 discussion groups expressed interest in staying on the installation and using some of the services and programs available to them because of their convenience and relatively low cost, but added that some of these services had recently been closed or had their availability reduced. Although the focus of our discussion groups and the questions we asked pertained to junior enlisted servicemembers, we also heard similar concerns from participants in the senior enlisted discussion groups. Senior enlisted participants also stated that issues may be more prevalent among junior enlisted servicemembers because their lower rank may not garner them the attention needed when they seek assistance from services and programs; however, senior enlisted participants stated that they do take active roles in assisting junior enlisted servicemembers in addressing issues. We categorized the comments that junior and senior enlisted servicemembers provided in our discussion groups into 13 main categories related to on-base services and programs and other feedback mechanisms to installation leadership.
Although the participants in our groups provided some limited positive comments about the services and programs, based on our analysis of the discussion group comments, we identified specific areas where junior and senior enlisted servicemembers in our discussion groups most frequently expressed concerns about access issues. Those areas include (1) dining facilities, (2) medical care, and (3) transportation. We provide a brief summary of the concerns identified by servicemembers below. Additional examples of these concerns can be found in appendix II. Table 1 depicts the number of discussion groups where each of our main categories was discussed. Participants in all 11 junior enlisted and all 6 senior enlisted servicemember discussion groups at all four installations we visited made comments pertaining to the dining facilities on the installation. Further, junior enlisted servicemembers in 10 of 11 discussion groups and senior enlisted servicemembers in 6 of 6 discussion groups stated they had concerns about (1) access to meals and dining facilities—including, for example, parking, distance to the facility, or dining facility closures—and (2) the hours of operation of the dining facilities, among other things. For example, participants in one junior enlisted discussion group raised concerns about the hours of operation for that installation’s only dining facility, with one individual stating that work hours had to be adjusted to account for the dining facility’s schedule to allow time for lunch. The servicemember further stated that if work ran late, he had to rush to the dining facility to get dinner before it closed at 6:30 p.m. However, leadership at the installations we visited stated that accommodations are made to facilitate access—for example, the use of unit vehicles to transport servicemembers who do not have personal vehicles—and to address evolving environments and installation demographics.
In all 11 discussion groups with junior enlisted servicemembers and in all 6 discussion groups with senior enlisted servicemembers, participants provided comments about medical care. While we did hear some positive comments from junior enlisted servicemembers—for example, participants in two of our junior enlisted discussion groups stated that the medical treatment facility at their installation provided great access to medical care—we heard, among other things, concerns about challenges with making medical appointments, long wait times for acute care, and lengthy waits to obtain referrals or specialty appointments, even though DOD’s policy is to give active-duty servicemembers high priority. For example, 6 of the 11 junior enlisted and 5 of the 6 senior enlisted discussion groups reported having problems with, or knowledge of problems with, scheduling medical appointments in a timely manner. In particular, one junior enlisted discussion group stated that it can take up to a week to make an appointment through the installation’s designated medical appointment booking system due to, for example, caller wait times making it difficult to get through to make an appointment. Medical leadership from the installation stated they spend a lot of time making sure access to care is consistent across the installation and across ranks, but, in some cases, a servicemember may be upset that an appointment could not be made for the same day. Participants in all 11 discussion groups with junior enlisted servicemembers and in all 6 discussion groups with senior enlisted servicemembers provided comments related to transportation on the installation. More specifically, our analysis identified that in 6 of 11 junior enlisted and 4 of 6 senior enlisted discussion groups, participants stated they had access issues due to the installation’s configuration, limited on-base transportation, or nonownership of personal vehicles that may have inhibited access to on-base services and programs.
Senior leadership officials at one installation we visited stated that the barracks where their junior enlisted servicemembers live are not colocated with their work station, which presents challenges for those servicemembers who do not have their own means of transportation. Those officials further stated that it would make more sense for their junior enlisted servicemembers to be housed in the barracks across the street from their work station, but those barracks are used by other units. As another example, one installation we visited had an official on-base shuttle, but participants in our discussion groups stated that the shuttle was viewed as being more for the students and trainees on the installation than for the permanent servicemembers. According to military service leadership, installations have made attempts to rectify the transportation issue, and some installations provide transportation such as on-base shuttles, buses, and unit-provided vehicles. DOD, the military services, and individual installation commanders have many formal mechanisms available, as well as informal mechanisms such as individual feedback to supervisors, to obtain the perspectives of the junior enlisted servicemember population. However, these formal mechanisms—for example, surveys and utilization rate data—and informal mechanisms do not fully capture details about potential access issues occurring on installations, including those identified by junior enlisted servicemembers in our discussion groups. More specifically, we found that the formal and informal mechanisms allow servicemembers, including junior enlisted servicemembers, to provide feedback, express concerns, or make suggestions about on-base services, among other things. The formal mechanisms include surveys, comment cards, and the Interactive Customer Evaluation system, as well as data collected by the installation on utilization rates of the various on-base services and programs.
The informal mechanism consists of the junior enlisted servicemember’s chain of command, through which junior enlisted servicemembers are able to report concerns and suggestions to their leadership. In a junior enlisted servicemember’s chain of command, the servicemember may provide information—including concerns or issues related to access to on-base services and programs—to his or her first-line supervisor. Junior enlisted servicemembers are also able to contact their unit’s or installation’s senior enlisted advisors and the base Inspector General’s Office to report issues they may have on base. We reviewed the surveys and other formal mechanisms that were provided by DOD, the military services, and the installations we visited. Specifically, we found that the surveys we reviewed (1) asked servicemembers whether they used the services and programs and (2) asked questions about satisfaction with some elements of some select services and programs, but not others. These surveys did not, however, ask questions specific to accessing all services and programs or provide respondents with the opportunity to address why they were unsatisfied through follow-up or open-ended questions. For example, the Defense Manpower Data Center conducts the Status of Forces Survey of Active Duty Members—a web-based and pen-and-paper-administered survey conducted on behalf of the Under Secretary of Defense for Personnel and Readiness—which asks active-duty servicemembers about a range of issues, including overall satisfaction with military life; retention; readiness; deployments; and various on-base services and benefits, among other things. According to the Defense Manpower Data Center’s survey instrument, the purpose and focus of the 2012 Status of Forces Survey of Active Duty Members was to address a total of 29 surveyed topics, including 10 core items covering topics such as overall satisfaction, retention, readiness, and financial health, among other things.
With regard to on-base services and programs, we found that the 2012, 2009, and 2007 Status of Forces Surveys of Active Duty Members asked servicemembers about their satisfaction with the (1) hours of operation of the exchange; (2) convenience of locations of the exchange; (3) availability of medical and dental care; (4) ability to get medical and dental care appointments; (5) waiting time in the clinic; and (6) convenience of locations of medical facilities, but did not include similar questions regarding satisfaction with on-base MWR programs or dining facilities. Based on our analysis, we determined that the questions on the Status of Forces Surveys of Active Duty Members were focused on satisfaction with only certain elements of a limited number of programs and services and did not specifically ask whether, or how easily, servicemembers could access all on-base services and programs. Further, we also found that these surveys did not ask follow-up questions or allow for open-ended responses to obtain data on, or the perspectives of, servicemembers who were not satisfied with certain elements related to access, for example, hours of operation. In addition, the Status of Forces Survey of Active Duty Members at one time included a question about personal vehicle ownership at the servicemember’s duty station. This question, which we found to provide insight into one element of access—transportation—was last included in the 2009 Status of Forces survey and did not appear in the 2012 survey. According to estimates from the 2009 survey, 79 percent of junior enlisted servicemembers responded that they own personal vehicles. According to a DOD official, the department changes topic focus in each survey conducted and focuses on topics and issues not addressed in previous, recent iterations. The reason for such changes is to keep the survey from becoming too cumbersome and overly long for respondents.
Similarly, the department conducted the 2014 Morale, Welfare, and Recreation Survey, which specifically addressed services that fall under the MWR umbrella. However, based on our analysis of this survey’s question set, we found that the questions on this survey focused on measuring overall satisfaction with these services and did not ask about the extent to which these services were accessible to servicemembers. Two Marine Corps surveys—The Quality of Life in the United States Marine Corps Active Duty Survey 2012 for Support Systems and The Quality of Life in the United States Marine Corps Active Duty Survey 2012 for Residence—asked respondents broad questions about satisfaction with specific services or their residence, respectively. However, these surveys also did not ask about servicemembers’ access to on-base services or programs. We also found that utilization data collected by the department and installations do not capture whether, or why, a person could not access a service. For example, medical access-to-care data are tracked by the individual military treatment facilities and the Defense Health Agency. These data capture, among other things, facilities’ performance in meeting access-to-care standards for each type of appointment category, specifically with regard to the wait time between making an appointment and seeing a medical provider. However, these data do not delineate facilities’ performance in meeting access-to-care standards by rank or grade. When asked, medical officials at the installations we visited stated that, while they make efforts to remind servicemembers of scheduled appointments and to follow up when appointments are missed, they do not track data by rank, nor do they track data on missed appointments. Further, medical providers at the installations we visited stated they do not document or track the reasons why servicemembers cancel or cannot make appointments.
According to installation officials we spoke with at three installations, leadership attempts to informally obtain the perspectives of the junior enlisted populations on the installations they serve. For example, during our visit to Naval Station Norfolk, officials from the Liberty program stated that they held a focus group, with pre-registration, to obtain junior enlisted servicemembers’ perspectives on events and programs at the installation. After the focus group was conducted, the Liberty Program Manager at Naval Station Norfolk provided an after-action report for the session; however, the value of the report was limited due to the low turnout at the session, which was conducted with three participants. One senior official at Naval Station Norfolk also stated that he meets every individual junior enlisted servicemember upon assignment and arrival at Naval Station Norfolk, and that he walks around the installation and informally interacts with junior enlisted servicemembers at installation meetings. In addition, at Joint Base San Antonio-Randolph Air Force Base, one senior enlisted official stated that he hosts an informal dinner for junior enlisted servicemembers where he discusses any concerns. Further, one senior official we spoke with at Fort Campbell recently discussed data-collection efforts and other avenues of soliciting opinions from servicemembers with the installation commander, but no action has been taken to date. These informal mechanisms depend, however, on a commitment by leadership to maintain the efforts to obtain the perspectives of junior enlisted servicemembers. Moreover, even with these various feedback mechanisms available to junior enlisted servicemembers, we heard concerns at several levels within the department that information may not be reaching leadership or is not acted upon when received.
For example, officials in OSD and the headquarters of the four military services were unsure whether the information collected and obtained by headquarters’ personnel reaches program managers at the installation level. Headquarters officials from the Navy and the Marine Corps were unsure whether there was a problem with dissemination of information, but stated that, if a problem exists, it probably occurs at the installation-program level and not necessarily at the headquarters level of the military services. In addition, participants in 9 of 17 discussion groups—6 junior enlisted discussion groups and 3 senior enlisted discussion groups—stated that they believe information is not reaching installation leadership or that feedback provided through other mechanisms, such as the Interactive Customer Evaluation tool, may be ignored or not received by leadership. Further, participants’ perceptions of their experiences with leadership follow-up, and with seeing actions taken in response to their concerns, were mixed, both positive and negative. At the installations we visited, officials stated that potential problems associated with information and feedback sharing may be a result of the military culture, which encourages issues to be addressed at the lowest possible level. Under this view, it would not be necessary to alert the higher levels of installation leadership or the headquarters service level of an issue if it has already been corrected. However, leadership needs information on issues in order to take appropriate action, particularly if trends emerge. Further, according to DOD officials, access to on-base services and programs is not believed to be a widespread problem that warrants a department-wide response.
Military service officials stated that the questions about satisfaction with services and programs that appear on existing surveys and other data-collection mechanisms are sufficient to obtain needed data and information on any potential access issues. However, based on our analysis, we found that surveys and other formal mechanisms from all levels of the department—DOD, other departmental agencies, the military services, and the individual installations—do not fully capture the data and other information needed to provide leadership with comprehensive insight into the challenges that junior enlisted servicemembers may face when accessing on-base services and programs. Standards for Internal Control in the Federal Government state that agencies should identify, record, and distribute pertinent information to the right people in sufficient detail, in the right form, and at the appropriate time to enable them to carry out their duties and responsibilities. In addition, management should ensure there are adequate means of communicating with, and obtaining information from, those who may have a significant effect on the agency achieving its goals. In September 2001, we reported that top leadership commitment is crucial in developing a vision, initiating organizational change, maintaining open communications, and creating an environment that is receptive to innovation. Part of this, we reported, includes creating an environment of trust and honest communication in which leaders make themselves available to employees, promote open and constructive dialog, and are receptive to ideas and suggestions from employees at all levels.
Without reviewing current data-collection mechanisms to help determine whether specific information on junior enlisted servicemember access to on-base services and programs is collected, available, and disseminated to relevant decision makers, and without making any adjustments necessary to address any identified deficiencies, installation leadership may not be able to take appropriate action based on that information when making decisions about the management of such on-base services and programs. Moreover, officials at DOD and the military services could be unaware of potential access issues that could be resolved through new policies or updates to existing policy at the service and departmental levels. We identified a number of efforts under way to identify practices that could enhance services and programs on installations; however, these efforts were not necessarily intended to improve the way DOD and the military services collect information and data from servicemembers—particularly junior enlisted servicemembers—about potential challenges they experience accessing on-base services and programs. Instead, DOD intends for information from these efforts to be collected and shared across DOD for possible adoption and implementation by other military services and installations. Further, we found that, while DOD strategically collects and shares some of this information at the military services’ headquarters level and attempts are made to disseminate information to installations, DOD and military service officials could not clearly identify the extent to which successful initiatives and other such good practices are shared within and across the services and among the levels of the department, or the extent to which they focus on junior enlisted servicemembers.
As an example of one such effort at the department level, OSD established the Common Services Task Force in 2012 to collaborate on, identify, and implement practices and initiatives—such as identification and elimination of duplicative processes—within the Military Community and Family Policy program areas. The purpose of the task force is to improve organizational effectiveness, increase economies of program delivery, and reduce costs of related overhead functions above the installation level without compromising program delivery to the end-user. More specifically, this task force was asked to (1) review the total cost and methods of providing common services for military servicemember and family support programs DOD-wide; (2) conduct an in-depth review of overhead for 15 separate program areas, including lodging, fitness, aquatic, and wellness programs, among others; and (3) identify possible DOD-wide effects. Officials described the task force as being in its infancy and as a living working group that will continuously evolve and grow over time to best serve the needs of military servicemembers. The task force also includes representatives from each of the military services to help ensure that practices and initiatives are shared across the services. DOD maintains other boards—for example, the Morale, Welfare, and Recreation Transformation and Innovation Working Group—each of which has its own specific programmatic focus. According to officials, all of the military services have representatives on these boards. However, although the efforts of these boards and any resultant actions may benefit junior enlisted servicemembers by addressing issues for all servicemembers, the focus of these boards is broader than identifying or addressing issues specific to junior enlisted servicemembers. We also found that each service pursues multiple opportunities to identify other good practices and share information.
These efforts—although not focused solely on junior enlisted servicemembers—include the following:

- Military service officials stated that they look to the practices of colleges and universities when planning their installations and making management decisions about various services and programs.
- According to a Navy official, the Secretary of Defense mandated a review of the Military Health System in June 2014, with access to care being a specific focus of the review. Recommendations resulting from the review included standardizing training, reporting, and business practices through policy and DOD Instructions. The official further stated that action plans were subsequently submitted by each of the services to address the issues identified by the Military Health System review.
- Navy officials stated that the Department of the Navy has agreements with universities, such as Indiana University, that conduct research. Navy officials stated they can reach out to these institutions on particular topics to, for example, conduct a research project and benchmark some of the Navy’s practices against those at universities or colleges.
- Marine Corps officials stated that they are involved with the aforementioned department-level groups, but they also hold advocacy working groups that cover practices and initiatives that are shared across Marine Corps installations.
- According to Air Force officials, the Department of the Air Force relies on its Services Division, located in San Antonio, Texas, to conduct its research. Information garnered from research on any initiatives and good practices is shared with installations via a web link located on the Air Force Portal. The Services Division looks at comparable environments at colleges and universities for common areas and food services and worked with the National Association of College and Food Services. The Air Force also uses data from surveys to validate the other data obtained through the research conducted.
In addition to the DOD working groups, Army officials stated that the Department of the Army holds weekly Commander’s Update Briefings with certain commands and that similar Commander’s Update Briefings are held internally by commands. Officials further stated that these weekly meetings and briefings facilitate sharing good practices. In addition, an Army official stated that the Army Family Action Plan is a forum to obtain input from Army servicemembers, which is used to alert commanders and other leaders of areas of concern. This official stated further that issues identified within the service’s single soldier program that cannot be solved within the program may be elevated to the Army Family Action Plan, where policy may be enacted or amended to address the issues or concerns. This official also stated that multiple levels within the Army have established resources to document issues and good practices, which are assembled by the single soldier program to capture experiences, successes, and failures, among other things. According to other Army officials, the Office of the Secretary of Defense and the Defense Manpower Data Center surveys are also resources used to improve programs and services. However, we found that these efforts vary in how information is disseminated, which could hinder the department’s and services’ abilities to share information with all relevant individuals for decision-making purposes. For example, Army officials stated that challenges in sharing information have existed over the past 20 years, as the Army does not have a centralized effort to review and fund research initiatives. These officials further stated that research and surveys are conducted at multiple locations, which makes sharing information a huge challenge.
In addition, according to officials from the military services, there are possible lapses in the sharing of initiatives and other good practices across installations, which may limit installation program managers’ knowledge of options for successful practices. According to those officials, budget cuts and other travel restrictions have limited attendance at conferences and other face-to-face coordination opportunities and have also potentially caused an information gap between the program service managers at the installations. For example, Air Force officials stated that the service used to hold conferences for managers and directors to attend to share this information; however, sequestration affected travel and funding for these events. Marine Corps officials further stated that if a gap in information sharing of practices and initiatives exists it would likely occur at the program-services level, because restrictions to travel and training have made it harder to bring people together to communicate face-to-face. As a result, according to officials, it is left up to the managers and directors from each of the services to conduct their own research to stay on top of their fields. Standards for Internal Control in the Federal Government state that agencies should identify, record, and distribute pertinent information to the right people in sufficient detail, in the right form, and at the appropriate time to enable them to carry out their duties and responsibilities. The standards also state that identifying and sharing information is an essential part of ensuring effective and efficient use of resources. As indicated above, all of the military services identified existing efforts to gain and share information across the department and among their own installations. 
However, because these efforts are spread across the department and, in some cases, controlled by different groups, DOD and military service officials were unable to clearly identify or provide documentation on the extent to which the department has reviewed and captured existing methods to determine whether there are other opportunities to address servicemember issues—including, for example, the access issues associated with dining facilities, medical care, and transportation identified by junior enlisted servicemembers in our discussion groups. As noted above, our prior work has also shown that top leadership commitment is crucial in developing a vision, initiating organizational change, maintaining open communications, and creating an environment that is receptive to innovation. Even with a commitment from department-level leadership to share initiatives and other good practices, limitations to the sharing of such information within and across the department exist, and information is not reaching those in leadership positions who could benefit from knowledge of other successful, efficient, or innovative approaches to addressing access issues for servicemembers and, more specifically, junior enlisted servicemembers. Further, without reviewing existing methods of information sharing on initiatives and other good practices identified at all levels of the department, including efforts to identify and address junior enlisted access issues and share this information at all levels, the department is missing opportunities to gain valuable information about this population—information that could further efforts to provide a quality of life that encourages servicemembers to continue their service and that could contribute to DOD’s goal of a trained and ready force. Enlisted servicemembers constitute a majority of the active-duty military, and junior enlisted servicemembers are key to maintaining a trained and ready force.
To further the department’s goal of a trained and ready force, DOD provides multiple services and programs on its installations—services and programs that servicemembers living in on-base housing, including junior enlisted servicemembers, rely on. The department has a variety of methods to collect data and information on the use of these services and programs, and to share information on initiatives and other good practices across the department, the military services, and installations, but visibility over any access issues experienced by junior enlisted servicemembers is limited. Specifically, without reviewing current data-collection mechanisms to consider appropriate changes that would help collect information directly related to access to on-base services and programs, decision makers have limited visibility into whether services and programs are available to their targeted audience. Similarly, without reviewing existing methods of information sharing on initiatives and other good practices across the department in order to consider ways of better leveraging these methods to address junior enlisted servicemember issues, DOD is missing opportunities to strengthen its provision of on-base services and programs to its personnel. Taking these steps would further the department’s efforts to provide a quality of life—through its on-base services and programs—that encourages servicemembers to continue their service and would support the development of a trained and ready force.
To help ensure that junior enlisted servicemembers who need and rely on the services and programs provided on military installations have access when needed and that departmental leadership has visibility over issues affecting this population, we recommend that the Secretary of Defense direct the Office of the Under Secretary of Defense for Personnel and Readiness, in collaboration with the Secretaries of the military services and other defense agency leaders, to take the following two actions:

- review current data-collection mechanisms and consider appropriate additions and revisions to help ensure that specific information on junior enlisted servicemember access to on-base services and programs is collected, available, and disseminated to relevant decision makers, and have decision makers take appropriate action on the basis of that information; and
- review existing methods of information sharing on initiatives and other good practices identified within and across the department, the military services, and individual installations and consider adding mechanisms to better leverage those existing methods—such as the Common Services Task Force—to help ensure that issues associated with junior enlisted servicemember access are identified and, to the extent possible, addressed, and that such information is shared at all levels of the department.

In written comments on a draft of this report, DOD concurred with our two recommendations to help ensure that junior enlisted servicemembers have access to on-base services and programs and that departmental leadership has visibility over issues affecting the junior enlisted servicemember population. DOD’s comments are reprinted in appendix III. DOD also provided technical comments on the draft report, which we incorporated as appropriate.
Regarding our first recommendation, DOD stated that the department will consider appropriate additions and revisions to its data collection mechanisms to help ensure specific information on the junior enlisted servicemember is collected, available and disseminated to relevant decision makers, and when appropriate, the decision makers can take action based on the information collected. Regarding our second recommendation, DOD stated that a review of existing methods of information sharing on initiatives and other good practices can greatly improve the department’s ability to identify issues associated with junior enlisted servicemember access to base services. DOD further stated that the department will consider adding necessary mechanisms to leverage the methods better; to ensure issues associated with junior enlisted servicemember access are identified and, to the extent possible, addressed; and to share such information at all levels of the department. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense; the Under Secretary of Defense for Personnel and Readiness; the Secretaries of the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. For both objectives, we focused on the population of unaccompanied junior enlisted servicemembers in grades E1 through E4 who reside in on-base housing on Department of Defense (DOD) installations in the continental United States. 
We focused on these servicemembers because they have unique circumstances—such as youth and inexperience living on their own—and potentially rely more heavily on the on-base services and programs because they reside in on-base housing. Further, for the purposes of this review, we define “access,” with regard to on-base services and programs, as (1) the eligibility to use—that is, through provisions in policy—and (2) the ability to gain entry to, which includes, for example, hours of operation, availability of transportation, and proximity to other on-base facilities, including junior enlisted servicemember housing and work stations. We developed this definition of access based on our review of DOD guidance and other documents related to the users and beneficiaries of these on-base services and programs, as well as through interviews with officials at DOD, the military services, and the four installations we visited. We also used this definition of access with DOD officials to guide our discussions throughout the review. To evaluate the extent to which DOD policies and procedures for on-base services and programs consider access by junior enlisted servicemembers and any factors that influence their implementation, we obtained and analyzed relevant and current DOD, military service, and installation-specific policies and procedures for the services and programs provided to servicemembers on installations. We analyzed these documents to determine whether the policies (1) were current—that is, had been issued or updated within the past 10 years; (2) identified who they applied to; (3) identified the responsible offices or individuals with oversight for the various services and programs; (4) addressed hours of operation; (5) addressed facility standards; (6) identified eligible patrons; and (7) specifically addressed junior enlisted servicemembers.
In addition, we interviewed officials from DOD, the military services, and selected installations who have responsibility for implementing the policies and procedures for the services and programs provided to servicemembers on installations and discussed any factors that may affect or impede the provision of services to junior enlisted servicemembers. To select installations for our visits, we analyzed data and demographic information about populations at each military installation in the continental United States, considering seven data elements: (1) the number of junior enlisted servicemembers stationed at the installation; (2) the availability of on-base services, programs, and facilities; (3) ratios of civilian and retiree populations (as available); (4) geographically dispersed locations; (5) the primary mission of the installation (operational versus training); (6) single service versus joint base; and (7) proximity to the nearest urban center. We selected four military installations to visit—one per military service: Army (Fort Campbell), Navy (Naval Station Norfolk), Marine Corps (Camp Lejeune), and Air Force (Joint Base San Antonio)—to reflect a range of the aforementioned factors. Although we selected a range of installation types based on our selection criteria, the locations we visited are not representative of all DOD installations.
To evaluate the extent to which DOD and the military services collect and share information and data on junior enlisted servicemember access to on-base services and programs, we analyzed the most recent DOD, service, and selected installation-level data-collection mechanisms, such as surveys and other feedback mechanisms, to identify questions and information related to the use of, access to, and satisfaction with services and programs on military installations, specifically focusing on questions related to access to on-base services and programs and whether they were targeted to the junior enlisted populations. We reviewed the following mechanisms and data sources:

- Defense Manpower Data Center, 2013 Status of Forces Survey of Active Duty Members, Survey Instrument and Tabulated Responses
- Defense Manpower Data Center, 2012 Status of Forces Survey of Active Duty Members, Survey Instrument and Tabulated Responses
- Defense Manpower Data Center, 2009 Status of Forces Survey of Active Duty Members, Survey Instrument and Tabulated Responses
- Defense Manpower Data Center, 2007 Status of Forces Survey of Active Duty Members, Survey Instrument
- DOD Morale, Welfare, and Recreation Customer Satisfaction Survey
- Department of Defense, TRICARE In-Patient Satisfaction Survey
- Department of Defense, TRICARE Out-Patient Satisfaction Survey
- 2014 Military Health System Review Final Report
- Navy Region Mid-Atlantic Customer Satisfaction Survey Results
- Naval Station Norfolk Morale, Welfare, and Recreation Customer
- 2012 Army Morale, Welfare, and Recreation Services Survey—Army
- 2012 Army Morale, Welfare, and Recreation Services Survey—Fort
- Army Installation Management Command 2011 Leisure Needs Survey
- Marine Corps 2013 Exchange—Customer Satisfaction Index Survey
- Quality of Life in the United States Marine Corps Active Duty Survey 2012, Support Systems Domain Analysis
- Quality of Life in the United States Marine Corps Active Duty Survey 2012: Residence Domain Analysis
- Quality of Life in the United States Marine Corps 2012 Survey Analysis: Executive Brief
- Marine Corps Camp Lejeune Chow Hall Utilization Rate Data
- Marine Corps Camp Lejeune Single Marine Program Needs Assessment Survey 2014 Template
- Marine Corps Camp Lejeune Recreation Center Comment Card and Marine Corps Camp Lejeune Mess Hall Comment Card
- Camp Lejeune Community Service Retail Division Report
- U.S. Air Force Services Agency 2010 Caring for People Results: Executive Summary of Program and Results (2011)
- Air Force 2014 Dining Facility Survey Comments

We reviewed these mechanisms to answer the following two questions: (1) Does the mechanism ask about access to the service/facility? and (2) Does the mechanism directly address junior enlisted servicemembers? We identified and reported on the extent to which such data-collection mechanisms exist and what they are intended to measure. In some limited instances, we identified and reported the results of such mechanisms where questions were deemed related to this review. However, we did not assess the quality or reliability of any of these data-collection mechanisms or the data that resulted from them, because those results did not materially affect this report’s findings. During the visits to four installations, we conducted discussion groups with junior and senior enlisted servicemembers to gather illustrative examples about access to services. We conducted a total of 17 discussion groups—11 with junior enlisted servicemembers and 6 with senior enlisted servicemembers—with approximately 8 to 16 participants per group. Officials at each of the installations selected participants for each group based on specific criteria provided by our team. The criteria, provided to each installation prior to our trip, specified that the participants included in our junior enlisted discussion groups be in grades E1 through E4, reside in on-base unaccompanied housing, and work in a range of occupations, among other things.
For the senior enlisted discussion groups, the criteria specified that participants be in grades E7 through E9 and have some supervisory capacity over junior enlisted servicemembers. Additionally, even though the focus of this review is on junior enlisted access to services and programs, senior enlisted servicemembers—E7 through E9—were an important segment to meet with because they manage the junior enlisted population and could provide their perceptions on junior enlisted issues that were potentially raised to them. We designed the composition of our discussion groups to ensure that we spoke with servicemembers from each of the four military services at locations across the continental United States at different types of installations. However, the results of our discussion groups and the comments provided may not be generalized to the entire DOD junior enlisted population. The discussion groups at all locations, with two exceptions, were delineated into three groups: two junior enlisted and one senior enlisted. The first exception was at Joint Base San Antonio-Randolph Air Force Base, where we held two discussion groups due to the limited population of servicemembers that met our criteria. The second exception was at Joint Base San Antonio-Fort Sam Houston, where discussion groups consisted of the following: junior enlisted Army servicemembers in pay grades E1 through E4; junior enlisted Navy and Air Force servicemembers in pay grades E1 through E4; and senior enlisted Army, Navy, and Air Force servicemembers in pay grades E7 through E9. Our purpose at Joint Base San Antonio was to capture data from servicemembers working and residing at a joint base managed by a service other than their own, because they may experience issues that might not occur on a predominantly single-service installation, as was the case with Naval Station Norfolk, Fort Campbell, and Camp Lejeune. We used six questions in all of the junior enlisted discussion groups.
The questions were as follows:

- What on-base services and programs do you use the most at this installation?
  - Health Care (Medical/Dental/Mental Health)
  - Recreation Facilities (Gyms/Fitness Centers/MWR programs)
  - Dining Facilities
  - Base Exchanges/Commissaries/Clothing and Sales
- What challenges have you had in accessing the on-base services and programs at this installation?
  - Are the challenges related to lack of transportation?
  - Are the challenges related to inconsistent or inadequate hours of operation?
- For any challenges you have had, have you voiced your concerns to your immediate supervisor or other leadership? If yes, what changes, if any, have you seen as a result of voicing these concerns?
- What other ways do you have for voicing your concerns (other than voicing your concerns to your immediate supervisor or other leadership)? [Optional probe, if participants are silent: Have you participated in Quality of Life or other big surveys? Customer service surveys or feedback at facilities?]
- What other suggestions do you have for improving access to on-base services and programs?
- What things are working well in accessing the on-base services and programs provided at this installation?

We also used a similar set of questions in all the senior enlisted discussion groups:

- To what extent have you heard or been made aware of junior enlisted personnel having any challenges accessing on-base services and programs? Please describe their challenges accessing on-base services and programs:
  - Health Care (Medical/Dental/Mental Health)
  - Recreation Facilities (Gyms/Fitness Centers/MWR programs)
  - Dining Facilities
  - Base Exchanges/Commissaries/Clothing and Sales
  - Are the challenges related to lack of transportation? Does your installation have a shuttle system or other on-base transportation?
  - Are the challenges related to inconsistent or inadequate hours of operation?
  - Are some groups such as E1s or E2s experiencing these issues more than other junior enlisted personnel?
How devoted is senior leadership (such as the Command Master Chief or Installation Commander) at this installation to addressing the challenges junior enlisted personnel have accessing on-base services and programs?

What changes, if any, have resulted from junior enlisted personnel voicing their concerns about accessing on-base services and programs? Other than voicing their concerns to their immediate supervisor, what other means do junior enlisted personnel have for voicing their concerns?

Are you aware of any service-wide or installation-specific data-collection methods (surveys, pulse checks, town halls, etc.) that have been used to assess whether junior enlisted have ready access to services on this base?

What other suggestions do you have for improving access for junior enlisted personnel to on-base services and programs?

Do you have any other comments for GAO regarding access to on-base services and programs at this installation?

We developed these questions with one of our methodologists to help ensure they would elicit unbiased responses from discussion group participants. Using content-analysis procedures, we used the responses from each discussion group to create 13 categories that accounted for most comments: commissaries; dining facilities; exchanges; financial assistance; fitness centers and gyms; leadership; legal services; health care; morale, welfare, and recreation (MWR); postal services; surveys and comment cards; transportation; and voting assistance. Categories were further delineated into subcategories based on the specific topic of the comment (e.g., medical care—sick call, medical care—pharmacy). We then categorized comments from individual participants into the subcategories, noting the tone of each comment to determine the extent to which the comments about a particular service or program were positive, negative, or neutral.
To conduct this analysis, we assessed each comment to assign it to a specific category and to rate its tone (positive, neutral, or negative). Once all comments were assigned to a specific category and subcategory, one analyst tallied the comments for each category in a spreadsheet. A value of 1 was assigned to a group when one or more comments in a specific category were identified, and a value of 0 was assigned when no comments were identified for that category. We then compared the ratings and were able to discern the overall tone for each category. Once the first analyst completed the analysis, another analyst reviewed the first analyst’s decisions. Any discrepancies in the coding were resolved through discussion between the analysts. Additionally, we interviewed officials at selected installations—including, but not limited to, the installation commander; the senior enlisted advisor; and those responsible for management of transportation, base design and layout, medical facilities, dining facilities, MWR programs and facilities, and housing—to discuss their knowledge of any access issues experienced by junior enlisted servicemembers at their respective installations. In addition, we obtained information from DOD and military service officials about the department’s efforts to share initiatives and other good practices within and across the services and DOD. We compared the results of our analysis of the data-collection and other information-sharing mechanisms we obtained from department officials with Standards for Internal Control in the Federal Government. We conducted this performance audit from August 2014 to May 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. During our visits to four installations—Naval Station Norfolk, Fort Campbell, Camp Lejeune, and Joint Base San Antonio—the participants in our discussion groups provided a range of comments—both positive and negative—about the services and programs on their installations. We captured the comments that junior and senior enlisted servicemembers provided in our discussion groups and categorized them into 13 main categories related to on-base services and programs and other feedback mechanisms that are available to installation leadership. As noted, the participants in our groups provided some positive input about the services and programs, but, based on our analysis of the discussion group comments, we identified specific areas where junior and senior enlisted servicemembers in our discussion groups most frequently expressed concerns about access issues. Those areas included: (1) dining facilities, (2) medical care, and (3) transportation. Participants in our discussion groups identified concerns about on-base dining facilities with regard to (1) access to the dining facility in terms of parking around the facility; distance to the facility; or dining facility closures, and (2) the hours of operation of the dining facilities, among other things. Specifically, participants in 2 of the 11 junior enlisted discussion groups at one installation we visited stated that they either (1) chose not to go to the dining facility or (2) could not find parking in the dining facility parking lot due to overcrowding by people using the lot while visiting an adjacent building. 
In addition, participants in one junior enlisted group and one senior enlisted group at that same installation stated that for individuals who work on the aviation side of the installation, it takes the entire lunch hour to get to the dining facility, eat, and return to work, which does not leave any time to take care of other tasks during lunch, for example, going to the post office. Another participant stated that it could take the whole lunch hour to drive to the dining facility, and if there is a line during peak lunch times, it can take up to 20 minutes to get in, so they end up missing lunch. Participants in the senior enlisted servicemember discussion group also stated that junior enlisted servicemembers often do not get a set lunch period and grab food when they can, which is not always conducive to the dining facility’s schedule. One junior enlisted group stated that, for some, lunch may be the last meal of the day that they can get from the on-base dining facility, and therefore any meal a servicemember eats later must be paid for out of pocket. Participants added that the dining facility on the installation closes at 5:30 p.m. In addition, they stated the only food options on the aviation side of the installation are two fast-food restaurants. We discussed some concerns identified by junior enlisted servicemembers with the manager of the installation’s dining facility. The manager stated there used to be two dining facilities located on the installation; however, the facility on the aviation side of the installation closed approximately 11 to 12 years ago. As a result, the manager stated that all servicemembers residing or working on the aviation side of the installation receive basic allowance for subsistence. Additionally, the manager stated that wait times to access the facility during lunchtime are negligible, typically 2 to 3 minutes, due to checking identification at the entrance.
The manager stated that wait times are slightly extended on days when special celebratory meals are served and other groups—for example, civilians and retirees—are invited to eat at the dining facility. At another installation we visited, participants in one junior enlisted discussion group raised concerns about the hours of operation for that installation’s only dining facility. In that group, a servicemember stated that his work hours had to be adjusted to account for the dining facility’s schedule and to allow time for lunch; however, if work ran late, he had to rush to the dining facility to get dinner before it closed at 6:30 p.m. Another junior enlisted servicemember who did not have his own vehicle and remained on the installation during the holidays stated the dining facility was closed on Christmas Day, and as a result, his only food that day was a few snack items he had in his dorm room. One junior enlisted servicemember stated that he often works late and is unable to get to the dining facility before it closes for the evening and, as a result, eats fast food from one of the two fast-food restaurants nearby. The participants in the junior enlisted group further stated they had been working through their Dorm Council—a local council that represents unaccompanied junior enlisted servicemembers residing in on-base housing and reports to base leadership—for years to change the dining facility’s hours of operation and, as of the time of our visit, their efforts had been unsuccessful. Installation leadership stated that they considered closing the installation’s single dining facility due to sequestration in fiscal year 2013. However, they further stated that keeping the dining facility open was important because of the installation’s technical-school student population, and a decision was made to keep it operational until additional funds became available to maintain it.
Officials also stated that, during holiday periods, dining facility hours are reduced, but some establishments are kept open to provide service to servicemembers and civilians. Participants in 9 of 11 junior enlisted discussion groups raised concerns about the hours of operation for the installation’s dining facilities. A junior enlisted servicemember stated that the hours of operation for the dining facilities were not conducive for servicemembers who did not work a traditional 9:00 a.m. to 5:00 p.m. work shift. Two junior enlisted servicemembers stated that physical training for some units starts at 6:00 a.m. and servicemembers sometimes work late into the evenings, and the dining facilities are closed by the time they get off work. Another junior enlisted servicemember stated that he works until 6:00 p.m. and, therefore, cannot make it to the dining facility before it closes. The servicemember stated that, as a result, he either has to buy his own food to cook or go out to eat. Participants in one junior enlisted discussion group stated that the lines at the dining facility are so long that it is easier to go home and cook. Also at that installation, a participant in the senior enlisted discussion group stated that when a unit’s dining facility was shut down due to deployment, the servicemembers from that unit who remained at the installation had to walk a good distance to get to another facility. He added that the problem seems to happen often as units deploy. When we discussed potential access issues with installation leadership and other program officials from that installation, one official stated that the number of dining facilities at that installation was reduced from 12 to 8, with sequestration driving some of the facility closures.
Officials stated that there is only enough funding to staff kitchens at five of the eight remaining dining facilities; therefore, some dining facilities are closed at some times during the day, and units colocated with a closed dining facility may have to find another facility at which to eat. The officials also told us that when servicemembers are preparing for deployment at odd hours, there is always a temporary dining facility set up at the airfield to make sure the servicemembers can eat before flying out. They further stated that, subsequent to a study on government meals provided to trainees, the installation has taken steps to reduce the number of servicemembers receiving basic allowance for subsistence, further necessitating that junior enlisted servicemembers eat in the dining facilities. Finally, program officials stated that unit leadership should afford servicemembers time to eat, and if there is an issue with the hours of operation or with servicemembers getting access to meals, it is up to unit leadership to figure it out; otherwise, it is a failure of leadership if the servicemembers are not getting their meals. Participants in our discussion groups identified challenges with (1) making medical appointments, (2) long wait times for acute care, and (3) lengthy waits to obtain referrals or specialty appointments, even though DOD policy gives active-duty servicemembers high priority for medical care. More specifically, 6 of the 11 junior enlisted and 5 of the 6 senior enlisted discussion groups reported having problems with, or knowledge of problems with, scheduling medical appointments in a timely manner. For example, at one installation we visited, one junior enlisted discussion group stated that it can take up to a week to make an appointment through the installation’s designated medical appointment booking system because, for example, caller wait times make it difficult to get through.
Participants in this group also reported that when the appointment booking system did not work, they gave up and instead went to the military urgent care clinic to receive care. Participants in the senior enlisted discussion group at that same installation stated that the best way to get medical care is to call an ambulance because the hospital will see them more quickly. In contrast, medical leadership from the installation stated they spend a lot of time making sure access to care is consistent across the installation and across ranks. They added that if a servicemember shows up at urgent care and is not having a life-threatening situation, he or she may be told to schedule an appointment for the next day with the assigned provider. However, leadership further stated that the servicemember may be upset that the appointment could not be made for the same day. Participants in 6 of 11 junior enlisted and 5 of 6 senior enlisted discussion groups discussed issues with receiving acute care in a timely manner. Specifically, in one junior enlisted discussion group, participants stated that going to the clinic’s urgent care can take all day and it may take from morning until 2:00 p.m. to get an appointment. In one senior enlisted discussion group, a participant stated that one of his junior enlisted servicemembers was experiencing symptoms of a heart attack and was having trouble being seen by the clinic. That senior enlisted servicemember stated that his junior enlisted servicemember was told to go to the emergency room and was then put on the “endless cycle” of calling the clinic to go to urgent care, before being told by a medical official that he was not having any issues. The senior enlisted participant added that only the senior enlisted servicemembers see action taken, and the junior enlisted servicemembers do not receive respect from, for example, medical professionals on the installation.
Similarly, one junior enlisted discussion group stated that one of the clinics does not have enough personnel resources to take care of everyone and to take the time necessary with each servicemember to figure out what is wrong. Also in that discussion group, one junior enlisted servicemember stated that he went to the emergency room one time because he was feeling dizzy, but the emergency room released him and told him to go to sick call the next day. He added that he did not understand why servicemembers could not be referred to another clinic, even when there was no availability at their designated clinic. The installation’s medical officials stated that hospital leadership goes out to the clinics to see how many servicemembers are waiting at any one time for care. They added that servicemembers may wait hours for sick call at the individual clinics, but that surgeons who are attached to the units should be utilized to ease some of the backlog at the clinics. Officials further stated the installation is working to realign where and how servicemembers’ clinic assignments are made, in light of the current downsizing and moving of units around the installation. In addition, participants from 5 of the 11 junior enlisted and 4 of the 6 senior enlisted discussion groups described issues with receiving referrals or specialty medical care. For example, participants in one junior enlisted discussion group told us that getting an appointment to obtain specialty care can take months. In addition, a participant in one junior enlisted discussion group stated that he broke his ankle and went to the installation’s medical clinic for care. The clinic did not give him a cast and, instead, only gave him a splint for the broken ankle. He then had to wait to get an appointment at the nearby military hospital to get a cast, which involved additional delays.
By the time he was able to get an appointment, his ankle had already started to heal improperly, and the doctor had to go in and re-break his ankle to set it with hardware. In one junior enlisted discussion group, we spoke with a junior enlisted servicemember who stated he had issues getting physical therapy for over a year for an injury he sustained while deployed. The servicemember was supposed to see the physical therapist for three months before seeing the surgeon. He indicated that the physical therapy was making his injury worse and had been told so by the physical therapist, but he was repeatedly sent back and forth between the surgeon and physical therapist. As a result of this injury, the servicemember stated that he is unable to pass a physical training test, cannot deploy, and cannot be promoted. Further, according to this servicemember, the installation would not refer him to see a specialist. When we asked hospital leadership about this specific example, they stated that they were unaware of his circumstances. Participants stated they had access issues due to the configuration of the installation, limited on-base transportation, or nonownership of personal vehicles, any of which may have inhibited access to on-base services and programs. For example, one installation we visited had an official on-base shuttle; but participants in our discussion groups stated that the shuttle was viewed as being more for the students and trainees on the installation than for the permanent party servicemembers. In one junior enlisted discussion group, a servicemember who did not own a vehicle stated that she gets up and walks 45 minutes by herself at 4:00 a.m. to get to work in time for the start of her shift. She stated that although she could call for a ride from her leadership, she does not like asking because she felt it placed a burden on unit leadership and made her a nuisance.
A servicemember in a senior enlisted discussion group from that same installation stated he has four junior enlisted servicemembers who do not have vehicles and reside approximately two miles away from where their unit conducts its physical training, which results in a 25-minute walk each way for these servicemembers. He said there is an on-base shuttle, but the shuttle times are very inconvenient, with limited service on weekends and holidays. Senior leadership officials at that installation stated that the barracks where their junior enlisted servicemembers live are not colocated with their work station, which presents challenges for those servicemembers who do not have their own means of transportation. Those officials further stated that it would make more sense for their junior enlisted servicemembers to be housed in the barracks across the street from their work station, but those barracks are used by other units. The officials considered this a significant problem for their servicemembers, particularly those without vehicles. One junior enlisted servicemember at an installation we visited stated that the availability of transportation provided by her unit during work hours was pretty good, but during the evenings and weekends she has to take taxis to get anywhere she needs to go. Similarly, participants from one junior enlisted discussion group at another installation said that servicemembers who do not have their own vehicles either have to walk or ask for a ride from someone—a friend or their unit. In addition, participants in discussion groups from two installations stated that using a taxi is expensive and neither installation has an on-base shuttle system. However, at one installation, participants in our senior enlisted discussion group stated that units provide transportation assistance to their junior enlisted servicemembers using unit vehicles during the work week (Monday through Friday).
According to military service leadership, installations have made attempts to rectify the transportation issue, and some installations provide transportation such as on-base shuttles, buses, and unit-provided vehicles. Additionally, according to officials at one installation, the installation has an agreement for the city bus to come onto the installation and stop at various locations on the installation’s perimeter. However, officials reported that there is limited utilization of the city bus system, and the on-base shuttle was discontinued over 15 years ago. Further, officials stated that the installation is trying to reconfigure itself to help ensure that on-base housing is part of a 5- to 10-minute walkability plan. Officials at one of the other installations we visited stated there are a few on-base shuttle buses, and unit vehicles are available to junior enlisted servicemembers to assist with transportation needs. However, they stated, unit vehicles are not typically available, as they are used by more senior officials for official duties such as attending meetings. In addition to the above-named contact, Vincent L. Balloon, Assistant Director; James Ashley; Mary Jo LaCasse; Michael Silver; Justin Snover; Sabrina Streagle; Jennifer Weber; and Erik Wilkins-McKee made key contributions to this report. Defense Health Care: US Family Health Plan is Duplicative and Should be Eliminated. GAO-14-684. Washington, D.C.: July 31, 2014. DOD Health Care: Domestic Health Care for Female Servicemembers. GAO-13-205. Washington, D.C.: January 29, 2013. Military Personnel: DOD Has Taken Steps to Meet the Health Needs of Deployed Servicewomen, but Actions Are Needed to Enhance Care for Sexual Assault Victims. GAO-13-182. Washington, D.C.: January 29, 2013. Questions for the Record Related to Military Compensation. GAO-10-803R. Washington, D.C.: April 1, 2010.
Military Personnel: Military and Civilian Pay Comparisons Present Challenges and Are One of Many Tools in Assessing Compensation. GAO-10-561R. Washington, D.C.: April 1, 2010. Military Personnel: DOD Needs to Establish a Strategy and Improve Transparency over Reserve and National Guard Compensation to Manage Significant Growth in Cost. GAO-07-828. Washington, D.C.: June 20, 2007. Military Personnel: DOD Needs to Improve the Transparency and Reassess the Reasonableness, Appropriateness, Affordability, and Sustainability of Its Military Compensation System. GAO-05-798. Washington, D.C.: July 19, 2005. Policy and Criteria Used to Assess Potential Commissary Store Closures. GAO-05-470R. Washington, D.C.: April 26, 2005. Defense Management: Proposed Lodging Policy May Lead to Improvements, but More Actions Are Required. GAO-02-351. Washington, D.C.: March 18, 2002. Morale, Welfare, and Recreation: Information on Military Golf Activities. GAO/NSIAD-94-199FS. Washington, D.C.: July 19, 1994.

Junior enlisted servicemembers constitute more than half of DOD's enlisted force. To sustain the force and help ensure continued growth in all ranks, DOD provides a wide array of services and programs on its military bases, including dining facilities, fitness centers, and medical clinics. Senate Report 113-176 included a provision for GAO to review junior enlisted servicemember access to services and programs on military bases. This report evaluates (1) the extent to which DOD's policies and procedures for on-base services and programs consider access by junior enlisted and what factors influence their implementation; and (2) the extent to which DOD and the military services collect and share information and data on junior enlisted access to on-base services to identify any potential access issues.
GAO evaluated DOD, military service, and base policies and data-collection tools; conducted 17 nongeneralizable discussion groups with junior and senior enlisted servicemembers randomly selected at four bases identified to represent a range of sizes and locations; and interviewed officials from OSD, the services, and four bases. Department of Defense (DOD) policies and procedures at multiple levels—the Office of the Secretary of Defense (OSD), the military services, and four bases GAO visited—govern on-base services and programs and establish access for all servicemembers, including junior enlisted, who are in the early stages of their military careers and in the first four of nine pay grades of the military compensation system. Further, implementation is influenced by several factors. GAO found that policies referenced the entire active-duty, enlisted, or base populations, and did not distinguish between specific groups—such as by pay grade or rank. For example, Defense Health Agency policy regarding medical care includes provisions for all active-duty servicemembers, of which junior enlisted servicemembers are a subset, as part of a priority system for access to medical care. Further, at four bases GAO visited, implementation of policies and procedures was influenced by factors such as available budgetary resources and low usage of services or programs. Base officials stated that budget cuts and sequestration diminished their ability to provide services and programs at a level that met current needs of all servicemembers. DOD's efforts to collect data on on-base services and programs do not address junior enlisted servicemember access issues, including those identified in GAO-led discussion groups. Further, DOD has mechanisms for sharing information across the department on initiatives and other good practices, but these also do not focus on junior enlisted servicemember access issues.
In all 17 discussion groups, participants provided comments—positive and negative—on access to the following: (1) dining facilities, (2) medical care, and (3) transportation. For example, 6 of 11 junior enlisted discussion groups reported having problems scheduling medical appointments in a timely manner. However, GAO found that formal data-collection mechanisms used by DOD, the military services, and four bases—including surveys, utilization rate data, and town halls—did not fully capture potential access issues related to these types of concerns because they did not include (1) direct questions on access to all services and programs, (2) opportunities to follow up on reasons for dissatisfaction, or (3) options for open-ended responses. For example, DOD's Status of Forces Survey of Active Duty Members asks about satisfaction with hours of operation of the commissary, but does not ask about satisfaction with or access to most other services and programs. According to participants in 9 of 17 discussion groups, feedback from informal mechanisms, such as discussions with supervisors where access may be discussed, may not be relayed to decision makers or acted upon once received. Finally, DOD's information-sharing methods include a number of policy boards with representatives from the services, but the efforts are broader than identifying or addressing issues specific to junior enlisted servicemembers. DOD officials stated that they believe access is not a widespread problem and satisfaction questions and other efforts are sufficient to obtain needed data on access. Without reviewing and considering existing data-collection and information-sharing mechanisms and taking action, DOD is missing opportunities to enhance its efforts to provide services and programs that encourage retention and contribute to DOD's goal of a trained and ready force.
GAO recommends that DOD (1) review data-collection mechanisms and consider revisions related to junior enlisted access to services, and take action as needed based on the information, and (2) review existing methods of information sharing and consider adding mechanisms to increase visibility over junior enlisted personnel's access to services. DOD concurred with both recommendations.
LANL is organized in a matrix that allows programs to draw on scientific, engineering, and experimental capabilities from throughout the laboratory. Programs are funded and managed out of LANL’s 15 directorates, such as Weapons Physics or Chemistry, Life and Earth Sciences, but LANL’s scientists and engineers work in 64 technical divisions that are discipline specific. These technical divisions, such as Applied Physics or Biology, accomplish the work of the laboratory and support its operations. Program managers in the directorates fund work in the technical divisions in order to meet milestones determined with NNSA or other work sponsors. To this end, employees in the technical divisions may support multiple programs with their work and may be called on to provide specific expertise to different programs. LANL’s facilities are managed by its directorates and provide specific capabilities, such as high-performance computers, that LANL employees use for their work, as well as general office and meeting space. When LANL was originally sited and constructed during the Manhattan Project, according to laboratory officials, its infrastructure was intentionally spread out as a safety and security precaution. What was once a benefit now makes LANL’s management and operation complex. Spread across 40 square miles and including 155 miles of roads, 130 miles of electrical transmission lines, 90 miles of gas transmission lines, and 9.4 million square feet of facility space, LANL employs 12,000 to 14,000 people every day. LANL’s approximately 2,700 structures are grouped together across the laboratory into 49 major technical areas that include major scientific and experimental facilities, environmental cleanup areas, and waste management locations (see fig. 1).
However spread out the technical areas are, LANL considers less than 400 acres of its site to be highly suited for development because of the difficulty of developing the site’s steep slopes and because of the need to maintain safety and security buffers around specific work activities. The most heavily developed area of the laboratory is Technical Area-3, LANL’s core scientific and administrative area, which accounts for half of the laboratory’s employees and total floor space. While individual scientific and engineering directorates within LANL are responsible for managing and securing its facilities, multiple programs across these organizations share facilities to accomplish their objectives. For example, LANL’s Chemistry and Metallurgy Research facility is managed by LANL’s Chemistry, Life and Earth Sciences directorate. The facility, however, is occupied by over 500 employees to support a number of programs across LANL that require its analytical chemistry and materials property testing capabilities (see fig. 2). These programs include manufacturing nuclear weapon pits, experimenting with nuclear fuels for civilian energy production, and producing nuclear heat sources for National Aeronautics and Space Administration missions. LANL’s shared facilities are protected at different levels depending on the type and amount of classified resources they house or store. DOE Manual 470.4-2, Physical Protection, defines these different levels and the types of safeguards that must be in place to ensure that classified resources are adequately protected. Table 1 summarizes these security levels and appropriate safeguards from lowest to highest level of security. To determine the overall effectiveness of LANL’s implementation of DOE security requirements and the laboratory’s security performance, two DOE organizations periodically conduct independent reviews. DOE’s Office of Independent Oversight conducts assessments, typically every 18 months.
These assessments identify the weaknesses of LANL’s security program and produce findings that laboratory officials must take action to correct. NNSA’s Los Alamos Site Office is also required to conduct surveys annually. These surveys are based on observations of performance, including compliance with DOE and NNSA security directives. While the two types of reviews categorize the topics and subtopics they cover differently, the reviews overlap substantially. They both address security program management, protective forces, physical security, classified information protection, control and accountability of nuclear materials, personnel security, and cyber security. Furthermore, they both use a color-coding system to rate each area of review as either Green (satisfactory or effective), Yellow (marginal or needs improvement), or Red (unsatisfactory or significant weakness). The results of these reviews affect LANS’s ability to earn its performance-based award fee for successful management and operation of LANL. Under the contract between LANS and NNSA for the management and operation of LANL, NNSA is to establish the work to be accomplished by LANL, set requirements to be met, and provide performance direction for what NNSA wants in each of its programs. NNSA does this by annually issuing a performance evaluation plan that documents the process and associated performance objectives, performance incentives, award term incentives, and associated measures and targets for evaluating LANS’s performance. In the performance evaluation plans for fiscal years 2007 and 2008, performance objectives and award fee incentives were specifically provided for security performance. LANL’s contract requires the development of a Contractor Assurance System to increase accountability and improve management and performance. 
The Contractor Assurance System, according to the LANL official responsible for its implementation, is an integrated performance-based management system that is designed to include independent assessment and that is available as a tool for federal oversight. Notwithstanding the development of the Contractor Assurance System, under the contract with LANS, NNSA preserves its right to conduct direct oversight, particularly in the area of security. The Secretary of Energy has authority under 10 C.F.R. § 824.4(b) of DOE’s Procedural Rules for the Assessment of Civil Penalties for Classified Information Security Violations to issue compliance orders that direct management and operating contractors to take specific corrective actions to remediate deficiencies that contributed to security violations regarding classified information. On July 12, 2007, the Secretary of Energy issued a compliance order to LANS as a result of the security incident uncovered in October 2006 when a subcontractor employee removed classified information from LANL without authorization. Analysis of the incident identified numerous breakdowns in LANL’s classified information protection program and concluded that these breakdowns were caused, in part, by poor security practices. The Compliance Order directs LANS to take comprehensive steps to ensure that it identifies and addresses critical classified information and cyber security deficiencies at LANL. These steps must be completed by December 2008. Violation of the Compliance Order would subject LANS to civil penalties of up to $100,000 per violation per day until compliance is reached. LANL has three major program categories—Nuclear Weapons Science, Threat Reduction Science and Support, and Fundamental Science and Energy. Nuclear Weapons Science programs ensure the safety, performance, and reliability of the U.S. nuclear deterrent. Threat Reduction Science and Support programs support nonproliferation and counterproliferation efforts. 
Fundamental Science and Energy programs address other national security concerns, particularly energy security, and provide basic scientific capabilities that support laboratory missions. LANL has two support program categories—Environmental Programs and Safeguards and Security. Environmental Programs address the remediation and disposition of waste at LANL. Safeguards and Security programs provide LANL with physical and cyber security protection. In addition to activities across these program categories that are supported by DOE and NNSA, LANL conducts millions of dollars in work for other federal agencies on specific research projects. LANL’s primary mission is to ensure the safety, performance, and reliability of nuclear weapons in the nation’s stockpile without performing underground nuclear weapon tests. It is responsible for the design, evaluation, annual assessment, and certification of the United States’ W76 and W88 submarine-launched ballistic missile warheads, the W78 intercontinental ballistic missile warhead, and the B61 nuclear bomb and works in cooperation with NNSA’s other nuclear weapons design laboratories and production plants. Because the United States stopped conducting underground nuclear weapon tests in 1992, LANL weapons scientists and engineers are involved in hundreds of research projects in programs aimed at developing strong physics modeling and predictive capabilities that provide information about nuclear weapons’ performance. Of particular focus since 2001 has been the development of a common methodology, known as “Quantification of Margins and Uncertainties,” for quantifying critical design and engineering factors in the operation of a nuclear weapon and the margins by which these factors exceed the thresholds below which the weapons could fail to perform as designed. Furthermore, LANL is involved in two ongoing life extension programs, for the W76 and B61, which are efforts to refurbish aging weapons and extend their lifetimes for 20 to 30 years. 
In addition, LANL builds, operates, and maintains the infrastructure necessary to carry out its nuclear weapons mission and to support other laboratory missions. In fiscal year 2007, LANL conducted work on 41 Nuclear Weapons Science programs supported by about 3,400 FTEs and with a budget from NNSA of about $1.5 billion, which represented over half of LANL’s total budget and approximately 87 percent of the funds received from NNSA for all of LANL’s major program categories. Appendix II provides additional detail on LANL’s Nuclear Weapons Science programs. Out of the $1.5 billion total budget for LANL’s Nuclear Weapons Science programs, nearly $560 million—or 37 percent—was budgeted for the operation of the facilities that support these programs, as well as new line item construction projects. In addition, the following five other programs together represent another 45 percent of LANL’s Nuclear Weapons Science budget: Pit Manufacturing and Certification. Since 2001 LANL has been working to reconstitute the nation’s capability to manufacture and certify pits, which was lost when DOE’s Rocky Flats Plant near Denver, Colorado, closed in 1989. This program re-establishes an immediate capability to manufacture pits in support of the nuclear weapons stockpile, plans for long-term pit manufacturing capability, and manufactures specific quantities of W88 pits. In fiscal year 2007—the year LANL delivered the first war reserve W88 pits for the nation’s stockpile—the budget for Pit Manufacturing and Certification was $226.9 million, and the program was supported by 599 FTEs. Advanced Simulation and Computing. 
To compensate in part for the loss of underground nuclear testing as a means for gathering data on nuclear weapon performance, a program of advanced simulation and computing—hardware, software, and code—was implemented to provide predictive computer models, supported by aboveground experimental data and archived data from past underground nuclear tests, that simulate nuclear weapon performance. In fiscal year 2007, the budget for Advanced Simulation and Computing was $202.5 million, and the program was supported by 446 FTEs. Stockpile Services. This program supports research, development, and production work that is applicable to multiple nuclear weapon systems rather than a specific weapon system. For example, scientists may conduct basic research on critical factors of nuclear weapon operations in this program or run tests on components shared by nuclear weapon systems. In fiscal year 2007, the budget for Stockpile Services was $140.7 million, and the program was supported by 361 FTEs. Stockpile Systems. For each weapon type for which LANL is responsible, this program supports routine maintenance; periodic repair; replacement of components; and surveillance testing to assure the weapon type’s continued safety, security, and reliability. In fiscal year 2007, the budget for Stockpile Systems was $67.4 million, and the program was supported by 162 FTEs. Life Extension Program. This program extends the lifetimes of warheads or the components of these warheads to ensure that they continue to perform as designed. LANL is currently focused on programs to extend the lifetimes of the B61 and W76 weapon types by 20 and 30 years, respectively. In fiscal year 2007, the budget for LANL’s life extension programs was $44.1 million, and the programs were supported by 120 FTEs. 
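The budget shares cited above can be cross-checked arithmetically. The short sketch below uses only figures stated in this report; the $1.5 billion Nuclear Weapons Science total and the $560 million facilities figure are approximate, so the computed shares are rounded.

```python
# Sanity check of the Nuclear Weapons Science budget shares reported above.
# All figures are in millions of dollars for fiscal year 2007 and come from
# this report; the totals are approximate ("about $1.5 billion", "nearly
# $560 million").
total_nws_budget = 1500.0   # total Nuclear Weapons Science budget
facilities = 560.0          # facilities operations and line item construction

five_programs = {
    "Pit Manufacturing and Certification": 226.9,
    "Advanced Simulation and Computing": 202.5,
    "Stockpile Services": 140.7,
    "Stockpile Systems": 67.4,
    "Life Extension Program": 44.1,
}

facilities_share = facilities / total_nws_budget
five_program_share = sum(five_programs.values()) / total_nws_budget

print(f"Facilities share:   {facilities_share:.0%}")    # ~37 percent
print(f"Five-program share: {five_program_share:.0%}")  # ~45 percent
```

Together, facilities and these five programs account for roughly 82 percent of the Nuclear Weapons Science budget, consistent with the 37 and 45 percent figures in the text.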
LANL’s directorate for Weapons Programs is responsible for the conduct of these programs and carries them out primarily through three associate directorates—Weapons Physics, Weapons Engineering, and Stockpile Manufacturing and Support—as well as an office of Weapons Infrastructure. These organizations draw upon scientific, engineering, and experimental capabilities from throughout the laboratory to answer specific points of inquiry and to solve problems related to the nuclear weapons stockpile. For example, the Weapons Physics associate directorate has identified 10 key capabilities that it believes are necessary to ensure that it can execute its weapons program work, many of which also aid scientific work outside of Nuclear Weapons Science programs. These capabilities, which reside in technical organizations outside of the Weapons Program Directorate, include expertise in high-performance computing, dynamic model validation, and radiochemistry. This matrixed approach, according to LANL officials, allows LANL’s technical staff to work among peers in their respective fields and to apply their expertise to Nuclear Weapons Science programs as the need arises. In addition to helping ensure the safety and reliability of the U.S. nuclear deterrent, LANL applies science and technology to reduce the global threat of weapons of mass destruction (WMD), the proliferation of WMD, and terrorism. LANL pursues this mission through programs in three areas. First, the laboratory’s nuclear nonproliferation programs, primarily funded by NNSA, focus on ways to address nuclear and radiological threats domestically and internationally. Second, LANL scientists familiar with WMD support the work of the Intelligence Community. Third, LANL conducts research programs supported by federal agencies, such as the Departments of Defense and Homeland Security, that provide foundational science and technology solutions to defeat chemical, radiological, biological, and nuclear WMD. 
Programs in these latter two areas are conducted as work for other federal agencies and are discussed in more detail in a subsequent section of this report. In fiscal year 2007, NNSA supported 12 Threat Reduction Science and Support nuclear nonproliferation programs at LANL that relied on over 480 FTEs and had a budget of about $225 million. Of these 12 programs, 9 were budgeted at over $1 million each in fiscal year 2007. Appendix III provides additional detail on these Threat Reduction Science and Support programs. Over 60 percent of the budget NNSA provided to support Threat Reduction Science and Support programs was for two programs: Nonproliferation and Verification Research and Development. This program conducts scientific research and development and provides monitoring, sensing, and measurement technologies to observe the earth from space-based satellites and produces and updates data for ground-based systems in order to detect banned nuclear explosions. In particular, LANL produces electromagnetic pulse and radiation sensors that are integrated into U.S. Air Force satellites and develops algorithms used to process remote sensing data. In fiscal year 2007, the budget for Nonproliferation and Verification Research and Development was $95.5 million, and the program was supported by 254 FTEs. U.S. Surplus Fissile Materials Disposition. NNSA funds efforts to dispose of the country’s surplus plutonium and highly enriched uranium. LANL supports plutonium disposition efforts by developing the processing technologies that will be used in a facility currently planned for construction at the Savannah River Site in South Carolina. This facility will disassemble surplus nuclear weapon pits and convert the plutonium in them into a powder form that can later be fabricated into a fuel useable in commercial nuclear reactors. In fiscal year 2007, LANL’s budget for this plutonium disposition work was $43 million, and the work was supported by 117 FTEs. 
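The "over 60 percent" figure above can be verified from the two program budgets stated in this report; the $225 million total is approximate ("about $225 million"), so the result is an estimate.

```python
# Check that the two programs above account for over 60 percent of the
# roughly $225 million NNSA budget for Threat Reduction Science and Support
# nuclear nonproliferation programs (all figures in millions, FY 2007).
threat_reduction_budget = 225.0  # approximate total from this report
nonproliferation_rd = 95.5       # Nonproliferation and Verification R&D
fissile_disposition = 43.0       # U.S. Surplus Fissile Materials Disposition

share = (nonproliferation_rd + fissile_disposition) / threat_reduction_budget
print(f"Two-program share: {share:.0%}")  # just over 60 percent
```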
LANL’s Directorate for Threat Reduction is responsible for conducting the laboratory’s Threat Reduction Science and Support programs. Those programs primarily supported by NNSA are carried out through the directorate’s Nuclear Nonproliferation program office. This office employs scientific, engineering, and experimental capabilities from throughout the laboratory to accomplish program missions. According to LANL officials, these capabilities, such as nuclear device design and radiochemistry, were initially developed to support Nuclear Weapons Science missions but are now being leveraged to support Threat Reduction Science and Support missions. In turn, these officials told us results from Threat Reduction Science and Support programs provide feedback to Nuclear Weapons Science programs. For example, information on techniques to disarm nuclear weapons that are learned in threat reduction work can be used to improve the safety and security of the U.S. nuclear weapons stockpile. As a national security science laboratory, LANL’s mission also includes the development and application of science and technology to solve emerging national security challenges beyond those presented by WMD. LANL’s Fundamental Science and Energy programs are managed by the laboratory’s Science, Technology and Engineering Directorate, and funds to support these programs come from multiple offices within DOE, as well as other federal agencies. In fiscal year 2007, DOE supported 40 programs focusing on energy security—specifically, fossil energy, civilian nuclear energy, alternative energy, and fusion. In addition, DOE supported basic scientific work in such areas as advanced computing, biology, environmental science, nuclear physics, and materials science, as well as Laboratory-Directed Research and Development projects. In total, DOE provided $151 million for Fundamental Science and Energy programs that supported over 380 FTEs. 
Appendix IV describes, in detail, LANL’s DOE-supported Fundamental Science and Energy programs. Work for other federal agencies and Laboratory-Directed Research and Development projects in Fundamental Science and Energy are discussed in a subsequent section of this report. LANL officials told us the laboratory’s Fundamental Science and Energy programs, in conjunction with its Nuclear Weapons Science and Threat Reduction Science and Support programs, provide an integrated approach to national security science because these programs leverage one another’s scientific, engineering, and experimental capabilities. For example, according to a senior LANL Science, Technology and Engineering official, LANL’s Nuclear Weapons Science researchers developed expertise in underground work, such as tunnel boring, to facilitate underground nuclear testing, and this expertise has been translated for use in fossil energy activities. Specifically, the scientists and engineers responsible for the nuclear weapon test readiness program work out of the Fundamental Science and Energy organization. Similarly, capabilities in high-performance computing and simulation utilized by Nuclear Weapons Science programs have been applied to many other national security and Fundamental Science and Energy applications. Furthermore, a senior LANL Nuclear Weapons Science official told us that 7 of the 10 key capabilities identified for Weapons Physics work, such as high-performance computing, computational math and physics, and weapons material properties and characterization, are managed out of the same directorate responsible for LANL’s Fundamental Science and Energy programs. 
More than one-quarter of LANL’s career employees work in more than one of LANL’s major program areas, and laboratory officials told us a substantial number of employees develop the critical skills needed for the Nuclear Weapons Science and Threat Reduction Science and Support programs by first working in Fundamental Science and Energy programs. LANL’s Environmental Programs support the laboratory’s scientific work by addressing legacy contamination, legacy waste disposition, and new waste at the site produced as a function of programmatic work. This waste is categorized as either legacy—generated before 1998—or newly generated. DOE’s Office of Environmental Management provides funding for activities to remediate legacy contaminated sites and to dispose of legacy waste, and NNSA provides funding for activities to dispose of newly generated waste. LANL charges program organizations for disposition of newly generated waste, providing an additional stream of funds to support Environmental Programs. In fiscal year 2007, DOE’s Office of Environmental Management supported LANL’s legacy remediation and waste activities with a budget of over $146 million that supported about 325 FTEs. Costs and FTEs associated with processing newly generated waste and managing and operating the facilities that process them are paid for by the Nuclear Weapons Science facilities and operations programs discussed above. This work generally amounts to $40 million per year, and 87 FTEs support newly generated waste-processing activities. LANL’s legacy contamination remediation activities focus on remediation of contaminated sites and decontamination and decommissioning of contaminated structures. LANL must complete its work on contaminated sites by 2015 to comply with a Consent Order from the state of New Mexico’s Environment Department to remediate soil and groundwater contamination. 
According to the LANL official responsible for this work, as of May 2007, LANL had cleaned up 1,434 of the 2,194 contaminated sites; however, the remaining sites are more difficult to address. This LANL official estimated that between 2007 and 2015, remediation of all of the sites will cost approximately $900 million. LANL’s newly generated waste activities focus on liquid and solid waste processing and disposal. Radioactive liquid waste at LANL is processed at the laboratory’s Radioactive Liquid Waste Treatment facility, a building that is 45 years old. Upgrades to the treatment facility are currently under way, and the upgraded facility is expected to be operational by 2010. Solid waste—typically comprising discarded rags, tools, equipment, soils, and other solid materials contaminated by man-made radioactive materials—is processed at LANL’s Technical Area-54 Area G Disposal Site. Engineering and design work has begun on a replacement facility for processing solid waste, and the facility is expected to be operational in 2014. LANL’s Safeguards and Security program aims to provide the laboratory with protection measures that are consistent with the threats and risks detailed in the laboratory’s Site Safeguards and Security Plan. This plan, which NNSA reviews annually, details levels of protection that must be provided in different areas of the laboratory to ensure secure programmatic operations and covers such topics as protective forces, site perimeter security, accountability and control over special nuclear material, protection of hard copy and electronic classified information, alarms, intrusion detection systems, identification badges, and security clearances. In fiscal year 2007, $140 million and over 900 FTEs supported Safeguards and Security operations. In addition, construction projects provide new and upgraded security protection at key areas. Specifically, an additional $48 million was budgeted to support two construction projects in fiscal year 2007. 
The first is the second phase of the Nuclear Materials Safeguards and Security Upgrade project, which focuses on providing upgraded perimeter protection for the facility at LANL where pits are manufactured. The second project focuses on creating a more secure entry point for vehicle traffic at LANL by establishing access control stations and altering traffic patterns on public roads (see fig. 3). While LANL employs security professionals, the technical divisions, in practice, have been responsible for securing their own classified resources by operating their own vault-type rooms, classified computer networks, and classified work areas. These divisions also operated accountability systems for maintaining control over classified resources. Professional security staff advise technical divisions on security requirements and check on whether established practices are appropriately implemented and managed. More recently, security professionals have been deployed to technical divisions to assist directly with security operations, and according to LANL officials, classified resource protection has been centralized to a greater extent through such actions as consolidating storage of all accountable classified documents into one location. According to LANL, the laboratory’s budget for work for others projects in fiscal year 2007 was $462.4 million—or about 17 percent of the laboratory’s total budgetary resources—and these projects relied on nearly 800 FTEs. NNSA’s Site Office reported that LANL scientists and engineers conducted work on over 1,200 individual projects for other federal agencies and outside entities in fiscal year 2007. Of these 1,200 projects, only 93 had fiscal year 2007 budgets of $1 million or more, and the budgets for these 93 projects totaled about $270 million, or 58 percent of all projects’ budgets in fiscal year 2007. 
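The work for others figures above are internally consistent, as a quick calculation shows; both dollar amounts are stated in this report, and the "about $270 million" figure is approximate.

```python
# Check of the work for others budget figures reported above
# (millions of dollars, fiscal year 2007, from this report).
total_wfo_budget = 462.4       # all work for others projects
large_projects_budget = 270.0  # the 93 projects budgeted at $1 million or more

large_share = large_projects_budget / total_wfo_budget
print(f"Share held by the 93 largest projects: {large_share:.0%}")  # ~58 percent
```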
Nearly 60 percent of the $270 million available for these 93 projects came from the following two sources: Defense-related intelligence agencies sponsored 26 of the 93 projects. These projects are described by LANL as “International Technology” projects. The Department of Homeland Security sponsored an additional 24 of the 93 projects. The largest of these projects supports the National Infrastructure Simulation and Analysis Center. The National Infrastructure Simulation and Analysis Center applies LANL’s expertise in computer-based modeling and simulation to support national responses to national security events, such as a nuclear or radiological device explosion or an outbreak of infectious disease. Other projects focus on research and development related to defeating chemical and biological weapons, detecting the movement of radioactive materials, and providing threat assessment capabilities. Work for others activities are concentrated in LANL’s Threat Reduction Science and Support and Fundamental Science and Energy programs. In particular, 27 Threat Reduction Science and Support programs received several hundred million dollars in fiscal year 2007. Twenty Fundamental Science and Energy programs received about $162 million to conduct work for others activities in fiscal year 2007. Of this total, 41 percent came from other DOE entities, such as other national laboratories; 19 percent from the Department of Health and Human Services; 13 percent from the National Aeronautics and Space Administration; and 10 percent from universities and institutions. In addition to programs supported by NNSA, DOE, and other federal and nonfederal work sponsors, LANL supports a program of Laboratory-Directed Research and Development (LDRD) that focuses on forefront areas of science and technology that are relevant to NNSA and DOE missions but are not directly funded by specific NNSA or DOE programs. 
LDRD projects are largely self-initiated and are funded indirectly by LANL through contributions made by directly funded programs. To this end, funds allocated for use on LDRD projects are not a budgeted expense, but do contribute to the cost of LANL’s work. DOE guidance requires that the maximum funding level for LDRD projects not exceed 8 percent of a laboratory’s total operating and capital equipment budget. In fiscal year 2007, LANL provided just under $130 million to conduct work on 199 LDRD projects involving approximately 470 FTEs. These projects ranged in scope from research on predictive climate modeling, to nanotechnology in semiconductors, to medical technologies, to plutonium sciences. DOE guidance requires that LDRD projects normally conclude within 36 months of inception. To carry out its programs, LANL’s major and support programs operate in a wide variety of shared facilities, ranging from office buildings, to laboratories, to manufacturing facilities for nuclear weapon pits and high explosives. In this regard, LANL officials identified 633 such facilities, which are protected at different security levels. Of these 633 facilities, 607 are used by LANL’s major programs. Table 2 provides information on the different levels of security at which LANL’s major and support program facilities are protected. Facilities with appropriate levels of security house or store a variety of classified resources, ranging from special nuclear material to classified documents. At least 365 facilities are protected in their entirety at the Limited Area level or above, which is sufficient to allow them to store classified documents or perform classified activities. In contrast, Category I special nuclear material will be found in a facility that has all of the protections provided by Limited, Exclusion, Protected, and Material Access Areas. Table 3 provides information on the different types of classified resources housed or stored in these facilities. 
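LANL's LDRD funding appears to fall well within DOE's 8 percent ceiling. The sketch below is an approximation: the cap applies to the total operating and capital equipment budget, which this report does not state directly, so the calculation instead uses the total budgetary resources implied by the work for others figures elsewhere in this report ($462.4 million, or about 17 percent of the total).

```python
# Rough check of LANL's FY 2007 LDRD funding against DOE's 8 percent cap.
# ASSUMPTION: total budgetary resources are approximated from the work for
# others figures in this report ($462.4 million is about 17 percent of the
# total); the cap's actual base (operating plus capital equipment budget)
# may differ somewhat.
implied_total_budget = 462.4 / 0.17  # roughly $2.7 billion (approximation)
ldrd_funding = 130.0                 # "just under $130 million" in FY 2007

ldrd_share = ldrd_funding / implied_total_budget
print(f"Approximate LDRD share: {ldrd_share:.1%}")  # well under the 8 percent cap
```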
LANL’s Nuclear Weapons Science programs rely on facilities that house classified resources to a much greater extent than do the laboratory’s Threat Reduction Science and Support or Fundamental Science and Energy programs. In contrast, LANL’s Environmental and Safeguards and Security support programs rely on facilities that house classified resources to a minor extent. Specifically, Nuclear Weapons Science programs use 322 facilities that require security protections for classified resources. Thirty-two of these 322 facilities are protected at the highest levels as Exclusion, Protected, and Material Access Areas. Nuclear Weapons Science programs are the primary users—meaning they use more space in a facility than any of the other major or support programs at LANL—of 28 of these 32 facilities, including LANL’s single Category I special nuclear material facility, known as Plutonium Facility 4 at Technical Area-55. Threat Reduction Science and Support programs use 105 facilities that require security protections for classified resources, 31 of which are protected as Exclusion, Protected, and Material Access Areas. Of these 31, Threat Reduction Science and Support is the primary user of 14, including all of LANL’s facilities for Sensitive Compartmented Information. Finally, Fundamental Science and Energy uses 103 facilities that require security protections for classified resources. While 15 of these are protected as Exclusion, Protected, and Material Access Areas, Fundamental Science and Energy is not the primary user of any of these 15 facilities. Overall, LANL’s Nuclear Weapons Science programs are the primary users of facilities storing or housing different types of classified resources to a greater extent than are LANL’s Threat Reduction Science and Support or Fundamental Science and Energy programs. Table 4 provides information on the primary-user facilities that house or store classified resources, as well as vault-type rooms. 
LANL has initiatives under way that are principally aimed at reducing, consolidating, and better protecting classified resources, as well as reducing the physical footprint of the laboratory by closing unneeded facilities. LANL officials believe that these initiatives will reduce the risk of incidents that can result in the loss of control over classified resources. In concert with these actions, LANL is implementing a series of engineered and administrative controls to better protect and control classified resources. According to NNSA security officials, the size and geographic dispersal of LANL’s facilities create challenges for classified operations at the laboratory because classified resources must be shared among programs that use remote facilities. This condition increases the number of instances in which laboratory employees move and hand off classified resources—a situation that has created accountability problems. To address this problem, LANL is reducing classified holdings at the laboratory; consolidating storage of and access to these resources in fewer facilities that are more centrally located and controlled; and where possible, eliminating hard copies and classified removable electronic media by transferring the information to LANL’s classified “red” computer network. Simultaneously, LANL is reducing the overall size of its physical footprint by eliminating facilities that are in poor or failing condition or are excess to mission needs. LANL is undertaking a number of initiatives that security officials believe will improve LANL’s security posture and thereby reduce risk to the laboratory’s operations. These initiatives are being managed in the short-term by a Security Improvements Task Force, a multidisciplinary team chartered in January 2007 to improve physical security operations. 
The Task Force targeted six types of classified resources for immediate consolidation and reduction: (1) accountable classified removable electronic media; (2) classified removable electronic media that do not need to be tracked with an accountability system; (3) classified parts; (4) accountable classified documents; (5) classified documents that do not need to be tracked with an accountability system; and (6) vaults and vault-type rooms. With respect to each type of resource, LANL developed a baseline inventory of resources; identified resources that could be destroyed or, in the case of vaults and vault-type rooms, emptied; and consolidated remaining resources into fewer facilities. As of March 2008, the latest date for which data are available, LANL had significantly reduced and consolidated each of these resources, as described: Accountable classified removable electronic media. LANL reduced the number of pieces of accountable classified removable electronic media actively in use from a high of 87,000 pieces in 2003 to about 4,300 pieces. Classified removable electronic media. LANL instituted a “spring cleaning” project in May 2007 that contributed to the destruction of 610 pieces of classified removable electronic media. According to a senior LANL security official, LANL completed an assessment of its classified removable electronic media holdings in February 2008 and estimates there are approximately 6,500 pieces of nonaccountable classified removable electronic media at the laboratory. Security officials said unneeded media will be destroyed during a second spring cleaning effort in May 2008. Classified parts. LANL has allocated nearly $1.7 million for a project to inventory tens of thousands of classified nuclear weapon parts, destroy those that are no longer useful, and centrally manage those that remain. 
Through a laboratorywide effort, nearly 30,000 classified parts were identified and destroyed between February 2007 and March 2008 by melting the parts, grinding them into shapes that are no longer classified, or blowing them up. According to LANL officials, additional destruction of classified parts is under way.
Accountable classified documents. LANL completed consolidation of all accountable documents into a single storage library in November 2007. While accountable classified documents are created and destroyed on an ongoing basis, as of March 2008, LANL was managing just over 6,000 accountable classified documents.
Classified documents. According to a senior LANL security official, the laboratory completed an assessment of nonaccountable classified documents in February 2008 and estimates there are approximately 9 million classified documents at the laboratory. From April 2007 through February 2008, LANL destroyed over 1.6 million pages of classified documents, and another destruction effort is planned for May 2008.
Vaults and vault-type rooms. LANL has reduced the number of vault-type rooms at the laboratory from 142 to 111 and plans to further reduce the number to 106. One LANL security official said he thought the laboratory could ultimately reduce the number of vault-type rooms to 100. LANL officials told us that all of the remaining vaults and vault-type rooms have been comprehensively inspected and any security deficiencies remedied. During fiscal year 2007, LANL built a prototype “super vault-type room,” a model for future vault-type room operations, that consolidates classified resources in a highly secure, access-controlled environment staffed by security professionals. According to LANL officials, the super vault-type room has allowed LANL to consolidate 65 percent of its accountable classified removable electronic media holdings in one location.
In addition to classified resource storage, the super vault-type room offers classified mailing, scanning, faxing, and printing services, thereby reducing the number of locations, equipment, and people handling classified resources in other parts of the laboratory. In addition, LANL is taking steps to reduce the number of special nuclear material storage facilities that must be protected at the site. In 2000, there were 19 such nuclear facilities at LANL, and by 2006, this number had decreased to 11. LANL plans to further reduce the number of nuclear facilities at the site to five by 2016. The number of facilities that store Category I special nuclear material has already been reduced from nine to one. This remaining Category I facility—LANL’s Plutonium Facility 4 at Technical Area-55 (see fig. 4)—contains the nation’s only plutonium research, development, and manufacturing facility and the laboratory’s only Material Access Area. It is protected with a combination of safeguards that include fences, controlled access points, electronic sensors and surveillance, and armed guards. According to the LANL Director, the laboratory has embarked on a multiyear transformation effort to reduce its facility footprint and better manage its infrastructure investments. Many facilities at LANL were built in the early 1950s and are beginning to show signs of structural or systems failure. Other structures at LANL, such as trailers, are temporary and do not provide quality office or laboratory space. Furthermore, the geographic separation of LANL’s facilities makes effective collaboration difficult, according to LANL program managers. LANL officials told us that reducing the laboratory’s physical footprint will save facility operation costs and reduce deferred maintenance costs, which LANL estimated at $321.5 million in fiscal year 2007. Officials said it will also enhance scientific collaboration and improve safety and security. 
LANL’s goal in fiscal year 2007 was to reduce its existing facility footprint by 400,000 square feet and to reduce it by a further 1.6 million square feet in fiscal year 2008. To determine which facilities would be reduced, several of LANL’s directorates prepared footprint reduction plans targeting facilities that (1) have significant deferred maintenance costs, (2) are in poor or failing condition, (3) are expensive to maintain because they were not designed or built for energy efficiency, and (4) are considered excess to current and anticipated mission needs. In fiscal year 2007, LANL exceeded its footprint reduction goal by reducing existing facility square footage by just over 500,000 square feet; 77 facilities contributed to this total. According to LANL and NNSA officials, the criteria used to determine whether a facility is considered to be reduced vary. Generally, a facility is considered reduced when it is closed, the utilities have been disconnected, and it is no longer occupied by laboratory employees. However, in at least one instance, LANL considered a portion of a facility to be reduced, while another portion remained occupied and building utilities were still connected. A reduced facility may still require environmental remediation and will eventually require disposition, either through demolition, transfer, or sale. LANL is also introducing engineered and administrative controls to improve the physical security of its remaining classified resources and to reduce the security risks associated with their use. According to LANL, implementing these controls can help reduce errors in handling classified resources and, therefore, reduce risk. The super vault-type room is a solution engineered to address the risk of mishandling accountable classified resources by putting responsibility for these classified resources in the hands of security professionals.
A senior LANL security official told us that the laboratory relies on these controls to influence and change laboratory employees’ behavior. For example, a LANL official said increased mandatory and additional random searches of employees leaving vault-type rooms—an engineered control—should help raise employees’ awareness of unauthorized removal of classified documents or media from vault-type rooms. Furthermore, simplifying security orders—an administrative control—should help LANL employees understand and implement their security obligations. Examples of engineered controls, beyond the initiatives to reduce and consolidate the six types of classified resources discussed above, include the following:
improving security perimeters around the laboratory and around specific facilities;
adding to and reinforcing existing vehicle access control points;
expanding a random drug testing program to include all new and existing LANL employees and subcontractors;
increasing random searches performed by protective forces on individuals in secure areas to ensure they are not leaving with classified resources;
expanding the classified “red” computer network to a greater number of facilities, further enabling the reduction of accountable and nonaccountable classified electronic media;
significantly reducing laboratory computers’ ability to create new accountable and nonaccountable classified removable electronic media;
initiating a pilot program to attach radio frequency identification tags to cellular phones and two-way paging devices that set off an alarm when these devices are brought into restricted areas; and
upgrading security alarm systems.
Examples of administrative controls include the following:
issuing manuals to formalize facility operations, maintenance, engineering, training, and safety requirements across LANL;
updating and simplifying physical security orders to ensure requirements are easily understood and can be implemented;
reinforcing the applicability of security requirements to subcontractors through a meeting and a new appendix to subcontractors’ contracts;
enhancing procedures for escorting individuals into vault-type rooms;
eliminating the practice of allowing cleared individuals to hold the door for other cleared individuals entering restricted facilities, known as “piggybacking,” by requiring that all individuals entering restricted facilities swipe their badges;
implementing Human Performance Assessments of security incidents that identify how a lack of engineered or administrative controls, which can be corrected, contributes to human errors; and
reissuing work control policies emphasizing Integrated Safeguards and Security Management, a system intended to provide each LANL employee with a framework for performing work securely and fulfilling individual security responsibilities.
Many of the initiatives LANL is undertaking address security findings identified in external evaluations, particularly those conducted by DOE’s Office of Independent Oversight and NNSA’s Site Office. Some of these initiatives are being implemented in response to DOE’s 2007 Compliance Order, which resulted from the October 2006 security incident. Despite these efforts, however, significant security problems have not been fully addressed. Furthermore, in fiscal year 2007, LANL’s initiative to reduce the physical footprint of its site reduced maintenance costs more than it addressed facility security. Between fiscal years 2000 and 2008, DOE’s Office of Independent Oversight issued four complete assessments of security at LANL.
Over the same period, NNSA’s Los Alamos Site Office conducted seven surveys of laboratory security. These assessments and surveys identified a variety of security problems at LANL, many of which are being addressed through initiatives LANL is currently implementing. Some examples follow:
Inadequate accounting for classified documents. Issues with the adequacy of LANL’s accounting for classified documents were raised by the Site Office in fiscal years 2005 and 2006 and by DOE’s Office of Independent Oversight in fiscal year 2007. These issues related to the inconsistent handling of classified documents by document custodians in LANL’s divisions and to the timeliness of updates to LANL’s classified document and media accountability policies to ensure that they reflected DOE’s policies. Several of LANL’s ongoing security initiatives and engineered and administrative controls are intended to address these concerns by centrally storing and handling accountable classified documents in vaults, vault-type rooms, and the super vault-type room staffed by security professionals and by implementing an automated system to update classification guidance.
Inadequate accounting for classified nuclear weapon parts. Findings about the adequacy of LANL’s accounting for classified parts were raised by the Site Office in fiscal year 2001 and by DOE’s Office of Independent Oversight in fiscal years 2003, 2007, and 2008. These findings related to improper marking of classified parts with their appropriate classification level and storage of classified parts in containers and facilities that are considered nonstandard, or out of compliance with DOE rules governing classified resource storage. These rules include requirements for building alarms, frequency of security guard patrols, and facility vulnerability assessments.
Furthermore, the DOE Inspector General reviewed LANL’s management of classified parts in 2007 and had additional findings about the inventory systems used to maintain accountability over classified parts. While LANL has not resolved issues related to nonstandard storage (see discussion in a subsequent section of this report), LANL officials told us that the destruction of nearly 30,000 classified parts at the laboratory has enabled them to establish a goal of reducing the number of nonstandard storage facilities from 24 to 0 by the end of August 2008. LANL is also developing a new, centrally controlled inventory system for tracking classified parts and has created administrative procedures and guidance for the system’s use.
Inconsistent efforts to reduce classified holdings. A finding about the consistency of LANL’s efforts to reduce classified holdings was raised by the Site Office in fiscal year 2001. The Site Office noted that despite the existence of LANL procedures for regularly reviewing classified inventories to reduce them to the minimum necessary, routine review and reduction of classified inventories was not occurring. While other surveys and assessments did not discuss this finding, LANL’s current initiatives to reduce accountable and nonaccountable documents and classified removable electronic media, which began in 2003, have significantly reduced holdings, and future classified holdings reduction targets are being developed. Through engineered controls, LANL is also attempting to limit the ability and the need to create new classified removable electronic media and to make the information previously stored on removable media available through the laboratory’s classified computer network. Specifically, to prevent the creation of new media, LANL is removing functions on classified computers that would allow media to be created or copied and is deploying new classified computing systems that do not contain the capability to create removable electronic media.
In addition, LANL has undertaken an effort to upload the information stored on classified removable electronic media to the laboratory’s classified computer network before the media are either destroyed or permanently archived. LANL officials said this will reduce the risk that media could be mishandled, thus improving the laboratory’s physical security. However, LANL officials also acknowledged that transferring information from classified media to a classified network represents a shift from physical security risk to cyber security risk. A senior LANL official told us this risk is minimized by ensuring that LANL’s classified network is appropriately protected and access to the network is properly controlled.
Insufficient security at vault-type rooms. Findings about the sufficiency of security at LANL’s vault-type rooms were raised by the Site Office in fiscal year 2005 and by DOE’s Office of Independent Oversight in fiscal years 2007 and 2008. These findings concerned the adequacy of security patrols, sensor detection, and unauthorized access. LANL has addressed concerns about vault-type room security through comprehensive physical assessments of all vault-type rooms, and a laboratory security official told us that all identified deficiencies have been remedied. Furthermore, the official told us that in the future LANL intends to recertify vault-type rooms every 2 years, instead of every 3 years. Finally, LANL has reduced the number of vault-type rooms in operation at the laboratory—facilitating more frequent security patrols—and has increased mandatory and random searches of individuals exiting vault-type rooms. LANL is also implementing security initiatives in response to the October 2006 security incident.
Specifically, DOE’s July 2007 Compliance Order, which resulted from this incident, required LANL to submit an integrated corrective action plan to address critical security issues at the laboratory, including many of those identified by the Site Office and Office of Independent Oversight since 1999. According to LANL’s analysis of past information and cyber security findings, the root causes of 76 percent of these findings were related to inadequate policies, procedures, or management controls. Correspondingly, many of the administrative controls LANL is now implementing and that it included in its integrated corrective action plan address these policy, procedural, and management problems, including
reissuing policies and guidance for improving implementation of Integrated Safeguards and Security Management, which LANL officials told us will help individual employees ensure they execute their security responsibilities as part of their regular work;
providing Human Performance Assessments as a component of security incident reports to help managers identify challenges in their work environments that can be improved to reduce the likelihood and severity of security errors made by employees;
revising policies for escorting visitors into vault-type rooms to ensure visitors’ access to classified resources is properly limited; and
improving communication of security requirements to subcontractors by adding an additional exhibit to their contracts.
While many of the initiatives and engineered and administrative controls LANL is implementing address past security concerns, some significant security problems identified by DOE’s Office of Independent Oversight and NNSA’s Site Office have not been fully addressed.
Specifically, LANL’s storage of classified parts in unapproved storage containers and its process for ensuring that actions taken to correct security deficiencies are completed have been cited repeatedly in past external evaluations, but LANL has not implemented complete security solutions in these areas. In addition, LANL’s actions to address other long-standing security concerns, such as the laboratory’s process for conducting self-assessments of its security performance and its system for accounting for special nuclear material, have been planned but have not, as yet, been fully implemented. More specific examples include the following:
Classified nuclear weapon parts storage. LANL uses General Services Administration-approved security containers for standard storage of classified resources. Classified resources that cannot be readily stored in approved containers—for example, because of their size—are stored in vaults, vault-type rooms, or nonstandard storage facilities. According to LANL officials, there are 24 nonstandard storage areas at the laboratory. Requests for nonstandard storage are made through a process approved by NNSA’s Site Office. LANL management reviews all nonstandard storage requests, and requests are approved by LANL’s Physical Security group. The approval process requires LANL to conduct risk assessments for these nonstandard storage areas. While the Site Office has never independently raised concerns about the adequacy of nonstandard storage areas in its surveys, the Office of Independent Oversight has consistently called attention to this issue. Specifically, in fiscal years 2003, 2007, and 2008, the Office of Independent Oversight noted problems with the safeguards LANL said were in place to protect nonstandard storage areas and questioned the risk assessment methodology LANL has used to determine appropriate protections.
In 2007, the Chief of DOE’s Office of Health, Safety and Security, which oversees independent assessments, testified that LANL is overly dependent on nonstandard storage for the protection of many of its classified nuclear weapon parts and that the overall impact of deficiencies in nonstandard storage arrangements on the protection of these parts is substantial. LANL officials told us their goal is to eliminate all 24 nonstandard storage areas at the laboratory by August 2008 and, in the interim, to continue to apply for waivers to rules governing standardized storage through the Site Office’s approval process. However, LANL’s plans for eliminating specific nonstandard storage areas show the elimination of one area planned for the second quarter of fiscal year 2009—as much as seven months later than LANL’s August 2008 goal—and four others that will remain nonstandard storage areas. Furthermore, a recent status report on nonstandard storage area elimination activities showed that nearly all activities were at risk of schedule delay.
Process for ensuring that corrective actions are completed. When evaluations result in findings of security deficiencies, LANL must prepare a corrective action plan that charts a path forward for resolving the finding. To resolve a deficiency and complete its corrective action plan, DOE requires LANL to conduct a root-cause analysis, risk assessment, and cost-benefit analysis to ensure that the corrective action implemented truly resolves the deficiency identified. In fiscal year 2007, the Office of Independent Oversight questioned the completeness of corrective action plans—some of which did not include the required risk assessments—leading to concerns about whether actions taken to address security deficiencies would in fact prevent recurrence. This concern is similar to our 2003 finding that corrective action plans were often inconsistent with DOE requirements.
The fiscal year 2008 Office of Independent Oversight assessment noted that weaknesses in corrective action plans’ causal analyses remain. Specifically, the Office of Independent Oversight found that some corrective action plans’ root-cause analyses were insufficient to properly identify security deficiencies. According to LANL officials, in fiscal year 2008, LANL revised its self-assessment program to ensure that root-cause analyses are included in all corrective action plans and that these plans are sufficient. In fiscal year 2007, the Site Office and the Office of Independent Oversight raised concerns about the timeliness of LANL’s submission of corrective action plans and the length of time it takes to close corrective action plans by resolving findings. The fiscal year 2007 Performance Evaluation Plan that NNSA developed to establish priorities for the laboratory provided LANS with financial incentives totaling over $1 million to complete LANL’s corrective actions on schedule. While the Site Office noted significant improvement in the timeliness and closure of corrective action plans in its fiscal year 2007 survey, LANL did not meet the fiscal year 2007 performance milestone. NNSA’s fiscal year 2008 Performance Evaluation Plan provides LANS with a $100,000 financial incentive to improve the timeliness of corrective action plan development and up to an additional $357,000 to close corrective action plans quickly and on time.
Inadequate self-assessment. Concerns about the adequacy of LANL’s assessments of its own security performance were raised by the Site Office in fiscal years 2003, 2005, 2006, and 2007 and by DOE’s Office of Independent Oversight in fiscal year 2008. These concerns related to the comprehensiveness of LANL’s self-assessments, the extent to which self-assessments included discussion of all internal findings, and the extent to which these findings were analyzed and addressed through corrective actions.
NNSA provided LANS with a nearly $600,000 financial incentive under the fiscal year 2007 Performance Evaluation Plan to improve LANL’s self-assessment program. According to NNSA’s evaluation of LANL’s fiscal year 2007 performance, LANL did not meet NNSA’s goal but did make progress toward it by significantly improving self-assessment. The Office of Independent Oversight’s fiscal year 2008 assessment also noted improvements but recommended further areas for attention. These recommendations included ensuring that self-assessments address all aspects of each assessment topic, such as classified information protection and physical security. LANL officials said training on conducting self-assessments is currently being developed.
Control and accountability system for special nuclear material. DOE requires that LANL maintain a system for tracking special nuclear material inventories, documenting nuclear material transactions, issuing periodic reports, and detecting potential material losses. According to LANL and Site Office security officials, the system LANL uses, known as the Material Accountability and Safeguards System (MASS), is over 20 years old and was developed with a now outdated computer language. While LANL has not reported any incidents involving the loss or diversion of special nuclear material in recent years, the Site Office and Office of Independent Oversight raised concerns in fiscal years 2002, 2003, 2005, 2006, and 2007 related to LANL’s system. Such concerns included the absence of controls in MASS to detect, in time to prevent the transfer, internal transfers of nuclear materials that could result in safeguards category limits being exceeded. According to a senior LANL official, a project to upgrade the system was approved to proceed in January 2008 and is scheduled to be completed by February 2010 at a cost of $3 million.
LANL’s initiative to reduce the physical footprint of its facilities focuses on eliminating facilities that are in poor and failing condition, thus reducing the laboratory’s deferred maintenance burden, which, according to a LANL estimate, totaled over $320 million in fiscal year 2007. Additionally, the initiative focuses on facilities that have no enduring mission need, thus avoiding future operations costs. While the footprint reduction plans put together by LANL’s Weapons Physics and Weapons Engineering directorates both state that security improvements would result from facility reduction, LANL officials responsible for setting priorities for reducing facilities told us that the facilities’ security problems were not seriously considered when planning for footprint reduction. In that regard, we found that of the 77 facilities LANL counted toward meeting its footprint reduction goal of 400,000 square feet in fiscal year 2007, only 2 facilities contained any classified resources. Specifically, these two facilities were (1) a large, Limited Area administrative facility that contained six vault-type rooms, stored classified parts, and provided access to LANL’s classified network; and (2) a Limited Area facility used for high explosives work that provided access to LANL’s classified network. Closing vault-type rooms and eliminating classified network access points has the potential to improve security at LANL by reducing or consolidating the number of classified resources that require security protection. In the case of the administrative building described above, the facility was replaced by a newly constructed administrative building that has 11 vault-type rooms—5 more than the original administrative building contained. However, in commenting on our report, LANL officials said that the new administrative building incorporates more modern safety and security standards than the original administrative building.
As a result, the security benefits derived from LANL’s fiscal year 2007 footprint reduction efforts are unclear. In commenting on our report, LANL officials noted that Security and Safeguards Requirement Integration Teams participate in footprint reduction projects to ensure that facilities—and the classified information they house or store—remain secure during the closure process. While subsequent documentation provided by the leader of LANL’s physical security organization does show that Security and Safeguards Requirement Integration Teams assist with facility reduction efforts in this manner, it does not show that these teams evaluate facility security weaknesses as criteria for identifying which facilities at LANL should be closed. DOE, NNSA, and even LANL officials have found that LANL has consistently failed to sustain past security initiatives. For example, in DOE’s 2007 Compliance Order, the Secretary of Energy wrote that although some corrective steps were taken by the previous LANL contractor in response to security incidents, the October 2006 incident demonstrated that problems continued. Similarly, NNSA’s Office of Defense Nuclear Security noted in 2007 that after each security incident at LANL, the laboratory has responded by changing policies and procedures and investing in new equipment and systems. The result, according to the Office of Defense Nuclear Security, had been a steady improvement in security through mitigation of immediate problems; however, the inability to halt what NNSA has characterized as a string of incidents involving the failure to account for classified information demonstrated that LANL had not identified and addressed the root causes of security incidents.
In its own analysis of the October 2006 security incident, LANL determined that the incident’s root cause was inconsistent and ineffective implementation of Integrated Safeguards and Security Management principles in its classified work, despite the fact that a DOE policy governing implementation of Integrated Safeguards and Security Management throughout the DOE complex had been in place since at least 2001. In acknowledging the problem of sustaining security improvements, LANL officials described three management approaches they intend to use to ensure that security improvements currently being implemented are sustained over the long term: (1) DOE’s July 2007 Compliance Order, (2) LANL’s Contractor Assurance System, and (3) NNSA’s annual performance evaluation plans. However, each management approach cited by LANL officials either contains weaknesses that will affect LANL’s ability to fully ensure security initiatives are sustained or is in an early stage of development. Furthermore, our January 2007 findings regarding the NNSA Site Office’s capacity to oversee security at LANL have not yet been addressed. LANL officials told us that completing the efforts required by DOE’s July 2007 Compliance Order would ensure that security improvements are sustained. However, the Compliance Order is not designed to provide LANL with a management tool for sustaining long-term security initiatives or for future security improvement. Rather, it serves as a mechanism for DOE to enforce financial penalties against LANS should LANL fail to implement the required actions that address past security problems. Specifically, the actions required by the Compliance Order must be completed by December 2008. If they are not completed, LANS is subject to civil penalties of up to $100,000 per violation per day. In September 2007, LANL submitted an integrated corrective action plan to DOE in partial fulfillment of Compliance Order requirements.
This plan outlined the 27 actions LANL intends to take to address seven critical security issues identified as having contributed to the October 2006 security incident and to meet the requirements of the Compliance Order. Of these seven critical security issues, five pertain to the physical security of classified information and resources. These five issues include the following:
LANL has not consistently or effectively implemented the principles and functions of Integrated Safeguards and Security Management in the management of classified work;
LANL’s classified information security training is not fully effective;
LANL has not provided effective leadership and management in protecting classified information;
LANL’s assurance system has not effectively resolved classified information protection issues; and
LANL has not, in some cases, effectively sustained corrective actions.
The majority of the actions LANL outlined in its plan to address these issues are discrete, rather than representing long-term efforts aimed at improving LANL’s overall security performance. They include, for example,
documenting that managers have met with employees to communicate and reinforce expectations with regard to integrating the principles of Integrated Safeguards and Security Management into daily work activities;
implementing personnel actions with respect to the October 2006 security incident, such as placing formal reprimands in employees’ personnel files and putting employees on unpaid leave; and
revising the laboratory’s policy on escorting visitors into vault-type rooms.
While actions of this type should contribute to security improvements in the short term, discrete actions such as these do not ensure that security initiatives will be sustained over time.
Moreover, while the Compliance Order provides a mechanism to assess financial penalties if LANL fails to implement the actions included in its integrated corrective action plan, the mechanism will no longer be available once actions are concluded in December 2008. LANL officials told us they expect to use the laboratory’s new Contractor Assurance System to ensure that security improvements are sustained over time once actions under the Compliance Order are complete in December 2008. However, we found that the extent to which LANL will be able to rely on the Contractor Assurance System to ensure long-term sustainability of security improvements after December 2008 is unclear for two reasons. First, LANL officials told us that the system will not be fully developed or implemented by the time LANL completes its Compliance Order efforts in December 2008. Second, an internal assessment of the Contractor Assurance System found that (1) there is a lack of evidence that the system is being effectively deployed across the laboratory and (2) the measures included in the system may not be meaningful. LANL is designing the Contractor Assurance System to measure and track performance from the top down. Top-level measures, such as meeting program milestones set by NNSA or on-time delivery of products, are in place. Lower-level measures, such as measures of the work processes used to meet milestones and deliverables, are still in development. LANL officials responsible for designing the Contractor Assurance System told us that these lower-level measures are critical to the success of the system because they will provide the data that indicate where work processes are breaking down before milestones or deliverables are delayed. Officials also said that trend analysis from data associated with lower-level measures would indicate areas where security concerns are developing. 
During fiscal year 2008, LANL officials said they plan to focus on developing lower-level measures, but they will not complete these measures by December 2008. A senior official in NNSA's Site Office told us it could be another 3 to 4 years before the Contractor Assurance System is fully implemented. In its first internal assessment of the Contractor Assurance System, completed in September 2007, LANL found that while the system was operational and met the requirements of the contract between NNSA and LANS, it contained significant weaknesses. For example, while upper-level management uses the system, there are gaps in its use across LANL's technical divisions and facilities. According to the assessment, these gaps could make the system ineffective. In addition, a LANL official told us that while managers are required to attend training on using the system, many do not yet recognize its usefulness. Moreover, the assessment found that because lower-level process measures have not yet been implemented, it may be difficult to use the system for its stated purpose—to improve management and performance. For example, the assessment found that the Contractor Assurance System cannot yet measure key management and performance indicators, such as budget performance, fiscal accountability, and customer satisfaction or dissatisfaction with LANL products and services. In this regard, a LANL official told us that the Contractor Assurance System is not yet mature enough for laboratory officials to understand the best ways to use it and that LANL managers are still identifying which processes they need to measure in order to gather relevant performance data. In commenting on our report, LANL officials agreed with our assessment of the Contractor Assurance System and noted that efforts to improve its maturity are ongoing.
LANL officials told us the laboratory also plans to realize sustained security improvements by meeting the security-related performance incentives in the annual performance evaluation plans NNSA uses to measure performance and determine an award fee. The fiscal year 2007 and fiscal year 2008 performance evaluation plans contain both objective and subjective measures of security performance that are tied to financial incentives. Objective measures of security performance use specific and discrete criteria that are not judgmental, such as achieving a particular score on a security evaluation, while subjective measures of security performance use broad criteria that are judgmental, such as effectiveness of security planning. According to NNSA’s Site Office, the two sets of measures complement each other and allow NNSA to withhold incentive fees when its expectations for effective management and leadership are not met. Site Office officials told us it is possible LANL could achieve success in all of the objective security measures but fail to earn award fees on the basis of its performance assessed with subjective measures. We found that the objective measures included in the performance evaluation plans reward LANL for complying with existing DOE security requirements but do not sufficiently reward LANL for improving its security performance. Of the $51.3 million potentially available for LANS’s total performance-based incentive fee in fiscal year 2008, only $1.43 million is associated with objective measures of security performance. Of this total, $1.4 million is an incentive for compliance with DOE security requirements, and only $30,000 is allocated to forward-looking and laboratorywide security improvement. 
According to a senior NNSA security official, compliance with DOE requirements does not assure that LANL's security program is functioning effectively, and actions to achieve compliance may not be valuable unless the actions also address management or operational needs. Specifically, in fiscal year 2008, we found the following objective provisions:

$800,000 to achieve the milestones LANL sets in an annual security operating plan, which aligns LANL's security activities with its budget. The fiscal year 2008 annual security operating plan provides a roadmap for LANL security program compliance with DOE requirements and includes milestones such as submitting the Site Safeguards and Security Plan, conducting security training, publishing security policy, completing quarterly equipment maintenance requirements, and conducting inventories of special nuclear material.

$200,000 to achieve an overall satisfactory rating on the Site Office's annual security survey.

$400,000 to achieve 90 percent of the milestones associated with the ongoing Phase 2 Nuclear Materials Safeguards and Security Upgrade construction project.

$30,000 to develop a forward-looking Safeguards and Security Modernization Plan, which, according to a senior Site Office official, is in progress. This official said the Site Office expects LANL to deliver a plan that can begin to be implemented in fiscal year 2009, if the budget allows. However, the official also said the Site Office has not provided any criteria or guidance to LANL about what the plan should include.

The objective measures for security performance established under the fiscal year 2007 Performance Evaluation Plan were similar to those established in fiscal year 2008.
Specifically, for fiscal year 2007, we found the following incentive provisions:

about $1.2 million to achieve the milestones in the fiscal year 2007 annual security operating plan, which were as compliance-oriented as those in the fiscal year 2008 annual security operating plan;

about $670,000 to ensure that inventories of special nuclear material accurately detected any gain or loss of material, excluding legacy material;

about $560,000 if DOE validated that LANL's Safeguards and Security program was rated "effective" on five of seven ratings contained in the Office of Independent Oversight assessment and was rated overall "satisfactory" in the Site Office survey; and

about $270,000 to achieve all of the milestones included in the fiscal year 2007 annual operating plan for cyber security.

Financial incentives associated with objective measures of security performance totaled nearly $2.7 million in fiscal year 2007. The entire $2.7 million encouraged LANL to comply with existing DOE requirements for effective security operations. LANL earned $2.4 million of the $2.7 million potentially available, despite the occurrence of the October 2006 security incident. NNSA increased the potential performance award fee associated with subjective measures for laboratory performance in fiscal year 2007 as a result of the October 2006 security incident and also included subjective measures in the fiscal year 2008 Performance Evaluation Plan. These measures evaluate LANS's leadership in integrating programs, including security, across the laboratory and achieving exemplary overall laboratory performance. We found that these measures are neither compliance-based nor forward-looking, but rather focus on overall quality of performance. In fiscal year 2007, LANL received its lowest performance rating in this category, earning only 35 percent of the over $10 million potentially available.
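As a simple arithmetic check, the objective-measure incentive amounts itemized for the two fiscal years sum to the totals the report cites ($1.43 million for fiscal year 2008 and roughly $2.7 million for fiscal year 2007). The sketch below tallies the reported figures; the dictionary labels are shorthand for the provisions described above, not official program names.

```python
# Tally of the objective security-performance incentives described in the
# report (amounts in dollars; FY2007 figures are approximate, as reported).
fy2008_objective = {
    "annual security operating plan milestones": 800_000,
    "satisfactory rating on annual Site Office survey": 200_000,
    "Phase 2 upgrade construction milestones": 400_000,
    "Safeguards and Security Modernization Plan": 30_000,
}

fy2007_objective = {
    "annual security operating plan milestones": 1_200_000,
    "special nuclear material inventory accuracy": 670_000,
    "independent oversight and survey ratings": 560_000,
    "cyber security operating plan milestones": 270_000,
}

print(f"FY2008 objective total: ${sum(fy2008_objective.values()):,}")
# FY2008 objective total: $1,430,000
print(f"FY2007 objective total: ${sum(fy2007_objective.values()):,}")
# FY2007 objective total: $2,700,000
```

Of the fiscal year 2008 total, only the $30,000 Modernization Plan item is forward-looking; the remainder rewards compliance with existing requirements.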
LANL’s low performance rating directly reflected the occurrence of the October 2006 security incident. In fiscal year 2008, the award fee potentially available for successful achievement of subjective measures is $10.3 million, approximately $125,000 more than in fiscal year 2007. One of the 20 criteria NNSA will consider in determining the fiscal year 2008 award fee in this area is specific to overall performance, timeliness, and effectiveness of security commitments. A senior Site Office official told us that security performance will also be considered when NNSA evaluates overall laboratory leadership and management. However, according to Site Office officials, NNSA has not yet determined how it will weigh security against other criteria, such as Weapons or Threat Reduction program performance, when determining how much of the award fee LANS will earn for achieving subjective performance measures. While it is important for LANL to continue to improve the performance of its security programs through the use of the management tools previously discussed, the Site Office must still directly oversee LANL’s security program. Specifically, the Site Office is required to conduct a comprehensive annual survey of LANL’s Safeguards and Security performance to assure DOE that the site is appropriately protected. These surveys must be validated through, among other things, document reviews, performance testing, direct observation, and interviews. To conduct these surveys, as well as routine oversight, the Site Office must be appropriately staffed with trained professionals. In our January 2007 report on the effectiveness of NNSA’s management of its security programs, we found that NNSA’s site offices—including the Los Alamos Site Office—suffered from shortages of security personnel, lacked adequate training resources and opportunities for site office security staff, and lacked data to determine the overall effectiveness of its Safeguards and Security program. 
We reported that these factors contributed to weaknesses in NNSA's oversight of security at its laboratories and production facilities. During the course of this review, senior Los Alamos Site Office officials confirmed that these problems persist. For example, they said NNSA has not developed a strategy for determining long-term staffing needs at the Site Office. As of October 2007, the Site Office employed 13 security staff—enough for one person to oversee each of the topical areas the Site Office had to evaluate. This staffing level, officials said, was sufficient to cover only 15 percent of LANL's facilities. More recently, a senior security official at the Site Office said security staffing levels have decreased since October 2007. Furthermore, while NNSA had identified the need to train and certify Site Office security personnel in nuclear material control and accountability, vulnerability assessment, and personnel security, no specific funding for this training has been made available, according to Site Office officials. According to the Los Alamos Site Office's Site Manager, the Site Office must employ expertise sufficient to determine, through effective oversight activities, whether LANL is implementing the policies and plans that it puts forward. Accomplishing the mission of conducting world-class scientific work at Los Alamos National Laboratory requires the laboratory to maintain a security program that effectively addresses current security risks, anticipates future security risks, and ensures that initiatives to address both current and future risks are sustained over the long term. While LANL has focused its attention on fixing current security risks in reaction to recent incidents and has implemented initiatives that address a number of previously identified security concerns, LANL has not developed the long-term strategic framework necessary to ensure that these fixes are sustained over time.
In addition, some important security problems identified in external evaluations have not been fully addressed. Moreover, our review pointed out the potential for cyber security risks to increase as a result of actions to improve physical security. Consequently, while LANL security officials have indicated their desire to prevent future security incidents, we believe that only a long-term, integrated strategy can help ensure that they will succeed. Continuously implementing security improvement initiatives over the long term and proactively addressing new security risks also requires an effective process for assessing contractor performance on security activities. We believe the relative immaturity of and weaknesses in the management approaches LANL and NNSA intend to use to ensure that security improvements are sustained may limit their effectiveness and result in a failure to sustain security improvement initiatives. Specifically, DOE's Compliance Order requires LANL to take immediate actions to correct security deficiencies, but the Compliance Order does not serve as a tool for ensuring these actions are sustained. In addition, we have doubts that LANL's Contractor Assurance System can sustain security improvement initiatives until it is sufficiently mature, which may take several years. Therefore, we believe performance evaluation plans hold the most promise for ensuring that security initiatives are sustained over the long term. When the LANL management and operating contract was competed in 2005, laboratory security was a key consideration. NNSA stated that it intended to put a contract in place, along with an annual performance evaluation plan, that would communicate its priorities and provide incentives to accomplish those priorities.
However, despite NNSA's persistent statements about the importance of security, we believe that the performance evaluation plans that NNSA has issued under the new LANS contract do not provide meaningful financial incentives for strategic security improvements or communicate to LANL that security is a top federal priority. In our view, rather than rewarding LANL principally for complying with current DOE security requirements, financial incentives in performance evaluation plans should focus to a greater extent on the long-term improvement of security program effectiveness. We believe that LANL needs to develop a strategic plan for laboratory security that is comprehensive, contains solutions to address all previously identified security findings, takes an integrated view of physical and cyber security, provides opportunities for periodic updates to ensure additional security risks are identified and addressed, and is tied to meaningful performance incentive fees. Finally, as LANL plans for further reductions in its facility footprint, it has an opportunity to assess facilities' security weaknesses, as well as their deferred maintenance burdens and their anticipated contributions to future program missions, when it first determines which facilities should be reduced. In our view, including an assessment of facilities' security weaknesses in this initial decision-making process would enhance the security benefits derived from the effort to reduce the footprint.
To improve security at Los Alamos National Laboratory, we recommend that the Secretary of Energy and the Administrator of NNSA require LANL to develop a comprehensive strategic plan for laboratory security that (1) addresses all previously identified security weaknesses, (2) contains specific and objective measures for developing and implementing solutions that address previously identified security weaknesses and against which performance can be evaluated, (3) takes an integrated view of physical and cyber security, (4) focuses on improving security program effectiveness, and (5) provides for periodic review and assessment of the strategic plan to ensure LANL identifies any additional security risks and addresses them. To ensure sustained improvement of LANL’s security program, we recommend that the Administrator of NNSA provide meaningful financial incentives in future performance evaluation plans for implementation of this comprehensive strategic plan for laboratory security. To enhance security initiatives already under way at LANL, we recommend that NNSA require that future laboratory plans for footprint reduction include specific criteria for evaluating facilities’ security risks when making initial selections of facilities for footprint reduction. We provided NNSA with a copy of this report for review and comment. NNSA did not specifically comment on our recommendations. However, NNSA stated that while there is still much to be accomplished, NNSA believes that progress has been made in addressing reductions in classified parts, classified documents, vaults, and vault-type rooms, as well as with the implementation of engineered controls. While we acknowledge LANL’s progress in our report, NNSA noted that several security problems at LANL addressed in the report—specifically, nonstandard storage of classified parts and the maturation of contractor assurance systems—are issues for the broader nuclear weapons complex as well. 
Overall, we continue to believe that the key issue is that NNSA and LANL cannot ensure that initiatives such as these will be sustained, or that changing security vulnerabilities will be identified and proactively addressed, without implementing our recommendations for a long-term strategic framework for security that effectively assesses contractor performance. NNSA's comments on our draft report are included in appendix V. NNSA also provided technical comments from LANL, which we have incorporated into this report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of Energy, and the Administrator of NNSA. We will also make copies available to others upon request. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3481 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.

To identify Los Alamos National Laboratory's (LANL) major programs, we collected Department of Energy (DOE) and LANL budget, program, and activities documentation. This documentation included data on work LANL conducts for other federal agencies and nonfederal organizations, as well as projects LANL undertakes at its own direction. We used this documentation to identify major program categories and to group LANL's activities within them.
Specifically, we identified three major program categories—Nuclear Weapons Science, Threat Reduction Science and Support, and Fundamental Science and Energy; and two key support programs—Environmental Programs and Safeguards and Security. LANL officials reviewed and validated our results, and based on feedback they provided, we made adjustments as needed. To determine the extent to which LANL's major and support programs rely on classified resources to meet their objectives, we collected information on classified resource use on a facility basis. Although we initially requested data on each program's use of classified resources, these data were not available because LANL maintains the data on a facility basis. LANL's facilities are shared in a matrix management approach by the laboratory's 64 technical divisions to execute programs. To enhance the accuracy and completeness of the facility-level information we collected, we developed a data collection instrument for LANL officials to complete that included specific data fields and definitions. To select the facilities for inclusion in this data collection instrument, we used LANL's real property catalogue, which lists each of the 1,283 facilities on the laboratory's campus. From this list, we excluded facilities containing only utility services, such as steam plants, and facilities with full-time occupancies of fewer than 10 people, unless the facility, based on its use for experiments, could potentially house or store classified resources. We also allowed like facilities, such as individual bunkers used for high explosives testing, to be grouped together as one facility. Using these definitions, LANL officials determined that 633 facilities should be included in our review. We compared the facilities LANL had selected with the original real property list and agreed that the 633 facilities selected by LANL represented the appropriate facilities for our analysis.
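The facility-selection rules just described can be expressed as a simple filter. The sketch below is illustrative only; the record fields (`utility_only`, `full_time_occupancy`, `may_house_classified`) are hypothetical stand-ins for LANL's real property data, not GAO's actual analysis code.

```python
# Illustrative filter mirroring the selection rules described above.
# Field names are hypothetical; the rules are those stated in the report.
def include_facility(f):
    """Return True if a facility record should be included in the review."""
    if f["utility_only"]:  # exclude utility-only facilities (steam plants, etc.)
        return False
    if f["full_time_occupancy"] < 10 and not f["may_house_classified"]:
        # exclude small facilities unless, based on experimental use,
        # they could potentially house or store classified resources
        return False
    return True

facilities = [
    {"name": "steam plant", "utility_only": True,
     "full_time_occupancy": 2, "may_house_classified": False},
    {"name": "explosives bunker", "utility_only": False,
     "full_time_occupancy": 0, "may_house_classified": True},
    {"name": "office building", "utility_only": False,
     "full_time_occupancy": 120, "may_house_classified": False},
]

selected = [f["name"] for f in facilities if include_facility(f)]
print(selected)  # ['explosives bunker', 'office building']
```

Applying such a filter to the 1,283-facility catalogue, with like facilities grouped, would yield the 633-facility review set described above.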
Using the data collection instrument we had provided, LANL officials entered information on (1) the security protection level of each of the 633 facilities, as described by DOE Manual 470.4-2, Physical Protection, which defines different levels of security depending on the type and amount of classified resources these facilities store or house; (2) the types of classified resources housed or stored in each facility; (3) where practical, how many of each type of classified resource each facility stores or houses; (4) which of the laboratory's major and support programs rely on the classified resources in each facility; and (5) how much space each of the laboratory's major and support programs use in each facility as a percentage of that facility's gross square footage. We analyzed the data by aggregating facilities by program and apportioned classified resource usage according to three categories: (1) a program is the exclusive user of all of the space in a facility storing or housing classified resources, (2) a program is the primary user of space in a facility storing or housing classified resources because it uses more space than any of the other major or support programs at LANL, and (3) a program uses some space in a facility storing or housing classified resources. Because our analysis focused on facilities used for one of LANL's three major programs, we excluded facilities used only by laboratory support programs, resulting in a final analysis of 607 of the original 633 facilities. To evaluate the completeness and accuracy of the information LANL officials provided, we compared the data with other documentary and testimonial evidence we collected during the course of our review to ensure that the data were consistent.
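The three usage categories described above can be sketched as a classification over each facility's space shares. The function and input format below are hypothetical illustrations of the logic, not GAO's actual analysis code; the share values are fractions of a facility's gross square footage.

```python
# Sketch of the three usage categories described above. Inputs are
# hypothetical: each facility maps program name -> share of gross
# square footage used by that program.
def usage_category(program, shares):
    """Classify one program's use of a facility housing classified resources."""
    share = shares.get(program, 0.0)
    if share == 0.0:
        return None  # program does not use this facility
    others = [v for p, v in shares.items() if p != program]
    if not others or all(v == 0.0 for v in others):
        return "exclusive"  # sole user of all of the space
    if share > max(others):
        return "primary"  # uses more space than any other program
    return "some"  # uses some space, but not the most

shares = {"Nuclear Weapons Science": 0.6,
          "Fundamental Science and Energy": 0.3}
print(usage_category("Nuclear Weapons Science", shares))              # primary
print(usage_category("Fundamental Science and Energy", shares))       # some
print(usage_category("Threat Reduction Science and Support", shares)) # None
```

Aggregating these per-facility classifications by program would produce counts like those cited in the summary (for example, Nuclear Weapons Science as the primary user of 14 special nuclear material facilities).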
For example, we had received briefings about the reduction of vault-type rooms at LANL, and we ensured that the total number of vault-type rooms LANL program managers had discussed with us during these briefings matched the total number of vault-type rooms identified in the facility data LANL provided. In addition, we compared the data provided on the security levels of specific facilities with our physical observations of security safeguards at these same facilities during site visits to determine whether the data LANL officials provided were consistent with our experiences at those facilities. We also conducted logic and electronic tests of the data and followed up with LANL officials to resolve discrepancies. We determined that these data were sufficiently reliable for our purposes. To identify the initiatives LANL is taking to consolidate its classified resources and reduce the scope of its physical footprint, we collected and reviewed data on LANL’s plans for consolidating classified resources and interviewed key LANL, National Nuclear Security Administration (NNSA), and DOE officials. We also toured LANL facilities that house and store classified resources, such as vault-type rooms and the super vault-type room, and visited a facility where classified nuclear weapon parts are being destroyed. In addition, we identified the buildings that LANL was proposing to close as part of its footprint reduction effort and, using the information provided by LANL officials in response to our data collection instrument, determined whether closing these buildings could improve LANL’s security posture by eliminating or consolidating the classified resources that may have been stored or housed in them as a result of footprint reduction. Finally, we visited sites currently undergoing closure and sites proposed for consolidation and reduction. 
To determine if LANL's security initiatives address previously identified security concerns, we reviewed security evaluations conducted by DOE's Office of Independent Oversight and NNSA's Site Office from fiscal years 2000 to 2008 and identified the security concerns raised by these evaluations. We then compared LANL's current initiatives with the results of our review of the security evaluations to determine if all of the security concerns were being addressed. We discussed the results of this analysis with DOE, NNSA headquarters, NNSA Site Office, and LANL contractor officials. In addition, we reviewed relevant DOE Office of Inspector General reports. To determine whether the management approach LANL is implementing under the new LANS contract is sufficient to ensure that LANL's security improvement initiatives are fully implemented and sustainable, we asked LANL and NNSA to identify how they intended to sustain security improvements and ensure the effectiveness of LANL's security. We reviewed the management approaches they identified, specifically (1) LANL's actions in response to DOE's July 2007 Compliance Order resulting from the October 2006 security incident, (2) the security-related aspects of the new Contractor Assurance System LANL is implementing, and (3) the incentives being used to improve security at LANL under the 2007 and 2008 Performance Evaluation Plans. As part of this review, we determined the extent to which each of these management approaches could sustain security improvement initiatives over the long term and the extent to which these management approaches focused on either compliance with DOE security requirements or improved effectiveness of LANL's security program. We discussed these management approaches with LANL, NNSA headquarters, and NNSA Site Office officials. We conducted this performance audit from March 2007 to June 2008 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

LANL conducted work on 41 Nuclear Weapons Science programs in fiscal year 2007, all of which were supported by NNSA. When program objectives are shared, they have been combined in the table below. These program objectives include the following:

Supports the operation and maintenance of facilities and infrastructure that support the accomplishment of Nuclear Weapons Science programmatic missions;

Re-establishes an immediate capability to manufacture pits in support of the nuclear weapons stockpile, plans for long-term pit manufacturing capability, and manufactures specific quantities of W88 pits;

Supports the construction of new facilities and significant upgrades to existing facilities;

Provides the advanced computing infrastructure—hardware, software, and code—to simulate nuclear weapon performance; and

Conducts research, development, and production work that is applicable to multiple nuclear weapon systems, as opposed to a specific weapons system (for example, basic research on critical factors of nuclear weapon operations).

LANL conducted work on 12 Threat Reduction Science and Support programs in fiscal year 2007 that were supported by NNSA. Of these 12 programs, 9 had budgets in fiscal year 2007 that exceeded $1 million each. Information about these programs is in the table below.

In addition to the individual named above, James Noel, Assistant Director; Nabajyoti Barkakati; Allison Bawden; Omari Norman; Rachael Schacherer; Rebecca Shea; Carol Herrnstadt Shulman; and Greg Wilshusen made key contributions to this report.

In 2006, a Los Alamos National Laboratory (LANL) contract employee unlawfully removed classified information from the laboratory.
This was the latest in a series of high-profile security incidents at LANL spanning almost a decade. LANL conducts research on nuclear weapons and other national security areas for the National Nuclear Security Administration (NNSA). GAO was asked to (1) identify LANL's major programs and activities and how much they rely on classified resources; (2) identify initiatives LANL is taking to reduce and consolidate its classified resources and physical footprint and the extent to which these initiatives address earlier security concerns; and (3) determine whether its new management approaches will sustain security improvements over the long term. To carry out its work, GAO analyzed LANL data; reviewed policies, plans, and budgets; and interviewed officials.

With fiscal year 2007 budget authority of about $2.7 billion, LANL conducts work on over 175 programs that can be grouped into three major program categories--Nuclear Weapons Science, Threat Reduction Science and Support, and Fundamental Science and Energy--and two support program categories--Environmental Programs and Safeguards and Security. Respectively, LANL's major programs serve to ensure the safety, performance, and reliability of the U.S. nuclear deterrent; support nonproliferation and counterproliferation efforts; and address energy security and other emerging national security challenges. LANL's Nuclear Weapons Science programs are the primary users of the facilities housing classified resources. For example, the Nuclear Weapons Science programs are the primary users of 14 facilities that store special nuclear material, while LANL's other major programs are the primary users of only 7 such facilities. LANL has over two dozen initiatives under way that are principally aimed at reducing, consolidating, and better protecting classified resources, as well as reducing the physical footprint of the laboratory by closing unneeded facilities.
While many of these initiatives address security concerns identified through past external evaluations--such as efforts to consolidate storage of classified documents and media into fewer secure facilities and to destroy unneeded classified nuclear weapon parts--significant security problems at LANL have received insufficient attention. Specifically, LANL has not implemented complete security solutions to address either classified parts storage in unapproved storage containers or weaknesses in its process for ensuring that actions taken to correct security deficiencies are completed. LANL intends to use three management approaches to sustain over the long term the security improvements it has achieved to this point: (1) undertake management actions required of LANL under the Compliance Order issued by the Secretary of Energy as a result of the 2006 security incident, (2) develop a Contractor Assurance System to measure and improve LANL's performance and management, and (3) implement annual performance evaluation plans NNSA uses to measure LANL's performance and determine a contract award fee. These approaches contain weaknesses that raise doubts about their ability to sustain security improvements over the long term. Specifically, the actions LANL has proposed to take to meet the terms of the Compliance Order are only short-term--with completion planned for December 2008. Further, according to LANL officials, the Contractor Assurance System is not fully deployed and the measures it includes may not be fully effective. Finally, the annual performance evaluation plans do not sufficiently reward improving long-term security program effectiveness.
The Bureau of Indian Affairs (BIA) and the Federal Highway Administration (FHWA), through FHWA's Office of Federal Lands Highway, jointly administer the Tribal Transportation Program (TTP) to address the transportation needs of tribes. The TTP is funded through the highway account of the Highway Trust Fund and is designed to address eligible transportation-related activities on tribal lands. Activities eligible for program funding include planning, design, construction, and maintenance of roads listed in the National Tribal Transportation Facility Inventory (NTTFI). Program funding is distributed to tribes by formula after "set-asides"--funding amounts that the Secretary of Transportation may or must deduct from the funding for various purposes--are determined. Program funds can also be used for the state and local matching share of apportioned federal-aid highway funds. Tribes may select from various federal contracts and agreements to implement their transportation programs. BIA maintains the NTTFI data system, which includes transportation facilities--existing and proposed--on Indian reservations and within tribal communities and all public roads on tribal lands. The purpose of the BIA Road Maintenance Program (RMP) is to preserve, repair, and restore the BIA system of bridges and roadways and to ensure that TTP-eligible highway structures are maintained. The RMP is designed to address the maintenance needs of roads owned by the BIA. RMP activities include routine and emergency road maintenance, bridge maintenance, and snow and ice removal, among other things. Road maintenance does not include new construction, improvement, or reconstruction. BIA has 12 regions, two of which do not have any BIA roads--the Alaska and Eastern Oklahoma BIA Regions. The BIA Division of Transportation operates and maintains the BIA road system through the remaining 10 regional offices. BIA roads--which are also in the NTTFI--are open to the public and are often major access corridors for tribal communities and the public. The road system consists of more than 930 BIA-owned bridges, one ferry system, and approximately 29,000 miles of proposed and existing roads.
About 75 percent of the existing roads are not paved. Five of the 10 BIA regions that have BIA roads—the Western, Navajo, Southwestern, Northwestern, and Rocky Mountain BIA Regions—have about 80 percent of the total BIA road miles. About 550,000 Indian students are enrolled in public elementary and secondary (kindergarten to grade 12) schools in the United States, not counting BIE schools. In addition, BIE funds 185 schools serving about 41,000 students living on or near tribal lands. (See fig. 1.) BIE operates about one-third of its schools directly, and tribes operate the other two-thirds, mostly through federal grants. Unlike public schools, BIE schools receive nearly all of their funding from the federal government, including about $50 million annually to transport students. We recently placed federal programs serving tribes, including BIE’s administration of education programs, on our High-Risk Series. On tribal lands, Indian elementary and secondary students generally attend either public or BIE schools. The majority of Indian students on tribal lands are enrolled in public school districts. In some cases, students may have a choice to attend either public or BIE schools, and they do not necessarily enroll in the school closest to their home. On certain tribal lands, there may be only one school. Data fields pertaining to road inventory in BIA’s NTTFI and DMR databases are useful for identifying roads eligible for federal tribal funding. However, we found that data fields pertaining to the description and condition of roads in the NTTFI are not complete, accurate, or consistently collected. As a result, road-description and condition data may lack the accuracy needed for reporting and agency oversight efforts, calling into question the usefulness of maintaining these NTTFI data fields. Similarly, we found that the DMR system, which BIA uses to report information on maintenance of BIA-owned roads, contains data that are not accurate.
These data issues compromise FHWA’s and BIA’s ability to oversee the TTP and RMP, which fund roads on tribal lands, including maintaining and improving the federally owned roads for which BIA is responsible. We found that NTTFI inventory data—such as road location, length, and ownership—are reasonably complete and accurate, and therefore useful, for identifying roads eligible for TTP funding. This assessment is based on our electronic testing and review of BIA’s process for entering new data for these fields. For example, we found that inventory data were complete, in that fields associated with roads in the inventory were populated and within expected ranges. In addition, controls are in place to ensure accuracy: for example, when new road segments are proposed for TTP eligibility or are updated by tribes, BIA reviews those submissions, including road survey information, to verify the road before it is made official in the system. The NTTFI road inventory identifies about 161,000 miles of existing and proposed roads on tribal lands that are eligible for TTP funding. The inventory spans 12 BIA regions and includes roads of various surface types and owners (see app. II). According to our analysis, BIA owns 20 percent (29,456 miles) of the existing road miles on tribal lands and the tribes own almost 12 percent (17,029 miles), leaving about 68 percent (100,796 miles) of the existing road miles under the control of state, local, and other entities. (See fig. 2.) In contrast, our electronic testing of NTTFI road-description and condition data, such as surface type, surface condition, and daily traffic count, found missing, inaccurate, and out-of-date entries. Despite these issues, FHWA—the agency responsible for the TTP budget—uses the NTTFI data for reporting and oversight purposes. For example, FHWA uses these data to report on the condition and use of tribal roads in its performance reports and annual budget justifications.
In addition, BIA uses these data to generate information for its internal use, such as estimating construction costs to improve TTP roads. BIA officials said that these data were originally included in NTTFI to support TTP-funding allocations but are no longer used for this purpose. Nevertheless, BIA continues to collect these data fields from tribes and maintain existing data on these fields in the NTTFI. Federal standards for internal control state that to achieve agency objectives, management should (1) design information systems and related control activities, and continue to evaluate those activities for relevance and effectiveness, and (2) use quality information. The data quality limitations we identified in our electronic testing, together with changes in program requirements, raise questions about the continued need to collect these road-description and condition data, because they are of limited use for reporting and oversight efforts. Several factors, described below, have affected the quality and usefulness of road-description and condition data: (1) changes in the role these data fields play in funding decisions, (2) lack of clarity in BIA’s guidance to tribes for reporting these data fields, and (3) limited data-monitoring activities. According to BIA officials, road-description and condition data were originally collected to support TTP-funding allocations, but these data fields are no longer used for that purpose. This is in contrast to inventory data, described above, which continue to be used to identify roads eligible for TTP funding. Specifically, prior to 2012, road-description and condition data fields were used in the funding formula to determine the distribution of tribal-funding shares. When road-description and condition data were used for funding purposes, missing, out-of-date, or erroneous data could pose a risk to funding decisions.
Road-description and condition data collected after 2012 are no longer needed for this purpose, thus eliminating a key incentive for tribes—which are responsible for entering the data—to ensure the data are complete, accurate, and up-to-date. Federal standards for internal control state that management should design its information systems and related control activities to achieve the entity’s objectives and continue to evaluate those activities for relevance and effectiveness. Although BIA officials acknowledged that changes in a regulation affecting how the data are used have contributed to the problem of outdated and unreliable data, they have not made changes to NTTFI data collection since the data’s use in the funding formula was discontinued in 2012. BIA officials also noted that while they generally do not use NTTFI road-description and condition data for system-wide reporting, they do make this information available to FHWA, which has used it to report on road condition in its annual budget justification and its Conditions and Performance Report to Congress. While NTTFI road-description data are relevant for this purpose, it is unclear how useful the current data are, given the results of our electronic testing. Collecting and maintaining road-description and condition data involves both tribal and BIA resources; however, until BIA can clearly define a relevant purpose for collecting these data, it is difficult to justify the continued collection of data that are not current, complete, or accurate. BIA’s guidance to tribes on how to “code” the data when entering them into NTTFI is unclear. This can result in inconsistent collection and outdated data, both of which can lead to inaccuracies when these fields are used for budget justification and performance reporting.
For example, required NTTFI data fields pertaining to traffic counts (average daily traffic on major arterial roads) and surface condition (surface condition index) are outdated and may not be comparable across tribes. BIA’s guidance does not require data to be updated on a routine basis, and condition data are not required to be collected in the same manner by all tribes. In particular:

Average daily traffic (ADT): ADT is a measurement of the amount of traffic using the road and, among other things, is intended to be used to (1) determine the design standards to which a road should be built (such as whether the road surface should be gravel or paved); (2) manage road maintenance (such as determining which roads to maintain and what treatments to use); and (3) report on the number of vehicle miles being traveled (such as for analyzing road usage trends). Research and guidance on industry practices indicate that ADT on major roads is typically counted every 2 to 6 years. We found that BIA does not provide direction in its coding guide on how often to take traffic counts, and most of NTTFI’s traffic counts for major arterial roads are between 6 and 12 years old. In particular, of the existing major arterial road sections in NTTFI—totaling 1,872 miles—none have had their ADT counted in the last 3 years, 0.3 percent (6 miles) have been counted in the last 4 years, 3.8 percent (72 miles) have been counted in the last 6 years, and 81 percent (1,517 miles) have been counted in the last 12 years. As a result, ADT information contained in NTTFI likely does not reflect current road usage and cannot reliably inform reporting or decisions related to design standards or maintenance management.

Surface condition index (SCI): SCI is a measurement of road surface condition that can be used to identify and prioritize maintenance needs. According to industry guidance, road conditions are typically evaluated every 1 to 4 years because conditions deteriorate over time. There is no requirement in BIA’s coding guide specifying how often SCI should be updated, and we found that the SCI for about 85 percent (81,080 of the 95,510 miles) of existing paved and gravel roads (those which are required to be evaluated for SCI) had not been updated in at least the last 4 years, and almost 50 percent had not been updated in at least the last 8 years. Further, because the coding guide allows tribes to use any nationally acceptable method to rate a road, data may not be collected consistently by those evaluating the roads. As a result of outdated and inconsistently collected SCI data, NTTFI lacks reliability for use in prioritizing TTP projects and making the most efficient use of resources. Further, FHWA’s use of SCI data may contribute to inaccuracy in its reporting on the overall condition of the system and whether it is improving or worsening.

In addition to these specific limitations, the BIA coding guide—which provides guidance for those collecting and inputting data into the NTTFI—was last updated as a draft released in 2007 and contains outdated references. For example, the guide refers to the Indian Reservation Roads (IRR) Program—the program prior to the TTP. Moreover, in 2008, FHWA issued a review of the then-IRR Program. FHWA’s review, among other things, found that the coding guide had conflicting, confusing, and ambiguous instructions or definitions. In its review, FHWA recommended that BIA revise the guide to remove subjective interpretations and ambiguous directions to improve data consistency and accuracy. BIA has not updated the coding guide to address FHWA’s review and recommendations, but BIA officials stated that they have taken some steps to improve the data. BIA officials acknowledged that outdated and inaccurate data exist within the NTTFI but noted that it is the tribes that are responsible for entering this information.
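The staleness analysis above—flagging road sections whose last traffic count exceeds the industry update interval—can be sketched in a few lines. The records and the field names here (such as "adt_count_year") are illustrative assumptions, not the actual NTTFI schema; the three sections mirror the mileage bands reported above.

```python
# Illustrative road-section records; "adt_count_year" is an assumed field
# name, not part of the real NTTFI schema.
sections = [
    {"route": "A-10", "miles": 6.0, "adt_count_year": 2013},
    {"route": "B-22", "miles": 72.0, "adt_count_year": 2011},
    {"route": "C-05", "miles": 1517.0, "adt_count_year": 2004},
]

def stale_miles(sections, current_year, max_age_years):
    """Total miles whose last traffic count is older than the allowed age."""
    return sum(s["miles"] for s in sections
               if current_year - s["adt_count_year"] > max_age_years)

total_miles = sum(s["miles"] for s in sections)
# 6 years is the upper bound of the 2-to-6-year industry counting practice.
stale = stale_miles(sections, 2016, 6)
print(f"{stale / total_miles:.0%} of {total_miles:.0f} miles exceed the 6-year interval")
```

The same check works for SCI staleness by substituting a condition-rating year for the count year; the point is that a one-pass scan over the inventory is enough to quantify how much of the system's data is out of date.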
BIA officials noted that tribes may have less incentive to update data fields such as ADT, SCI, and other road-description and condition data because, as noted above, this information is no longer used as a factor in determining the allocation of TTP funds to tribes. Federal standards for internal control state that management should use quality information. Moreover, these standards recommend that management design control activities, such as providing clear guidance, to achieve their objectives. If BIA determines that it needs to collect these data to achieve the agency’s objectives, it will not have assurance that the tribes can provide quality information on road use and surface condition until it provides clearer guidance to them. While the NTTFI has some automated data entry checks for road-description and condition information, BIA does not monitor these fields for missing or conflicting data, resulting in persistently incomplete and inaccurate data. For example, we found that road-description and condition data associated with about 14 percent (22,000 miles) of existing and proposed road miles have not been updated since they were imported into NTTFI in 2004. In our analysis we identified conflicting data, indicating inaccurate information, as well as missing data for required fields. We found, for example: About 6 percent (8,630 miles) of entries pertaining to the 147,281 miles of existing roads are missing their required “functional class” code, which is used to determine the construction standard for the road, such as identifying the appropriate pavement type. Without complete functional class information on existing roads, it is not possible to know if a road is adequately constructed or needs to be improved when making funding estimates. Without this information on proposed roads, planning estimates of system-wide funding to construct these roads may be in error.
Approximately 6 percent (9,553 miles) of entries pertaining to all roads have the “construction need” coded as “proposed,” but the required “existing surface type” is blank (i.e., not coded as “proposed”), making it unclear whether these roads are existing or proposed. Also, about 70 percent (9,553 miles) of the 13,380 miles of proposed roads are missing their required “existing surface type” code, which should show them to be “proposed.” Accurate information in these fields helps ensure that agencies clearly know which roads are proposed and which are existing—knowledge that is essential for planning maintenance and construction and developing the costs for those plans. BIA officials told us that they are aware of these data errors, which they believe stem primarily from data that were imported into NTTFI from the previous inventory system in 2004. These officials also noted that there is no systematic reporting function to identify these errors and generate an error report to support correction efforts. While there are some automated checks on the system that tribes use to enter and update data in the NTTFI, they do not apply to data already in the system. With respect to data entry checks, for example, certain fields accept only a specific range of values, and some fields require documentation that BIA must review before official inclusion into the NTTFI. These checks are intended to eliminate the possibility of entering incorrectly coded data or including erroneous data. No similar error reporting or checks for compliance with expected values are applied to existing data. According to BIA officials, this is because they do not own the tribal data that are in the system they manage and therefore cannot make changes to the data once they have been accepted into the system. Tribes are required to update their data annually. Nevertheless, BIA has a stewardship responsibility to ensure the NTTFI’s data accuracy.
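An automated error report of the kind described above—one that scans records already in the system rather than only checking new entries—could apply the same two consistency rules. This sketch uses hypothetical field names and simplified rules; the real NTTFI coding scheme is more detailed.

```python
# Hypothetical field names and simplified rules, for illustration only.
def find_errors(record):
    """Return a list of consistency problems for one road-section record."""
    errors = []
    if not record.get("functional_class"):
        errors.append("missing required functional class code")
    if (record.get("construction_need") == "proposed"
            and record.get("existing_surface_type") != "proposed"):
        errors.append("'construction need' is proposed but 'existing surface type' is not")
    return errors

records = [
    {"id": 101, "functional_class": "4", "construction_need": "existing",
     "existing_surface_type": "gravel"},
    {"id": 102, "functional_class": "", "construction_need": "proposed",
     "existing_surface_type": ""},
]

for rec in records:
    for problem in find_errors(rec):
        print(f"section {rec['id']}: {problem}")
```

Running such checks periodically over the full inventory, rather than only at data entry, would surface the legacy errors imported in 2004 as a correctable worklist.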
Federal internal control standards state that information systems and related control activities, such as monitoring to identify missing or erroneous data, should be designed to facilitate timely and targeted corrections. Without complete and accurate road-description and condition data, BIA and tribes will be limited in their ability to assess the needs of the entire road system within their scope and identify TTP priorities. Having more complete and accurate data would also better support FHWA’s budget-justification and performance reporting. New TTP regulations recently went into effect that may mitigate some of the data errors we identified. These regulations require the tribes to, among other things, submit specified documentation by November 7, 2017, to BIA and FHWA for approval for all proposed roads that currently exist in the NTTFI in order for those roads to remain in the inventory. According to BIA officials, this review and verification may correct some of the problems with the proposed road data in NTTFI. BIA officials told us that they are in the process of developing the details of how the review will work and what options they may have to ensure the proposed road data are accurate, including possibly removing proposed road sections containing data errors from the inventory. Separate from the NTTFI, BIA maintains the DMR system, which contains the inventory of BIA-owned roads—by location, length, and route—eligible for maintenance funded by the RMP. DMR records consist of data on individual road sections, with fields describing each section, such as surface type, level of service, and maintenance needed, performed, and deferred. We found the DMR data to be useful for identifying roads in the BIA inventory. Table 1 shows the distribution of these roads across the 10 BIA regions in which they are located; two BIA regions have no BIA roads.
While the BIA roads are also in the NTTFI, the DMR database includes additional data, such as deferred maintenance, that are not in the NTTFI. In managing the RMP, BIA sets goals and reports on its performance; however, we found that some data in the DMR system—specifically, data on the current level of service (overall condition of the road), cost of needed maintenance, and amount of maintenance performed—may not be sufficiently accurate for BIA’s use in this reporting. This reporting includes assessing the amount of deferred maintenance for the BIA road system and reporting how BIA has met its performance targets for the RMP. BIA uses deferred maintenance to (1) quantify the amount of maintenance needed (in dollars) on BIA roads in Interior’s annual budget justification and (2) report on maintenance performance targets to the Indian Affairs Performance Management System, information that is found in BIA’s annual budget justification and performance information. BIA uses level of service data from DMR to calculate and report the percentage of miles of BIA roads in acceptable condition in the performance report. If the level of service data are in error, then the resulting performance reporting will also be inaccurate. As noted previously, according to federal internal control standards, management should use quality information to make informed decisions and to communicate both internally and externally. Controls to ensure that quality information is used include obtaining relevant data (that are reasonably free from error) from reliable sources, obtaining that information on a timely basis, and processing those data into quality information that faithfully represents what it purports to represent. To determine the amount of deferred maintenance on BIA roads, BIA first calculates the maintenance needed by multiplying a unit cost of maintenance per mile, based on a road section’s level of service, by the length of the road section.
However, we found that two of the factors—level of service and unit cost of maintenance—that BIA uses in its maintenance cost calculations may be unreliable, resulting in inaccurate estimates of maintenance needs. In particular: Level of service (LOS): LOS is a qualitative road condition rating (on a 1 to 5 scale) based on road surface, drainage, pavement marking, and other characteristics that change over time. BIA officials stated that every BIA road is evaluated on an annual basis. However, it is not possible to determine when each road section’s LOS was last updated because the DMR system does not record this information. Without knowing when the LOS information was last updated, BIA does not have reasonable assurance that LOS data represent actual road conditions or that BIA is meeting its performance measures for the RMP. Unit cost of maintenance: Unit maintenance costs are used to identify the estimated annual cost of maintaining a particular road section. BIA develops unit costs per mile based on a road’s geographic location, surface type, and level of service to estimate the amount of maintenance needed for the entire road section. However, BIA officials told us that they had no formal documentation showing how the unit cost estimates were prepared. According to leading practices for cost estimation, one key step for ensuring high-quality cost estimates is to document the estimate, including auditable and traceable data sources for each cost element. Because BIA does not document unit cost estimates, it cannot determine the reliability of the estimates’ sources or the quality of the maintenance-needs estimates based on their use. After determining the amount of maintenance needed, BIA subtracts the amount of maintenance performed, as reported by BIA and tribes in the DMR, from the needed maintenance to determine the amount of deferred maintenance.
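The deferred-maintenance arithmetic just described—unit cost per mile (keyed by surface type and level of service) times section length, minus maintenance reported as performed—can be illustrated as follows. The unit costs and section data are invented for illustration; as noted above, BIA's actual unit-cost figures are undocumented.

```python
# Assumed dollars-per-mile unit costs keyed by (surface type, level of
# service); these values are illustrative, not BIA's actual figures.
UNIT_COST_PER_MILE = {
    ("paved", 3): 4_000,
    ("gravel", 2): 2_500,
}

def deferred_maintenance(sections):
    """Needed maintenance (unit cost x miles) minus reported performed work."""
    total = 0.0
    for s in sections:
        needed = UNIT_COST_PER_MILE[(s["surface"], s["los"])] * s["miles"]
        total += needed - s["performed_dollars"]
    return total

sections = [
    {"surface": "paved", "los": 3, "miles": 10.0, "performed_dollars": 15_000},
    # A section whose tribe did not report performed work overstates deferral:
    {"surface": "gravel", "los": 2, "miles": 20.0, "performed_dollars": 0},
]
print(f"deferred maintenance: ${deferred_maintenance(sections):,.0f}")
```

The second record shows why unreported performed maintenance inflates the deferred-maintenance total: any work done but not entered in DMR stays in the "needed" column.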
However, BIA officials stated that there is under-reporting of performed maintenance by some tribes. In particular, BIA officials noted that approximately 172 tribes have an agreement with BIA to administer the RMP and maintain BIA roads in their area, but only about 40 percent of those tribes report on their road maintenance activities, resulting in incomplete data on maintenance performed in the DMR system. BIA officials stated that they continue their educational efforts to stress the value of collecting and reporting performed maintenance. However, these officials told us that reporting maintenance performed at the road section level can be difficult because the maintenance work is not always performed at one specific road section, and it is challenging to allocate maintenance activities over multiple sections. BIA officials told us that they are considering alternative means of reporting maintenance performed to increase the completeness of this measure; however, they do not have a specific plan in place to address this issue at this time. Because BIA’s estimates of needed maintenance may be inaccurate and tribes’ reporting on performed maintenance is incomplete, calculations of deferred maintenance—the difference between estimated maintenance needed and actual maintenance performed—that support BIA’s budget submission and performance reporting may be similarly inaccurate. This lack of quality information can preclude Congress and agency officials from having a clear understanding of BIA road conditions and from making informed decisions about RMP priorities and funding levels. Based on our review of various planning and funding documents, as well as interviews with selected stakeholders—including federal, state, local, and tribal officials—we identified funding constraints, overlapping jurisdictions, and adverse weather as some of the challenges faced in improving and maintaining roads on tribal lands.
However, we found examples of collaboration among different stakeholders to improve coordination and resource sharing that helped mitigate some of these challenges. TTP annual appropriations fluctuated between about $424 million and $441 million from fiscal years 2013 to 2016 and were less than FHWA’s budget request each year. (See fig. 3.) Federal, tribal, and other stakeholders we interviewed noted that constrained funding has limited the ability of tribes to improve and maintain roads on tribal lands and contributed to the deterioration of these roads. In addition, current funding levels have led to less frequent maintenance and improvement activities than desired, according to some tribal officials. For example, a transportation official from a Great Plains region tribe said that annual TTP funding allows for resurfacing the reservation’s 54-mile paved road network only every 73 years, when the network needs to be resurfaced at least every 20 years to keep the roads in acceptable condition. RMP funding has also remained relatively flat, at about $25 million per year from fiscal years 2009 through 2015, while the number of BIA road miles eligible for these funds increased over this time period, from 26,868 to 28,859 miles. Over 85 percent of these BIA road miles are located on the lands of 59 tribes within six BIA regions. According to BIA and tribal transportation officials, RMP funding levels have not kept pace with growing road maintenance requirements due to the addition of new roads, the need to address existing roads’ maintenance backlogs, and emergency operational requirements. For example, according to BIA and tribal officials, as much as 90 percent of some tribes’ annual RMP funds can be expended during the winter for snow and ice removal, leaving little for other road maintenance activities the remainder of the year.
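The resurfacing-cycle figures cited by the Great Plains tribal official imply a sizable gap; a back-of-the-envelope check using only the numbers from that example (a 54-mile paved network, an actual 73-year cycle, and a 20-year target) runs as follows.

```python
# Figures from the tribal official's example; the percentage is derived.
network_miles = 54
actual_cycle_years = 73   # cycle achievable at current funding levels
target_cycle_years = 20   # cycle needed to keep roads in acceptable condition

funded_miles_per_year = network_miles / actual_cycle_years   # roughly 0.74
needed_miles_per_year = network_miles / target_cycle_years   # 2.7

coverage = funded_miles_per_year / needed_miles_per_year
print(f"current funding covers about {coverage:.0%} of needed annual resurfacing")
```

In other words, by this official's own figures, funding supports roughly a quarter of the annual resurfacing mileage needed to keep the network in acceptable condition.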
Also, according to these officials, the remoteness, rugged environment, and unavailability of materials on some tribal lands lead to comparatively higher costs for maintaining roads located in these areas, which further exacerbate funding constraints. In addition, as roads fall into disrepair because maintenance is delayed or cannot be funded, they become more expensive to maintain; deferring maintenance may result in greater future reconstruction expenditures. Most state and local transportation officials we spoke with said their agencies also face funding constraints that inhibit their road-improvement and maintenance efforts on tribal lands. Moreover, the amount of road maintenance funding expended on tribal lands has generally been less than the amount expended on similar roads in neighboring jurisdictions, according to BIA and tribal officials. For example, according to a 2008 analysis completed by a BIA Navajo Region official, counties bordering the Navajo Nation’s reservation receive about two times more road maintenance funding per mile to maintain county-owned roads than BIA receives to maintain its roads on the Navajo Nation. Over the past several years, BIA and tribal officials have tried to address road maintenance funding concerns by requesting additional RMP funding. In March 2016, Interior and tribal officials created a workgroup to analyze, record, and develop data for road maintenance budget needs. This group recently held discussions with BIA officials about establishing a new budget category for roads and transportation so that requirements such as road maintenance receive greater visibility in the budget. At the time of our review, discussions pertaining to this initiative were ongoing. Overlapping jurisdictions on tribal roads may create confusion and access issues that can delay or prevent road maintenance and improvement activities.
Tribal lands may be owned by a tribe, an individual Indian, or a non-Indian. This varied ownership creates interspersed parcels, or a checkerboard pattern of ownership, on some tribal lands. According to federal, state, county, and tribal transportation officials, documentation of road ownership and rights-of-way does not always exist or is not always known, which further complicates stakeholders’ ability to conduct road maintenance and improvement activities. Also, different, changing, or uncertain management responsibilities for roads on tribal lands that are owned by different stakeholders can make collaboration challenging, as decision-making on road priorities and funding sources is also dispersed. For example, two adjacent school districts within the Navajo Nation use many of the same roads for their school bus routes. These roads not only have multiple owners but also have different types of road surfaces—paved, gravel, and earth—which require different types of maintenance. (See fig. 4.) Because of differences in priorities among road owners, the amount of maintenance performed on the roads varies, leading to differences in road condition and potential impediments to transportation. Other challenges stem from changing roles and responsibilities, liability concerns, and differing approaches to meeting regulatory requirements. For example: Some challenges can occur as roles and responsibilities shift from federal to tribal control. For example, in 2013, the Navajo Nation signed a TTP Agreement with FHWA that changed the roles and responsibilities of both the BIA Navajo Region and the Navajo Nation. Under the agreement, the Navajo Nation assumed responsibility for conducting TTP work that used to be managed by BIA. However, according to federal and tribal transportation officials, it will take time for the Navajo Nation to build its capacity to assume the roles and responsibilities previously performed by BIA.
According to a BIA Navajo Region official, the region has been adjusting its capacity as its functions and duties diminish over time. As of December 2016, the BIA Navajo Region and the Navajo Nation were still in the process of transitioning operational roles and responsibilities, according to BIA officials. According to some federal and tribal transportation officials, tribal councils’ preferences or officials’ decisions may not always align with previously established plans and priorities. For example, in April 2016, an Oglala Sioux tribal official halted the development of a new gravel quarry location that was believed to be a sacred site. As a result, tribal transportation officials used the next nearest quarry, approximately 50 miles away, thus increasing road maintenance costs. Liability issues can halt or delay maintenance work. For example, the La Jolla Band of Luiseño Indians is located in a mountainous region of northern San Diego County, California, where rock falls are prone to occur. When rocks fall on a remote section of a state highway that runs through tribal lands, according to a tribal transportation official, the tribe must wait for state authorities to respond, even though the tribe has equipment that can remove the fallen rocks. According to this official, the state prohibits the tribe from conducting emergency maintenance work to avoid potential liability issues. As a result, local traffic can be blocked for extended periods while waiting for state workers to respond. Differing approaches to compliance with the National Environmental Policy Act can affect delivery of maintenance and improvement projects. For example, according to a 2013 Department of Transportation Inspector General report, existing agreements between FHWA and BIA do not reconcile the two agencies’ different processes and requirements for National Environmental Policy Act approvals on TTP projects.
According to some federal and tribal officials we spoke to, differences exist, in particular, in the process for acquiring a right-of-way for project construction. FHWA grants categorical exclusions in certain cases where tribes need to establish or amend an existing right-of-way, while BIA requires tribes to prepare an environmental assessment for these cases, which is resource-intensive, according to federal and tribal officials. According to tribal officials, BIA retains right-of-way approval authority for projects on land it owns or holds in trust for tribes, and completing TTP projects at these locations results in additional time and cost. In November 2016, a final rulemaking included clarification that is expected to minimize or eliminate conflicts that involve differences in federal processes. The final rule specifies that FHWA’s categorical exclusions will apply to all qualifying TTP projects involving construction or maintenance of roads, regardless of whether BIA or FHWA is responsible for overseeing the tribe’s TTP. According to various transportation and education officials we met with, adverse weather can exacerbate maintenance challenges on roads located on tribal lands. While adverse weather—such as drought, heavy rain, high winds, and snow—can negatively affect all areas, communities that are located in more geographically dispersed areas and have more variations in land topography along a vast road network can experience particularly difficult challenges. Further, these officials said that these challenges can be more severe on larger reservations that have more earth and gravel roads.
According to federal and tribal transportation and education officials we spoke with: prolonged droughts can result in nearly impassable roads due to sand dunes, rocky surfaces, and deep holes that non-4-wheel drive vehicles cannot traverse; heavy rains can lead to flash flooding and washing out of earth roads, cutting off communities from important access points; high winds can lead to dust storms causing traffic accidents and blockage of the only accessible road; and snowfall can lead to icy and muddy road conditions, creating deep ruts along a road and preventing access by rescue and other vehicles. According to federal and tribal transportation officials, after most adverse weather events, road maintenance workers are unable to quickly deliver assistance to some remote locations because unpaved roads may be impassable. In addition, workers are often unable to conduct necessary maintenance activities during and immediately after some weather events because they must wait to use the equipment until the adverse weather ends and the ground dries. Also, although federal and tribal transportation officials may have maintenance equipment located at different maintenance yards or prepositioned in strategic locations around tribal lands to address normal and emergency road maintenance needs, they said that the remote distances may still prevent immediate responses. These situations can isolate some people within their communities and away from essential services until emergency road maintenance can be conducted, according to officials. Compounding this challenge, officials said, is the lack of or limited access to telecommunications on tribal lands, limitations that can prevent tribal residents and public users from even communicating routine and emergency maintenance situations while they are in remote tribal lands.
According to federal and tribal officials we spoke to, tribes that have collaborated in partnerships with federal, state, and local governments to complete road maintenance and improvement projects had overcome some funding, material, labor, and equipment challenges. Based on our site visits and interviews with various transportation officials, we identified selected examples of federal, state, local, and tribal collaboration. (See app. IV, table 5.) Below are three examples of larger coordinated, multi-partner road improvement and maintenance projects that we identified. In 2013, FHWA, BIA, the Arizona Department of Transportation, and the Navajo Nation partnered on a $35 million emergency relief project to pave about 27 miles of BIA Route 20, which was an earth road, during the closure of a 23-mile stretch of U.S. Highway 89 after a landslide damaged a portion of the highway. (See fig. 5.) The highway closure caused the Arizona Department of Transportation to set up a detour affecting travel to Page, Arizona, from points south. The detour (along Arizona State Highway 98 and U.S. Highway 160) affected hundreds of Navajo school students and was 45 miles longer than the direct route into Page along U.S. Highway 89. According to federal, state, and tribal officials, through effective coordination, BIA Route 20 was paved in about three months and completed prior to the start of the school year so that students could benefit from a shorter drive on a better road surface. In 2014, the Navajo Nation Division of Transportation and Coconino County (Arizona) established a matching fund program whereby the county and the Navajo Nation each contributed $200,000 to maintain school bus routes in the area, among other projects. (See fig. 6.) The goal of the program was to implement minor drainage and surfacing projects on the roads maintained by the county.
According to Coconino County officials, in addition to the increased road maintenance, the plan for this funding was to improve school bus route conditions, reduce road maintenance costs, and increase safety. Transportation officials also said the partnering enabled the Navajo Nation and the county to use maintenance funds more efficiently and focus on blading roads versus having to constantly repair roads damaged by winter and summer storm events. While partnerships were effective in the two cases described above, collaboration among stakeholders can be difficult and achieving beneficial outcomes can take time. For example, in the third case, the Hopi Tribe, Navajo Nation, BIA Hopi Agency, and Navajo County (Arizona) have been working together since 2009 to obtain funding for the Hopi 60 (Low Mountain Road) project. According to transportation officials, the road construction project would pave about 14 miles of BIA Route 60, of which about 11 miles are located on Hopi tribal lands and about 3 miles are located on the Navajo Nation. This BIA route is an earth road that is the primary school-bus route for multiple school districts. According to transportation officials, during adverse weather conditions, BIA Route 60 becomes impassable and causes drivers on Hopi lands to take a 106-mile detour along Arizona State Highway 264. These stakeholders partnered to submit federal discretionary grant applications in 2009 to obtain about $22 million and in 2015 to obtain about $29 million needed for this project but were not successful. Stakeholders continued to pursue funding and were recently awarded $1.5 million from the State of Arizona. According to county transportation officials, stakeholders plan to submit another federal discretionary grant application in 2017 to secure funding for the remainder of the project's cost.
Nationwide, Indian elementary and secondary school students are absent more than non-Indian students, according to our analysis of national data from two Department of Education (Education) surveys. Education administers one survey to all public school districts but not BIE schools, and the other survey goes to a generalizable sample of schools and students, including BIE schools and students. We found that Indian students’ higher rates of absences are evident at public schools serving mostly Indian students and at BIE schools, which would likely be on or near tribal lands. In a census of public school districts and schools taken during the school year 2013–14, the national chronic absence rate for Indian students was 23 percent per year as compared to the national average of 14 percent per year for non-Indian students, according to our analysis of one Education measurement of absenteeism. Education asked for the number of students in schools who were absent 15 or more days in the school year. Our analysis showed that this rate was higher at schools across the country where Indian students represented at least 90 percent of the students. In particular, we found that 28 percent of Indian students were absent 15 or more days at schools where most students were Indian, such as schools in districts we visited. According to a 2015 Education survey of students intended to measure academic achievement, Indian students in grade 8 self-reported being absent more than non-Indian students. (See fig. 7.) Likewise, this pattern applied to Indian and non-Indian students in grade 4. The survey asked students in grades 4 and 8 for the number of days they were absent in the last month. Grade 8 Indian students at BIE schools—which are generally located on reservations—at times reported being absent more than Indian students not at BIE schools. 
Specifically, the self-reported absences in grade 8 for “three or four days” in the last month and “more than ten days” in the last month were higher for BIE students, as compared to Indian students at other schools. In our literature review, we did not identify any studies on the role that road conditions have on student attendance in the United States, including for Indian students living on tribal lands. However, we found studies about developing countries that identified road conditions as one of several factors influencing student attendance. While these studies were not specifically about the United States or Indian students living on tribal lands, they indicate that poor road conditions can decrease school attendance and road improvements can increase attendance in certain contexts. For example, a 2010 study of Trinidad and Tobago found that road improvement increased student attendance by 16 to 18 percent, among other educational outcomes. In addition, a 2006 study of a program in Bangladesh to improve and maintain rural roads and markets reported that school participation, measured as the average percentage of school-age children in school, increased about 20 percent for boys and girls whose villages participated. A third study of rural Pakistan found that higher levels of community development were associated with significantly reduced likelihood of dropout in certain scenarios; the level of development included seven indicators, such as two indicators of whether a community had paved roads. According to literature we reviewed, there are many factors connected with student attendance. The factors that may be connected with school attendance and absences in the United States and other countries generally fall into four categories: individual, family, school, and environmental or community. Literature we reviewed has identified numerous examples of factors in these categories. (See fig. 8.) 
Road conditions are an example of an environmental or community factor that may be connected with school attendance.

Attendance Rates and Earth Roads in One Navajo Nation School District

At one district on the Navajo Nation, attendance rates in school year 2015-16 were lower for certain students on a few particularly challenging bus routes on earth roads. These routes are altered and truncated during adverse weather. District data showed these students' attendance rate was about 91 percent, compared to the district's 95 percent overall attendance rate. This difference in attendance rate—some of which may relate to road conditions—would be equivalent to about seven additional days of absences, according to district officials.

Road conditions are one of the factors leading to absences for Indian youth on tribal lands, according to officials at all 10 local schools and districts we visited serving three tribes. Road conditions reflect the surface type and level of maintenance, among other things. On large reservations such as the three we visited, students may live far from school, and in many cases their residences and schools are only accessible by earth and gravel roads. At these 10 schools and districts, officials told us that adverse weather worsens road conditions on tribal lands and sometimes affects student attendance. Officials said that school-provided transportation—buses and sport-utility vehicles (SUVs)—is the most common means of student transportation. A few school and district officials said that certain students may not have alternatives to school-provided transportation to get to school, such as a ride from family, or that weather or road conditions may preclude students from getting to school on their own. Thus, when the school vehicle or the student cannot access the pick-up location due to road conditions, the student may miss part or all of the school day.
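The district's estimate above is a simple rate-to-days conversion, sketched below. The 180-day school year is an assumption for illustration; the report does not state the district's actual calendar length.

```python
# Back-of-the-envelope check of the district figures quoted above.
DISTRICT_RATE = 0.95   # district's overall attendance rate
ROUTE_RATE = 0.91      # rate for students on the challenging earth-road routes
SCHOOL_DAYS = 180      # assumed length of the school year (not from the report)

# A 4-percentage-point gap over a 180-day year works out to about
# 7 additional days of absences, matching district officials' estimate.
extra_absences = (DISTRICT_RATE - ROUTE_RATE) * SCHOOL_DAYS
print(round(extra_absences, 1))  # prints 7.2
```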
For example, at one school we visited, the principal noted that students who lived far from the bus route (at least 12 to 20 miles away) have at times missed school, as families said they could not reach the bus stop due to impassable roads with mud or snow. Additionally, occasional bus breakdowns, such as getting stuck in the mud, can affect student attendance, such as arriving late to school, according to officials in one district on the Navajo Nation. School and district officials also mentioned that school attendance was lower when they altered or halted school bus routes because of adverse weather conditions that compounded the already poor road conditions. Eight of 10 schools and districts we visited said that during adverse weather they sometimes kept schools open but altered or did not serve certain bus routes. School and district officials said that these changes to bus routes resulted in some students who lived along those routes missing school. When a bus route is truncated, students or their families often have to travel even farther than their regular bus stop to meet the bus, such as at a main road; this travel can affect attendance because, for example, some families have no way to reach the farther stops. On the Pine Ridge Reservation, one school superintendent told us that certain families who live in remote locations along earth roads do not have 4-wheel drive vehicles to reach the farther bus stop when roads become muddy or snow-covered. Student absences can also result when school officials decide not to serve a bus route on a particular day. At a school in the central part of the Pine Ridge Reservation, school officials said that the school did not serve certain routes a few times in a year due to weather and safety concerns, such as heavy snowfall and icy road conditions. School officials told us that students on these routes could not get to school, and the school recorded their absences as excused.
In addition to the three tribes we visited, officials we interviewed from BIE, Education, and other tribes told us that they heard similar concerns about challenging or impassable road conditions that affect student attendance. One tribal transportation expert said that the problem is particularly acute for tribes with larger reservations due to the longer distances that people must travel and typically poor road conditions. Officials at schools and districts we visited mentioned a few strategies they sometimes used to try to mitigate the challenging road conditions and promote students' access to school. However, at times, even these strategies did not allow students to get to school. For example, one school superintendent on the Pine Ridge Reservation in South Dakota noted that the school used SUVs on certain routes, but even its SUVs were unable to reach students due to the excessive snow or mud on earth roads for a total of about three to four days over the course of the school year. Additionally, as noted above, telecommunication challenges, such as limited or no internet access on tribal lands, affect the potential to use technology for virtual education. (See app. V for additional details on strategies used by officials of the three tribes we visited.) Guidance from the National Forum on Education Statistics, a body commissioned by Education, and other sources have stated that it is important for public school districts to collect data on reasons for student absences in order to understand those reasons and take actions to increase attendance. Education does not require school districts to collect particular data about reasons for absences, according to Education officials. Public school districts develop their own attendance systems, which may vary across districts, including reasons for absences.
Nonetheless, the education forum provided non-binding guidance in 2009 on how school districts should develop attendance systems and document reasons for absences. Among other things, the guidance stated the importance of a comprehensive and manageable classification of student attendance, including reasons for absences. It suggested a series of reasons for absences for states and districts to consider, including transportation issues. Data on reasons for absences would then be helpful to inform interventions to increase attendance. Similarly, guidance jointly issued in 2015 by four departments—Education, Health and Human Services, Housing and Urban Development, and Justice—emphasized the importance of collecting and using absence data to improve attendance for those students who miss many days of school, including understanding reasons for absences. Three of the 10 schools and districts we visited—one BIE school and two public school districts—collected data on the number of student absences related to road and weather conditions. According to officials at these locations, road conditions leading to student absences typically were accompanied by adverse weather, such as heavy rainfall, snowfall, or strong winds. The percentage of absences at these three locations due to adverse weather and road conditions ranged from a fraction of 1 percent to 4 percent, according to the data. However, because parents did not provide a reason for the absence in many cases, the actual percentage of absences due to roads and weather may be higher. The one BIE school that collected data on reasons for absences due to road and weather conditions decided on its own initiative to create a category for these absences. A school official said that weather-related absences generally were more likely to involve students who lived along earth or gravel roads.
For example, due to snow, buses may not be able to reach students living along certain earth or gravel roads, or families may not be able to bring students to the bus stop. The official noted it is important for the school to know why students are absent in general, and how often students are absent, specifically, due to road and weather conditions in order to understand the extent that these conditions affect students' ability to get to school. This information can help schools set priorities and target interventions depending on the extent of such absences. The other five BIE schools we visited did not collect data in a way that would capture absences due to road and weather conditions. Officials at two schools said that they recorded absences due to difficult road conditions as more general excused absences. For example, such absences were due to truncated bus routes or snowbound students who lived in remote areas accessible only by earth roads. At another school, officials did not seem aware of the ability to count and track a specific category of absences due to road and weather conditions on a school-wide basis. According to BIE officials we spoke to, some schools may not collect absence data for road and weather conditions due to various circumstances such as school staff turnover, competing priorities among school attendance staff, or limited emphasis from BIE to collect data on these reasons for absences. Further, BIE has not provided guidance to its schools regarding capturing reasons for absences related to roads and weather. Documentation for the system used to collect absence data states that each absence should have a reason entered by the school. However, BIE has not provided instructions or suggestions to the 185 schools it funds to consider including road and weather conditions in their attendance system. For example, it has not issued a sample list of reasons that schools can use or tailor for local circumstances.
According to BIE officials, BIE has not done so because it wants to give schools flexibility in deciding which reasons for absences to collect. However, BIE has not taken basic steps to facilitate optional data collection by schools that may be inclined to do so, such as those that are more affected by poor roads. For example, BIE's existing attendance system currently provides a list from which schools can select reasons, or schools can create other reasons on their own. Road and weather conditions are not included as reasons on the existing list, and thus a school would have to create these reasons as causes for absences. In its capacity to provide technical support to schools, BIE could provide guidance on collecting these data. Without such guidance, affected BIE schools as well as the Bureau itself will continue to lack insight into the effect of roads and weather on absences and the ability to target interventions accordingly. In addition, BIE and its schools do not have detailed information on this connection to identify patterns or trends or for discussion with federal, tribal, and other stakeholders, including on funding levels or road priorities. Road conditions, along with distances on large tribal lands and choices to enroll in farther schools, may contribute to increased transportation time and safety risks for students, and increased costs for schools and tribes. Officials from two schools expressed concern about the length of students' bus rides and the long school days for children, including young children in elementary school. For example, on one of our site visits, we followed an afternoon bus route in dry weather that covered about 30 miles on mostly earth roads to drop off about 30 students, including elementary school students. The route's duration was about 90 minutes. At times, the school bus drove about 5 miles per hour on the earth roads, such as when ascending inclines without guard rails or traveling on earth roads with large rocks or ruts.
The school principal said that the earth roads take more time to travel and lengthen students' time on the bus. At another district we visited, several routes were at least 100 miles one-way, according to a list of bus routes from the district. Road conditions on tribal lands may also present various safety risks to students and transportation staff. Some roads may have few or no sidewalks, shoulders, or guardrails, among other features, according to our observations and a tribal organization. For example, on the Pine Ridge Reservation, we rode a school bus route on a gravel road that led to a wooden bridge with no guardrails on either side. (See fig. 9.) The weight of the bus with students aboard nearly reached the wooden bridge's weight limit, according to a bus driver at the school. Further, school and district officials told us about challenges with vehicle maintenance due to road conditions, as described in further detail below. For example, a BIE school we visited in the Navajo Nation reported that about 43 percent of its bus miles were on earth roads. The school principal stated that additional vehicle maintenance—such as replacing tires, shocks, and other bus parts—resulted from the rough conditions on poorly maintained earth roads. Road conditions on tribal lands, including the surface type such as earth and gravel roads and the level of road maintenance, contribute to the wear and tear on vehicles, such as the school buses and SUVs that transport students daily. Although road conditions affect vehicle maintenance and thus overall transportation costs, BIE—which supplies federal funding for transportation to BIE schools—has not reviewed its formula in a decade to consider costs of vehicle maintenance or other possible factors. Poor road conditions can increase costs for vehicle maintenance and transportation overall.
Research suggests that rougher road surfaces, such as unpaved roads as compared to paved roads, tend to increase the maintenance and operational costs for vehicles, including buses, depending on the levels of road maintenance and the design of the road, among other things. According to information from a school transportation organization, road and weather conditions can have an impact on the frequency and cost of school bus maintenance. For example, in one school district we visited in the Navajo Nation, officials said that the school buses serving the part of the district with more earth roads accounted for the majority of the costs for vehicle maintenance, compared to the rest of the district, which had more total miles but fewer miles of earth roads. These increased transportation costs are consistent with our prior work on BIE school spending. Specifically, we noted that geographically dispersed locations and poor road conditions, including the related vehicle maintenance, contributed to higher average transportation costs per student at schools on tribal lands, such as BIE schools, than the national average. In contrast to schools on tribal lands, we noted that slightly more than half of public schools nationwide are located in cities or suburbs, and therefore may be unlikely to face similarly poor road conditions or long bus routes. Officials from 7 of the 10 schools and school districts we visited told us about or showed us examples of wear and tear on school vehicles resulting from poor road conditions. For example, officials at two BIE schools on the Pine Ridge Reservation and a public school district on the Rosebud Reservation described how vehicles experience prolonged vibration caused by riding over the grooved surfaces that tend to form on earth and gravel roads (known as "washboard" roads). Vehicles traveling these roads require more frequent maintenance than those traveling on paved roads, according to these officials.
Such safety-related maintenance work can include brake or oil changes, replacements of side mirrors or door and window parts, and repairs of windshields. (See fig. 10.) During rides on school buses or SUVs, we observed bumpy road conditions and the vehicle's vibrating when driving over rough earth and gravel surfaces. According to district officials at one public school district that we visited on the Rosebud Reservation, their buses generally travel on gravel roads and typically have a life expectancy of about a decade. In contrast, school buses that operate under normal conditions, generally on paved roads, have a life expectancy of about 12 to 15 years, according to a report by a school transportation organization. BIE's formula for determining amounts to allocate to BIE schools for transportation, which was formalized in 2005, does not distinguish between gravel and paved roads. The formula generally considers both gravel and paved roads as "improved" roads for funding purposes. The mileage on these "improved" roads plus an adjusted mileage (increased by 20 percent) on "unimproved" roads, which generally includes earth roads, determines a school's transportation funding amount, subject to the available appropriation. When we asked BIE officials about the rationale for treating gravel and paved roads similarly from a funding perspective, they responded that the gravel helps to make the roads more passable in adverse weather, compared to other roads that do not have gravel or other materials applied. However, because BIE's school transportation funding formula does not consider the likely higher maintenance costs for vehicles traveling on rough gravel roads, its allocation of resources may be misaligned with needs. Federal standards for internal control state that federal agencies should periodically review policies and related control activities for continued relevance and effectiveness in achieving objectives and addressing risks.
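The mileage weighting just described can be sketched as follows. This is a minimal illustration only: the function names and the pro-rata allocation step are assumptions, since the report does not describe how weighted mileage maps to dollar amounts beyond being subject to the available appropriation.

```python
# Sketch of BIE's mileage weighting as described above; not BIE's
# actual implementation.

def weighted_mileage(improved_miles, unimproved_miles):
    # Gravel and paved roads both count as "improved" at face value;
    # "unimproved" (generally earth) mileage is increased by 20 percent.
    return improved_miles + 1.2 * unimproved_miles

def allocate(appropriation, schools):
    # Hypothetical pro-rata step: split the available appropriation in
    # proportion to each school's weighted mileage.
    total = sum(weighted_mileage(*m) for m in schools.values())
    return {name: appropriation * weighted_mileage(*m) / total
            for name, m in schools.items()}

# Two schools with equal route mileage but different surfaces:
schools = {"A": (100, 0),    # 100 miles, all gravel or paved
           "B": (0, 100)}    # 100 miles, all earth roads
shares = allocate(220_000, schools)
# School A's 100 miles count as 100 weighted miles and B's as 120, so B
# receives 120/220 of the funds -- but note that within "improved"
# mileage, rough gravel and smooth pavement are treated identically.
```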
However, BIE has not reviewed its transportation funding formula since 2005, nor has it implemented a recommendation we made in 2003 pertaining to the formula. Further, BIE and BIA officials said that they have not communicated in recent years about BIE's transportation formula. For example, BIA transportation officials told us that they did not know that BIE was classifying roads using the terms "improved" and "unimproved," which BIA officials said they no longer use. Further, BIE has not formally worked with tribes on its transportation formula since 2005. According to federal internal control standards, agencies should communicate with external parties when needed in order to achieve objectives. As a result of not communicating with BIA or tribes, BIE has not benefited from the technical expertise and experiences of BIA or tribes and does not know whether transportation funding is distributed in a way that reflects disparate maintenance needs. BIE officials said they understood the importance of aligning funding with transportation costs and said that funding formulas used by states may provide a good model for BIE to consider. Road conditions on tribal lands pose challenges in connecting people to education, employment, health care, and other essential services. These challenges are especially magnified during adverse weather because of the remote location of some tribes and the prevalence of unpaved roads that are prone to weather-related damage. Useful, accurate, and consistent data in the NTTFI and DMR system can support road management and program oversight efforts. However, the purpose for which NTTFI data are used has changed: since the 2012 updates to the NTTFI, the data have not been used as a determinant in allocating TTP funding to tribes. In addition, guidance to tribes for entering data into NTTFI is dated, and limited monitoring of data that are entered has resulted in missing or conflicting entries that affect the accuracy and completeness of these data.
These conditions leave NTTFI data on road descriptions and conditions of limited usefulness for management and program oversight purposes and raise questions about the value of maintaining the NTTFI as it is currently constructed. Similarly, DMR may contain potentially outdated level of service data describing road conditions. In addition, DMR may contain inaccurate data on maintenance needs because BIA does not document how it develops maintenance cost estimates and tribes under-report maintenance performed. As a result, reports and budget submissions that rely on these data may not accurately reflect road conditions or maintenance needs and associated costs. This can inhibit the ability of Congress and BIA management to make informed decisions about RMP priorities and funding levels for the BIA road system. Many factors affect student attendance, among them the condition of roads. BIE-funded schools vary in the data they collect on the reasons for student absences. Expanded guidance to schools to collect such information would allow BIE to identify whether poor road conditions and adverse weather affect attendance and to better target interventions. Poor road conditions also affect vehicle maintenance costs, which may not be fully addressed in BIE's formula for funding student transportation. However, BIE has not recently reviewed its funding formula and does not know whether transportation funding is distributed in a way that reflects disparate maintenance needs. By working with BIA and tribes to revise the transportation-funding formula, BIE has the opportunity to consider how varying road conditions and other factors affect maintenance costs and to best align its resource allocation in relation to current needs. We are making eight recommendations to the Secretary of the Interior.
To help ensure that NTTFI is able to provide quality information to support management and program oversight efforts, we recommend that the Secretary of the Interior direct the Assistant Secretary-Indian Affairs to take the following three actions:

- coordinate with FHWA and tribal stakeholders to reexamine the need for road-description and condition data currently collected in the NTTFI and eliminate fields that do not serve an identified purpose;
- for fields determined to have continued relevance for management and program oversight, take steps to improve the quality of these data by clarifying guidance in the NTTFI coding guide that tribes use to collect data and by providing additional guidance on steps needed to ensure that data are consistently reported; and
- establish a process to monitor data to facilitate timely and targeted corrections to missing or erroneous data.

To improve the DMR, we recommend that the Secretary of the Interior direct the Assistant Secretary-Indian Affairs to take the following three actions:

- develop a means to document when the level of service for each road section was last evaluated;
- develop and maintain documentation supporting the unit costs of maintenance used to estimate maintenance needs; and
- develop a process, under existing authority, for more complete and accurate reporting of RMP funds expended for maintenance performed on BIA roads.

To improve data on reasons for student absences, we recommend that the Secretary of the Interior direct the Assistant Secretary-Indian Affairs to provide guidance to BIE schools to collect data on student absences related to road and weather conditions.
To better align resource allocation decisions with needs, we recommend that the Secretary of the Interior direct the Assistant Secretary-Indian Affairs to review the formula to fund transportation at BIE schools and determine, with BIA and tribal stakeholders, what adjustments, such as distinguishing between gravel and paved roads, are needed to better reflect transportation costs for schools. We provided a draft of this report to the Departments of the Interior, Transportation, and Education for review and comment. The Departments of Transportation and Education provided technical comments, which we incorporated in the report, as appropriate. Interior agreed with five of the eight recommendations in our report and described actions under way or planned to address them. Interior neither agreed nor disagreed with two of our recommendations and did not agree with one of our recommendations. Interior's comments are reproduced in appendix VI. Interior agreed with our three recommendations for ensuring that NTTFI can provide quality information to support management and program oversight efforts. Interior said that eliminating fields that do not serve an identified purpose will reduce the large amount of missing and erroneous data and noted that it will take steps to improve the quality of data by updating the NTTFI coding guide. Interior agreed with two of our recommendations for improving DMR and disagreed with one. Interior agreed with our recommendation to document when the level of service for each road section was last evaluated. Interior noted it would take this action for roads and bridges that have been reconstructed or improved and for roads that have been evaluated at a condition level of fair or better since the last reporting cycle. Interior said that it is taking this approach because it believes improvement to level of service can only occur with reconstruction and not solely from road maintenance.
This is a good first step toward addressing our recommendation. However, we continue to believe that Interior also needs to know the level of service and needs to periodically evaluate and document the evaluation date for all roads in order to effectively identify and prioritize road maintenance needs. Interior agreed with our recommendation to develop and maintain documentation supporting unit costs of maintenance used to estimate maintenance needs. Interior noted that it intends to take this action for tribes it directly serves, which we believe is a good first step toward addressing this recommendation. While we understand that tribes not directly served by BIA may not have to report documentation of maintenance costs, BIA should continue to obtain information from all tribes or other sources through other means that are available and document the unit-cost estimates for maintenance of all BIA roads. This will enable Interior to develop complete and reliable cost estimates for all BIA roads. Interior disagreed with our recommendation to improve the DMR by coordinating with tribal stakeholders to develop a process for complete and accurate reporting of Road Maintenance Program (RMP) funds expended for maintenance performed on BIA roads. Interior stated that this action cannot be reasonably accomplished as it conflicts with the intent of federal law and the minimum-reporting requirements when a tribal entity takes over the day-to-day actions and tasks of a program. In response to Interior's concerns, we have revised our recommendation to clarify that Interior should develop a reporting process that can be implemented with existing authority. We continue to believe that Interior can develop a reporting process for the RMP and could request tribes with self-determination contracts and self-governance compacts to follow such a process and could implement such a process for tribes that it serves directly.
By coordinating with tribes and encouraging their self-reporting of RMP funds expended for maintenance, as well as improving data collected on RMP activities that Interior administers, Interior can improve the reporting of maintenance performed on BIA roads and would be better positioned to provide Congress with more accurate and complete information for funding decisions. Interior neither agreed nor disagreed with our recommendations to provide guidance to BIE schools to collect data on student absences related to road and weather conditions and to review the formula to fund transportation at BIE schools and determine what adjustments are needed. Nevertheless, Interior stated that it will explore the addition of a field within its Native American Student Information System to capture whether an individual student's absence is due to inclement weather or road conditions. In addition, Interior noted that it does not have authority to make changes to the rule governing its formula to fund transportation without proper engagement in a consultation process with tribes, but said that it will take our recommendation under advisement. We continue to believe that these recommendations are important for BIE to implement. As previously noted, we recently placed Indian programs, including Indian education, serving Indian tribes and their members on our High-Risk Series. Given past and ongoing challenges, it is critical that BIE take action to enhance student access to school. By facilitating data collection on student absences related to roads and weather, BIE will be in a better position to understand the extent of the effect of road and weather conditions on student attendance and to consider strategies to address it. Additionally, consultation with tribes is fully consistent with our recommendation on the transportation funding formula.
By working with tribes and BIA on the transportation funding formula, BIE will gain critical knowledge and experience that will provide the information needed to update a formula that has not been adjusted in a decade. We are sending copies of this report to the appropriate congressional committees and the Secretaries of the Interior, Transportation, and Education. In addition, the report is available at no charge on GAO's website at http://www.gao.gov. If you or members of your staff have questions about this report, please contact me at (202) 512-2834 or shear@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. We addressed the following objectives: (1) To what extent do the National Tribal Transportation Facility Inventory (NTTFI) and Deferred Maintenance Reporting (DMR) system provide useful data about road conditions on tribal lands? (2) What challenges, if any, do stakeholders face in improving and maintaining roads on tribal lands? (3) What is known about the connection between road conditions on tribal lands and school attendance as well as other aspects of school transportation? To determine the extent to which the NTTFI and DMR systems provide useful data on road conditions on tribal lands, we reviewed federal regulations, strategic plans, performance reports, agency reports, industry practices, guidance, policies, and system documentation pertaining to the collection, coding, and use of both databases; conducted electronic data testing, such as for completeness, out-of-range values, and logical inconsistencies; attended a training workshop on NTTFI data entry; and interviewed Federal Highway Administration (FHWA), Bureau of Indian Affairs (BIA), and tribal officials about the systems.
We analyzed the NTTFI data as of September 2015—which were the most recent data available at the time of our review—and the quarterly DMR system inventory and road condition data for federal fiscal years 2009 through 2015. The most recent available data were the first quarter 2016 DMR data; however, we did not use them because we could not obtain full-year data and we wanted to ensure that the date of the most recent DMR data matched the most recent NTTFI data we were able to obtain. To assess the usability of the data, we reviewed the results of our electronic testing and interviewed BIA officials regarding system controls (such as data system design, monitoring, and edit checks) and other processes (such as cost estimating practices) in place to promote data accuracy, consistency, and completeness. We compared the information about each data system's design, monitoring, edit checks, and other processes to federal standards for internal control. We determined that these data were sufficiently reliable for some purposes, such as the road section's location, owner, and road surface type (existing roads only) for the NTTFI, but not others, as described in the report. NTTFI data are part of BIA's Road Inventory Field Database System (RIFDS)—a broader database of BIA-managed roads. To better understand the overall system and data entry requirements, we attended a RIFDS training workshop that focused on the process of entering and deleting NTTFI data. The NTTFI data include inventory, description, and condition data for all Tribal Transportation Program (TTP) eligible roads, bridges, and other transportation facilities in all 12 BIA Regions. Our review included only roads (including paths and trails).
We conducted electronic testing of the following NTTFI data fields: Average Daily Traffic Year, Existing Average Daily Traffic, and Surface Condition Index (SCI)/Wearing Surface Condition. To identify which road sections in the NTTFI are proposed and which are existing, we used two data fields—the Construction Need and Existing Surface Type fields. Road sections with either the Construction Need data field equal to "4" (proposed) or the Existing Surface Type data field equal to "0" (proposed) were classified as proposed road sections. Road sections with neither the Construction Need data field equal to "4" (proposed) nor the Existing Surface Type data field equal to "0" were classified as existing. If both of those data fields were blank, we categorized the road section as unknown. Our review did not include ensuring that the road sections in the inventory met the current statutory requirements for inclusion in the NTTFI, and we did not physically inspect roads to assess the accuracy of road section length or surface type entries. The DMR system includes inventory and condition data for all BIA roads in 10 of the 12 BIA Regions. There are no BIA roads in the Alaska and Eastern Oklahoma BIA regions, according to BIA officials, so these regions were not included in our assessment of DMR data. We also conducted electronic testing on several DMR data fields. To identify any challenges stakeholders face in improving and maintaining roads on tribal lands, we reviewed relevant federal laws such as the Intermodal Surface Transportation Efficiency Act of 1991; the Transportation Equity Act for the 21st Century; the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users; the Moving Ahead for Progress in the 21st Century Act (MAP-21); and the Fixing America's Surface Transportation Act.
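The proposed-versus-existing classification rule described above can be expressed as a short sketch. Representing blank fields as None and the codes as strings is an assumption for illustration:

```python
# Classification rule from the methodology: a road section is "proposed"
# if Construction Need == "4" or Existing Surface Type == "0"; "existing"
# if neither condition holds; and "unknown" if both fields are blank.
# The None-for-blank and string-code representation is an assumption.
def classify_road_section(construction_need, existing_surface_type):
    if construction_need is None and existing_surface_type is None:
        return "unknown"
    if construction_need == "4" or existing_surface_type == "0":
        return "proposed"
    return "existing"
```

Note that the blank-both check runs first, so a record missing both fields is never silently counted as existing.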
We reviewed both TTP and RMP program documentation, including reports to Congress on the programs' performance measures and program goals. We analyzed FHWA budget justification data and BIA RMP funding data for fiscal years 2009 through 2016 to understand the annual level of funding for each program during those years. We also reviewed tribal transportation documents for conducting road improvement and maintenance, such as selected tribes' program agreements (e.g., for TTP), lists of tribal transportation projects and priority lists, and various management plans. We interviewed FHWA and BIA headquarters and regional officials to gain a better understanding of the TTP and RMP and examine how the agencies coordinate with tribes to maintain and improve roads on tribal lands. We also interviewed federal, state, local, and tribal transportation officials on how they plan, prioritize, and coordinate road projects, address jurisdictional issues and National Environmental Policy Act requirements, and manage other factors affecting road maintenance and improvement on tribal lands. We conducted site visits to 10 selected schools and school districts and 7 transportation offices within the Navajo, Pine Ridge, and Rosebud Indian Reservations. The Navajo Nation is located in Arizona, New Mexico, and Utah and is within the BIA Navajo Region; the Oglala Sioux and Rosebud Sioux Tribes are located in South Dakota and are within the BIA Great Plains Region. While all 567 federally recognized tribes were considered for selection, these three sites were chosen because they reflect factors such as different BIA regions; considerable tribal and BIA road mileage; presence of Bureau of Indian Education (BIE) schools; and different program agreements. During our site visits, in addition to meeting with school, tribal, and transportation officials, we observed road conditions first-hand, including riding on school buses along their delivery routes.
As part of one site visit, we conducted a facilitated group discussion with 10 tribes from the BIA Great Plains and Rocky Mountain Regions, including two tribes we visited. Our site visits provide information and illustrative examples on a range of road condition and student attendance issues on tribal lands but are not generalizable to all tribal areas. We also attended four tribal transportation-related conferences through which we met with various tribal officials. We also met with tribal technical assistance experts and representatives from national Indian associations such as the National Congress of American Indians, Intertribal Transportation Association, National Indian Education Association, and the National Indian Justice Center. Last, we obtained geospatial data from the Navajo Nation on road ownership, road surface type, and road maintenance partnerships for two school districts within the Navajo Nation. After analyzing the geospatial data and partnership information, we developed maps and provided those maps to the Navajo Nation and Coconino County (Arizona) for them to review our analysis and validate that we developed accurate maps. To determine what is known about the connection between road conditions on tribal lands and school attendance as well as school transportation, we used a variety of methods. We reviewed relevant laws, regulations, and guidance from the Department of Education (Education) and Department of the Interior’s BIE. To provide national data about student attendance including for Indians, we analyzed two Education data sets—the Civil Rights Data Collection for school year 2013–14 and the National Assessment of Educational Progress for 2015. For both data sets, we used the most recently available data and assessed reliability by reviewing related documentation and interviewing knowledgeable agency officials, among other steps. Based on these efforts, we determined that these data were sufficiently reliable for our purposes. 
We also interviewed Education and BIE officials on these issues and conducted a literature review of national and international academic studies written about factors that affect student attendance. Specifically, we searched for (1) connections between road conditions on tribal lands and school attendance in the United States and/or other countries; (2) connections between road conditions and school attendance in the United States and/or other countries; (3) factors connected with school attendance in the United States for Indian students; or (4) factors connected with school attendance, in general, in the United States. We identified peer-reviewed studies published since 2000 through searches in research databases, including the Education Resources Information Center (ERIC), Scopus, and WorldCat. We also reviewed a list of studies related to school attendance compiled and provided by the National Library of Education of the Department of Education. Based on our database searches and the list from Education, we reviewed abstracts and introductions of studies, and determined that a total of 39 sources were at least minimally relevant. We determined that 10 of the 39 identified studies were both methodologically sufficient and topically relevant to the research objective. The 10 studies examined factors connected with school attendance and absenteeism, which were generally grouped into one or more of four categories: individual factors, family factors, school factors, and environment or community factors, where road conditions and related issues, such as adequate public transportation, generally fell within the environment or community factor category. We used a data collection instrument to consistently record information about key findings related to the connection between road condition and attendance from each relevant source. 
Lastly, as part of our site visits with the three Tribal Nations noted above, we selected 10 BIE schools and public school districts to visit on those reservations. We selected schools and districts with at least 50 enrolled students and similar student demographics—mostly Indian and mostly low-income—and with school bus routes of varying road surface types (i.e., paved, gravel, and earth). At these 10 schools and districts, we collected available information on attendance, school bus routes, and road conditions along school bus routes. We interviewed school and district officials, including superintendents, principals, transportation directors, business managers, and bus drivers, as well as tribal community officials. Topics of these interviews and related data requests addressed reasons for student absences, conditions of roads serving the schools, and changes to school bus routes due to road conditions, among others. We directly observed the road conditions on school bus routes by riding on or following behind school vehicles such as buses and sport utility vehicles. We compared this information with guidance from an education forum and federal standards for internal control. During our site visits, we took photographs and videos of road conditions on tribal lands, the equipment used to maintain and repair them, and the vehicles the schools use to transport students on those roads. We also attended a group discussion with tribal and education officials of the Oglala Sioux Tribe at the request of a tribal education organization. The interviews and literature results are not generalizable across all tribal nations; nonetheless, they do provide qualitative and quantitative evidence on the connection between road conditions on tribal lands and student attendance. Tribal and other entities we interviewed or collected information from for all objectives are listed in table 2.
We conducted this performance audit from December 2015 to May 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Bureau of Indian Affairs (BIA) uses the National Tribal Transportation Facility Inventory (NTTFI) to document existing and proposed roads on tribal lands that are eligible for Tribal Transportation Program (TTP) funding. Our analysis found that the NTTFI identifies over 147,000 existing road miles and over 13,000 proposed road miles, for a total of about 161,000 miles of existing and proposed roads on tribal lands in the 12 BIA regions. Of the existing roads identified in the NTTFI, about 40 percent are identified as paved (concrete or bituminous), about 25 percent as gravel, and about 35 percent as either earth or primitive-type roads like two-track or wagon trails. The majority of BIA- and tribal-owned roads are identified as earth or primitive, while state- and local-owned roads are mostly identified as paved or gravel. Figures 11 through 13 include the rollover information for road ownership, surface type, and school bus route maintenance responsibility in Page and Tuba City Unified School Districts on the Navajo Nation in Arizona (corresponds to interactive fig. 4). Below are selected examples of partnerships between federal, state, local, and tribal entities that primarily shared the costs to conduct road maintenance and improvements on tribal lands (see table 5).
According to officials at the 10 schools and districts that we visited, they or others have used several strategies to lessen the effect of road conditions on tribal lands; these strategies aim to improve students' ability to attend school (see table 6). In addition to the contact name above, Mike Armes (Assistant Director), Aisha Cabrer (Analyst-in-Charge), Les Locke, Matt Saradjian, Irina Carnevale, Georgeann Higgins, Jeff Malcolm, Sara Ann Moessbauer, Melinda Cordero, Malika Rice, Cheryl Peterson, Jeanette Soares, Geoffrey Hamilton, Elizabeth Sirois, Jerry Sandau, Mitchell B. Karpman, David Blanding, Jr., Justin Fisher, Leia Dickerson, Melissa Bodeau, Benjamin T. Licht, and Jon Melhus made key contributions to this report. Jacques Arsenault and Theresa Perkins made key contributions to the multimedia for this report.

Roads on tribal lands are of particular importance for connecting people to essential services, such as schools, because of the remote location of some tribes. These roads are often unpaved and may not be well maintained. The federal government funds two programs to improve and maintain roads on tribal lands. BIA maintains the NTTFI and DMR databases to support these programs. GAO was asked to review condition and school-access issues related to roads on tribal lands. This report examines: (1) the extent to which the NTTFI and DMR systems provide useful data on these roads; (2) any challenges to improving and maintaining these roads; and (3) what is known about the connection between road condition and school attendance as well as other aspects of school transportation. GAO reviewed documents and relevant literature; analyzed road-inventory and student-attendance data; and interviewed federal, state, local, and tribal transportation and education officials. GAO visited three selected tribes, based on road mileage and presence of BIE schools, among other factors.
The two databases maintained by the Department of the Interior's (Interior) Bureau of Indian Affairs (BIA) include some data fields useful for identifying tribal roads eligible for federal funding, but other fields may be too inaccurate to be useful for performance reporting and oversight. Specifically, the National Tribal Transportation Facility Inventory (NTTFI) provides useful data for identifying the roughly 161,000 miles of roads on tribal lands that are eligible for federal funding. However, the purpose for which these data are used has changed, and GAO found incomplete and inconsistent road-description and condition data, raising questions about the continued value of collecting these data. Similarly, BIA's Deferred Maintenance Reporting (DMR) system provides useful data on roughly 29,000 miles of BIA-owned roads eligible for federal funding, but GAO found inaccuracies in fields related to road-condition and road-maintenance needs. BIA does not document its road-maintenance cost estimates, and some tribes under-report performed maintenance. As a result, budget justification and performance reporting using these fields may not accurately reflect maintenance costs and needs. Federal standards for internal control suggest that agencies design information systems and use quality information to achieve objectives. Funding constraints, overlapping jurisdictions, and adverse weather make improving and maintaining roads on tribal lands challenging. However, intergovernmental partnerships have helped mitigate challenges in some cases. For example, in 2013, federal, state, and tribal agencies partnered on a $35 million project to pave a BIA earth road on the Navajo Nation when the main U.S. highway was closed due to a landslide. By partnering, the agencies completed the project in about 3 months and prior to the start of the school year, eliminating a 45-mile detour.
GAO's literature review and interviews with education officials indicate that road conditions can be a barrier to attendance, and Department of Education data show that Indian students have a higher chronic absence rate than other students (see fig.). At Interior, the Bureau of Indian Education's (BIE) schools generally do not collect data on transportation-related causes for absences, despite broader federal guidance that recommends doing so. BIE's attendance system lists causes, but transportation-related causes are currently not among them. Thus, BIE cannot quantify the effect of road conditions and target appropriate interventions. Rough road conditions in some areas also contribute to greater wear on school vehicles and associated higher maintenance costs.

School Bus on the Navajo Nation (Utah) and the National Rate of Students Chronically Absent, School Year 2013–14

GAO is making eight recommendations, including that BIA, in coordination with stakeholders, reexamine the need for NTTFI data and improve the quality of DMR data, and that BIE provide guidance to collect transportation-related absence data. Interior agreed with five of the recommendations, did not take a position on two, and disagreed with one. GAO continues to believe its recommendations are valid, as discussed further in this report.
In 1989, the National Commission on the Public Service found that the federal government experienced difficulties in recruiting and retaining a quality workforce. The commission recommended that a student loan forgiveness program be established, and the federal student loan repayment (SLR) program was proposed in response to that recommendation. The reasons underlying enactment of the federal SLR program continue today and include the impending retirements of large numbers of federal workers and the difficulty, at times, in attracting the right individuals to public service to help fill the gaps. Today's college graduates are entering the workforce with even more substantial education loans than in 1989, and studies indicate that educational debt prevents many graduates from choosing employers in which they are interested but that provide lower salaries. A 2002 Congressional Budget Office study concluded that federal employees in selected professional and administrative occupations tend to hold jobs that paid less than comparable jobs in the private sector. The report stated that the jobs that show the greatest pay disadvantage for federal workers make up an increasing share of federal employment. The provisions of the federal SLR program legislation authorize student loan repayments as recruitment or retention incentives for highly qualified federal job candidates or current employees. In retention situations, however, the SLR program may be used only when an employee is likely to leave for employment outside the federal government, not to another federal agency. As mentioned previously, agencies are authorized to provide an employee with a maximum repayment amount of $10,000 per calendar year up to a total of $60,000, with the payments included in gross income for both income and employment tax purposes.
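As a rough illustration of how the statutory caps interact, the sketch below computes the years needed to exhaust the lifetime cap at the maximum annual rate and the net annual benefit after taxes. The 25 percent combined tax rate is purely an assumed figure for illustration, not a rate from the program rules:

```python
# Statutory caps cited above: $10,000 per calendar year, $60,000 lifetime.
# Because repayments are included in gross income, the net benefit is
# reduced by income and employment taxes; the 25 percent combined rate
# used here is an assumption, not an actual withholding rate.
ANNUAL_CAP = 10_000
LIFETIME_CAP = 60_000
ASSUMED_TAX_RATE = 0.25

years_at_maximum = LIFETIME_CAP // ANNUAL_CAP             # 6 years to exhaust the lifetime cap
net_annual_benefit = ANNUAL_CAP * (1 - ASSUMED_TAX_RATE)  # $7,500 after the assumed tax
```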
An employee who separates voluntarily from the agency, who does not maintain an acceptable level of performance, or who violates any of the conditions of the service agreement becomes ineligible to continue to receive the benefit and must reimburse the agency for the total amount of any repayment benefits received. Under the law, student loans made, insured, or guaranteed under the Higher Education Act of 1965 or health education assistance loans made or insured under the Public Health Service Act are eligible for repayment. The SLR program legislation covers executive and select legislative branch agencies and government corporations such as the Pension Benefit Guaranty Corporation. Authorizing legislation also requires OPM to annually report to Congress on agency program use. According to OPM, the Department of Health and Human Services was the only agency to make a student loan repayment in fiscal year 2001. More agencies began using the program in fiscal year 2002, with 16 of them reporting to OPM that they had repaid some employees’ student loans. Participation increased again in fiscal year 2003 with 24 agencies distributing more than $9.18 million among a total of 2,077 recipients. During fiscal year 2004, 28 agencies provided 2,945 employees with a total of more than $16.42 million in student loan repayments. Compared to fiscal year 2003, this represents a 42 percent increase in the number of employees receiving the benefit and a 79 percent increase in the agencies’ overall financial investment in the program. As figure 1 shows, five agencies invested the most funding on student loan repayments in fiscal year 2004. These five agencies also made the greatest number of loan repayments. 
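The year-over-year growth figures cited above can be checked directly from the numbers in the text:

```python
# Fiscal year 2003: 2,077 recipients, $9.18 million in repayments.
# Fiscal year 2004: 2,945 recipients, $16.42 million in repayments.
recipients_fy2003, recipients_fy2004 = 2_077, 2_945
dollars_fy2003, dollars_fy2004 = 9.18e6, 16.42e6

recipient_growth = (recipients_fy2004 - recipients_fy2003) / recipients_fy2003
dollar_growth = (dollars_fy2004 - dollars_fy2003) / dollars_fy2003

print(round(recipient_growth * 100))  # 42 percent more recipients
print(round(dollar_growth * 100))     # 79 percent more spending
```

Both results match the 42 percent and 79 percent increases reported above.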
As with other human capital flexibilities, Congress has directed that agencies use the incentive strategically; therefore, some agencies may not need to make large numbers of student loan repayments to use the program effectively, or may not need to use the program at all to manage their workforces. GAO is one of the top five agencies accounting for most of the student loan repayments made in fiscal year 2004. GAO implemented its SLR program in fiscal year 2002 for employees who indicated interest and were willing to make a 3-year commitment to stay with the agency. The objective of the program is to facilitate the recruitment and retention of highly qualified employees by (1) providing an incentive for selected candidates to accept a GAO position that may otherwise be difficult to fill and (2) retaining highly competent employees with knowledge or skills critical to GAO. Currently, GAO's program is used mostly to retain top talent. The goal is to retain employees longer than 3 years, after which they are more likely to consider a longer-term career at GAO. The agency focuses on retaining recently hired staff because of the considerable time and effort expended on selecting these employees and the substantial amount of money required to train new hires who will replace retiring employees. The program's operating plan specifies groups or categories of employees who will be considered for student loan repayment for retention purposes based on job series. Analysts and financial auditors, for example, generally received the same amount of loan repayment, $5,000 in fiscal year 2004. Employees in often hard-to-fill job series—such as economists and attorneys—are considered for GAO's maximum loan repayment, $6,000 in fiscal year 2004, on a case-by-case basis. To help measure the effectiveness of its program, GAO distributed a survey to program recipients in 2004.
More than 50 percent of respondents confirmed that the program had some influence over their decision to stay with GAO. Pending legislation in the House of Representatives and the Senate would exclude student loan repayments from gross income for federal tax purposes. The Generating Opportunity by Forgiving Educational Debt for Service bill would, in effect, increase the amount of the student loan repayment benefit by relieving federal employees of the obligation to pay income tax on the repayments their federal agencies have provided them. Those in favor of eliminating the tax argue that, with the current program, the federal government is taxing its own ability to recruit and retain employees. They also note that loan repayments made by educational institutions or nonprofit organizations to encourage public service are not counted as taxable income for the recipient. Legislation was also introduced but not passed in the last Congress to authorize a separate SLR program for federal employees in national security positions. The Homeland Security Federal Workforce Act would grant authority to the heads of selected agencies to establish a pilot SLR program to recruit or retain highly qualified professional personnel employed by their agencies in national security positions. This pilot program, which would remain in effect for 8 years, would be limited to agencies with national security responsibilities, namely national security positions in the Departments of Defense, Energy, Homeland Security, Justice, State, and the Treasury; the Central Intelligence Agency; and the National Security Agency. The proposed SLR program is similar to the existing one except that the legislation would authorize the appropriation of funding specifically for the loan repayments. However, actual funding of the loan repayments would remain at the discretion of Congress via annual appropriations acts.
The legislation also requires that, no later than 4 years after its enactment, the OPM Director report to the appropriate congressional committees on the status of the programs established and the success of such programs in recruiting and retaining employees for national security positions. DOS, DOJ, and SEC used the SLR program more extensively and primarily as a broad-based tool to retain more recently hired employees in specific positions that require knowledge or skills critical to the agency. GSA, DOE, and DOT, on the other hand, used it on a case-by-case basis as an incentive to either recruit selected highly qualified candidates or retain employees with skills critical to the agency. Commerce recently started to offer repayments, also on a case-by-case basis, for recruitment and retention. SSA, EEOC, and SBA reported being satisfied with their other recruitment and retention tools and have not needed to use the program. DOS heads the list of federal agencies in the number of employees participating in, and funds expended on, student loan repayments. The department began using the program in fiscal year 2002 and reported making loan repayments for 734 employees in fiscal year 2004. Repayments totaled approximately $3.6 million. Officials from DOS noted that many of their recently hired employees have student loan debts. For example, most of the Presidential Management Fellows entering the department have eligible student debt, which automatically qualifies them for the benefit. DOS uses its program primarily to encourage current employees to accept foreign service hardship posts and to retain employees in civil service positions that are difficult to fill.
The department has determined that offering the program to candidates who accept or remain in positions at the most difficult posts, such as those experiencing hazardous political or health-related conditions, helps attract candidates to seek these assignments or encourages employees to remain in them. Employees, or potential employees, in certain historically difficult-to-fill civil service occupational series may also qualify for the program. These positions range from historian positions requiring a Ph.D. to passport and visa examiner positions throughout the country. While DOS primarily uses the program for retention, its recruiters also report that the SLR program is of great interest on college campuses across the country, thereby indirectly helping recruiting. The department noted that student loan repayments are only one of several incentives and benefits available to those considering a State Department career, but that the repayments are an important part of its overall benefits package. While DOJ made only one student loan repayment in fiscal year 2002, it began using the program extensively in fiscal year 2003. In fiscal year 2004, the department reported making 331 repayments totaling approximately $1.9 million, with the majority of payments made on behalf of attorneys, special agents, and intelligence analysts. DOJ’s use of the SLR program is unique in that there is a centrally administered departmentwide program for attorneys, as well as unit-run programs for a variety of other positions. According to the attorney SLR program officials, DOJ uses the program mostly to retain experienced attorneys. About 10 percent of the loan repayments are used for recruitment, including qualifying new attorneys entering the department under the Honors Program. An attorney SLR program manager reported that DOJ advertises the program heavily to law students because it perceives the program to be an effective indirect recruiting tool.
In terms of DOJ’s unit-run programs, 12 of its 16 components reported using the SLR program in fiscal year 2004, according to a DOJ human capital official. For example, the Bureau of Prisons found the program helped to retain highly skilled and experienced employees who would otherwise consider seeking employment in the private sector, as well as attract candidates who normally would not be interested in working with the agency because of the salary level. SEC, which began using the SLR program in the last half of fiscal year 2003, reported making 384 student loan repayments totaling approximately $3.3 million in fiscal year 2004. Most of these repayments were made on behalf of attorneys. According to SEC officials, the agency generally does not have trouble attracting job candidates, but it does have a relatively high attrition rate. An official remarked that the agency has a highly skilled workforce composed largely of securities attorneys, accountants, and examiners, many of whom are highly sought after by the private sector, and it historically has been a challenge for SEC to retain them. SEC, therefore, uses the program only for retention. SEC officials said that thus far they have had only a few employees leave before completing the 3-year service agreement. In addition, they reported that a large percentage of employees are reapplying for benefits, indicating their willingness to stay with the agency long enough to reduce or pay off their student loan debt. Although the program is used for retention, SEC advertises in its recruitment efforts that the benefit is available after 1 year of service, making it an indirect recruiting incentive. Officials noted that SEC also uses other recruitment and retention incentives, but uses those incentives on a strategic basis to recruit and retain highly qualified employees with qualifications critical to SEC’s mission.
GSA units generally determine the use of incentive pay, including student loan repayments, on a case-by-case basis. GSA guidance on the program states that student loan repayments are not an entitlement, but rather a recruitment and retention incentive that a manager may use when otherwise unable to recruit or retain a highly qualified employee with qualifications critical to GSA missions. An official noted that SLR authorizations are based on the particular recruitment or retention situation, whether the position is a critical need or difficult to fill, and the ability of the unit to fund the repayments. In fiscal year 2004, GSA repaid 17 loans at a total cost of approximately $93,000. The agency reported that it uses the SLR program for both recruitment and retention, although most of the repayments in fiscal year 2004 were for recruitment. GSA plans to increase its use of the program only if the number of critical vacancies increases and the number of available candidates decreases. DOE uses the SLR program on a case-by-case basis determined by factors such as labor market conditions that may affect recruiting efforts. Each case must be justified by the recommending official, concurred with by the respective financial and human capital staffs, and approved by a top manager authorized to grant the incentive. DOE reported spending approximately $87,000 on 36 repayments in fiscal year 2004 and using the program almost equally for recruitment and retention. Student loan repayments were offered to employees in a variety of occupations, such as engineering and financial analysis. According to a DOE official, program use is expected to increase in incremental amounts annually for recruiting entry-level engineers and scientists, but not for retention purposes. Because DOE views the SLR program as more expensive than other incentives, managers are asked to be selective about their SLR offers.
DOE has developed recruitment and retention worksheets to help managers determine the cost of a loan repayment compared to using other incentives, so they can evaluate the most strategic use of resources. DOT began using the program in fiscal year 2004 by making six loan repayments totaling approximately $53,000. Three of these were made on behalf of Presidential Management Fellows. The agency made the repayments for both recruitment and retention purposes. DOT officials speculated that the program will play a role in future hiring, as it appears to be a more valuable tool for entry-level employees who are more likely to have student loans. Agency officials also said that since DOT views the program as an expensive benefit and because the agency is now operating with a lower budget, they will use the program sparingly. Since repayment will be a targeted benefit, a human capital official noted that it probably will not be featured in the standard DOT recruitment materials or brochures. Commerce is planning to use the program to recruit and retain specific individuals in mission-critical occupations, such as statisticians. It recently reported offering its first student loan repayment to an applicant who turned it down because of the length of the service agreement. Commerce intends to use the SLR program for both retention and recruitment, depending on the needs of its units. For example, the National Institute of Standards and Technology, which needs technical staff, will most likely use it for recruitment, while the Office of General Counsel, with a high turnover rate for attorneys, will likely use it for retention. According to SSA officials, the agency has not needed the SLR program to recruit or retain staff. The agency meets its hiring needs through a national recruiting program and generally does not focus its recruitment efforts on individuals with highly technical or unique qualifications. 
Therefore, SSA is able to meet its hiring targets without extensive use of special incentives. When needed, officials said the agency has successfully used recruitment bonuses, retention allowances, relocation bonuses, and above-minimum salaries to recruit and retain highly qualified individuals for hard-to-fill positions. The officials believed that these other incentives provided recipients with greater flexibility to use their bonuses or allowances to meet their own needs, whether to repay student loans or for other reasons. The officials acknowledged, however, that if SSA cannot continue to successfully recruit or retain employees through its national recruiting program or the use of other flexibilities, they would reconsider their decision not to use the SLR program. According to agency officials, EEOC does not use the SLR program because of fiscal constraints and because the organization has qualities that attract and retain employees without the program. In addition, the agency has not used other recruitment and retention incentives recently. An EEOC human capital official noted that the agency has lost 350 employees in the last 3 and a half years and will likely lose more employees in the near future. Rather than having to use monetary recruitment or retention incentives, agency officials remarked that individuals are drawn to work at EEOC primarily because of the mission it pursues. On the basis of anecdotal evidence, they also believe that employees stay with EEOC to a large degree because of the positive work-life balance the agency offers them. According to SBA officials, the agency is doing very limited hiring and rarely needs to offer recruitment and retention incentives. SBA officials explained that the agency recruited 156 employees during fiscal year 2004 and was able to successfully recruit the desired talent without using the incentive. 
The officials further stated they were not aware of candidates not accepting a position at SBA because the agency lacked an SLR program. As SBA becomes more targeted in its recruitment activities, agency officials remarked that they will consider using the SLR program along with other recruitment flexibilities. To address needs unique to their organizations, agencies customized aspects of their SLR programs. Table 1 illustrates some implementation differences among our selected agencies. Agencies centralized SLR program operations at the department level to coordinate departmentwide needs or decentralized operations to their individual units to offer them needed flexibility. The agencies operating their programs centrally used the SLR program primarily as a broad-based retention tool, while the agencies running decentralized programs used student loan repayments on a case-by-case basis. DOS, for example, has a centrally operated and funded SLR program that serves the specific recruitment and retention needs of all units within the department, such as those of the Bureau of Consular Affairs. In contrast, DOJ runs both centralized and decentralized programs. For example, the DOJ attorney SLR program is centrally administered, although as of fiscal year 2004, the recipient’s unit had to bear the costs of the repayments. Starting in fiscal year 2005, almost 30 percent of the program costs are being paid centrally, with the balance coming from the individual DOJ units that participate. DOJ units offering repayments to employees in a wide variety of positions operate and fund these programs. GSA, DOE, and DOT have decentralized programs. Managers in individual units nominate specific candidates or employees for participation in the program, and the units provide the funding for the loan repayments. DOE, for example, allows its units to implement their own programs, primarily because they have diverse needs, including different geographic labor markets.
The National Nuclear Security Administration, an agency within DOE, issues its own human capital program requirements and guidelines, consistent with overall departmental human capital policy, and administers its own SLR program at its various sites and locations across the country. Agencies also varied the amount of the loan repayment, depending on the results they needed to achieve. For example, to make the benefit meaningful to its employees, SEC has repaid the maximum amount allowable of $10,000, unless the loan balance is less than that amount. DOJ, for its attorney SLR program, offers a maximum amount of $6,000 annually to attorneys with salaries below $74,000 to attract a broad base of individuals who otherwise may seek employment in the private sector. For attorneys with higher salaries, DOJ matches the recipient’s own annual repayment amount up to a maximum of $6,000. A DOS official said the department’s goal is to offer meaningful loan repayments to the largest number of individuals possible, so DOS has repaid the same amount for all eligible employees, which for the past 3 years has been $4,700. If a recipient’s outstanding loan balance is less, DOS repays the lower amount. Agencies varied the length of time employees were required to wait before becoming eligible for the SLR program depending on results they were trying to achieve. For example, the DOJ attorney SLR program has no longevity requirements. Attorneys may apply during the first application period following their employment. Officials are concerned that they could miss opportunities to hire highly qualified law students with large student loan debts, who may be unable to accept DOJ’s entry-level positions because of economic concerns. Officials said the application process is self-nominating, and an attorney must have a qualifying student loan debt base of at least $10,000 to be eligible for the program. 
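The repayment-amount rules described above can be sketched as follows. This is an illustrative sketch based on the fiscal year figures cited, not agency code; capping the DOJ amount at the outstanding loan balance is our assumption, since a repayment cannot usefully exceed what is owed.

```python
# Illustrative sketch (not agency code) of the repayment-amount rules
# described in the text. Dollar figures are the fiscal year amounts cited;
# capping the DOJ amount at the loan balance is our assumption.
DOJ_ANNUAL_CAP = 6_000         # DOJ attorney program maximum per year
DOJ_SALARY_THRESHOLD = 74_000  # below this, the full cap may be offered
DOS_FLAT_AMOUNT = 4_700        # DOS flat amount for the past 3 years

def doj_attorney_repayment(salary: float, own_annual_payments: float,
                           loan_balance: float) -> float:
    """Annual DOJ attorney repayment under the rule as described."""
    if salary < DOJ_SALARY_THRESHOLD:
        amount = DOJ_ANNUAL_CAP
    else:
        # Higher-paid attorneys get a match of their own payments, capped.
        amount = min(own_annual_payments, DOJ_ANNUAL_CAP)
    return min(amount, loan_balance)

def dos_repayment(loan_balance: float) -> float:
    """DOS repays the same flat amount, or the loan balance if lower."""
    return min(DOS_FLAT_AMOUNT, loan_balance)
```

Under these rules, for example, an attorney earning $90,000 who pays $4,000 of her own loans in a year would receive a $4,000 match, while a colleague earning $70,000 could receive the full $6,000.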
SEC officials said the agency has few problems attracting employees but historically has had challenges retaining them, often because SEC experience makes employees very marketable in the private sector. The agency has tailored its program participation criteria to address this need by requiring employees to complete at least 1 year of employment with SEC before they are eligible for the program. With the 3-year service agreement, SEC then has the potential to retain employees for at least 4 years, which also helps to ensure a greater return from recruitment and training costs. Officials from agencies using the program agreed that certain changes, such as more automation of the application and loan repayment processes and consolidation of some other program activities, would help to improve the program’s administration. Several officials also suggested ways they believed would increase the program’s effectiveness by making it more attractive to candidates and employees, such as reducing the length of the service agreement. As for assessing the results of their programs, agencies did not yet have processes in place to gauge long-term effects on their recruitment and retention efforts. Officials from agencies with SLR programs did note several indicators they plan to use, and suggested that anecdotal evidence indicates employees value the SLR program. They stated that since the program is relatively new, they did not yet have enough data to track long-range statistical trends that would help them measure program results. Nevertheless, it will be important for these agencies to establish, up front, goals for their programs, a recruitment and retention baseline from which they can monitor changes that result from the program, and the data they will collect to measure these changes in order to assess long-term effects. While the agencies using the program believe it is a useful tool, officials characterized it as cumbersome to administer. 
Human capital offices generally administer the program and are performing some tasks and activities that are uncharacteristic of their function and unique to the program. Program administrators, for example, must interact with a large number of lending institutions, verify loans, and, at times, act as collection agencies. An official from DOS remarked that, aside from the Department of Education, which administers student loans, there are few federal workers who have knowledge of the student loan business. Therefore, agency staff must develop expertise and establish and modify procedures to operate the program. The official noted that for the 734 DOS employees who received the loan repayments in fiscal year 2004, the department made almost 800 individual transactions to 55 different lending institutions. The agencies were either not tracking administrative costs associated with operating the program or were just starting to track them. The agency officials said they were absorbing the time and costs associated with the program into their regular operations. Agency officials reported that processing loan repayments involves many steps that can include time-consuming complications. SEC officials, for example, said their entire administrative process, prior to actual payment distribution to the various lenders, can take more than 3 months. This process involves steps such as verifying that the employee has a loan eligible for repayment, verifying the amount of the outstanding loan balance, and eventually, ensuring that the loan repayment is applied to the correct outstanding loan. SEC officials also noted that its payroll provider cannot make electronic transfers of loan repayments, requiring them to issue paper checks that are burdensome and sometimes applied to the wrong account. 
Furthermore, the Department of Education, one of the largest student loan lenders through its Direct Loan Program, is unable to accept electronic transfers of funds from agencies for loan repayments. According to an Education official, the department is looking at ways to collect direct loan repayments electronically. Other complications included processing repayments for employees who have loans with multiple lenders, distinguishing private loans that are not eligible for the program from federally guaranteed student loans, and having recipients supply incorrect addresses for their lenders. In addition, officials said that administrative problems with the various payroll providers, which process the loan payments, were a concern. A DOT official, for example, said they were using a payroll system that was being phased out through OPM’s e-payroll initiative. The official remarked that it was costly for DOT to incorporate the loan repayments into this outdated payroll system, causing the agency to experience delays in implementing the program. An official at DOE said its payroll provider had been unable to provide biweekly loan repayment options until recently. Officials from most of the agencies using the program suggested ways to help administer the program more efficiently, primarily through more automation and consolidation of activities. SEC human capital officials said that automation of SLR program activities, such as the ability to make electronic fund transfers for all repayments, would make the process far easier. They also suggested implementing an electronic signature to help expedite the SLR application process and recommended that some of the responsibility for making the program operate more smoothly be shifted to SLR recipients. For example, SEC requires recipients to provide verification to the human capital office that their loan repayments were applied correctly.
In addition, SEC officials estimated that about 1 month of their processing time could possibly be eliminated if each of the various lenders had one designated representative to work with federal agencies on resolving loan repayment problems. A program manager at DOS suggested creating a central database of student loans and student loan lenders to assist with processing steps such as verifying the correct names and mailing addresses. A human capital official at DOE said OPM should require payroll service providers to use processes for student loan repayments similar to those used for other incentives, such as recruitment bonuses. An official at DOT indicated that alternative approaches could be explored to increase the cost effectiveness of administrative functions for agencies that use the program extensively. For example, one approach may be to create shared services, similar to the approach used to provide payroll services, wherein a small number of agencies service multiple agencies. Finally, agency officials suggested that more sharing of best practices with other federal agencies experiencing similar challenges would help with implementing the SLR program. DOS and DOJ officials said they consulted with each other about whether to centralize or decentralize their programs and shared program document templates. This type of collaboration could help agencies beginning to implement the program avoid some of the growing pains experienced by the current user agencies. DOJ’s attorney SLR program, in particular, found a number of ways to increase its program’s efficiency. For example, DOJ maintains a Web page that is updated regularly to make the SLR process transparent to applicants and inform all eligible attorneys about the program. The department credited the Web page with reducing the need to respond to questions about the program. 
In addition, DOJ standardized the application process for the attorney SLR benefit by posting request, validation, and review forms on its Web site in form-fillable versions. The department also credited a process that requires applicants to submit a valid, signed service agreement at the time of application for expediting the repayment process. The presigned service agreement includes a release authorizing loan holders to disclose financial information to the department for loan validation and, at the same time, eliminates the need for the department to secure service agreements after selections are made. DOJ’s attorney SLR program also reported learning it could reduce administrative burdens by validating loan information only for the attorneys actually selected to receive SLR benefits. While agency officials could suggest ways to improve the program’s administration, individual agencies may find it difficult to design some of the program improvements for themselves, and some of these changes could be more beneficial when implemented governmentwide. For example, it may be more effective to automate portions of the repayment process for all user agencies, rather than have each agency individually pursue this. Likewise, the President’s Management Agenda calls for the federal government to “support projects that offer performance gains that transcend traditional agency boundaries.” Sharing services across agencies for specific SLR administrative activities may present an opportunity for program managers to purchase human capital services from specialized providers, as they currently do for payroll services, thereby reducing costs through economies of scale and freeing their staff to focus on strategic rather than administrative activities.
In prior work, we identified similar opportunities for agencies to use alternative service delivery (ASD) for a range of human capital activities, and recommended that OPM work with the CHCO Council to promote the innovative use of ASD. OPM, in written comments, agreed with this role. Agency officials identified several program characteristics they believe impede the program’s effectiveness. Likewise, OPM’s fiscal year 2004 report to Congress on the SLR program noted common impediments. Of the barriers agencies reported to both GAO and OPM, the most frequently cited were difficulty in funding the program, the tax liability associated with the repayments, and the length of the required service agreement. A DOE human capital official, for example, remarked that factors such as detailing employees to Iraq have created more competing budget needs within units; in one case, a unit wanted to use the incentive but determined it could not commit to SLR payments because of the cost of overtime premiums for detailed employees. In addition, on the basis of comments they have received from program recipients and candidates who decided not to participate in the program, officials from several of the agencies we reviewed remarked that eliminating the tax liability and reducing or prorating the service agreement could make the program more attractive. For example, officials from four agencies felt that eliminating the tax liability on loan repayments would make the program more attractive to candidates and recipients and, therefore, more effective. Currently, after withholding income and payroll taxes, the actual repayment amount applied to the employee’s loan is only about 62 percent of the total payment. According to officials, this diminishes the program’s value and makes it a less attractive incentive. Additionally, because the repayment is taxable, an official noted, the agency can never completely pay off a recipient’s loan.
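The roughly 62 percent net figure is consistent with typical withholding on supplemental pay. A minimal sketch follows; the specific rates used are illustrative assumptions for the calculation, not figures reported by the agencies.

```python
# Reproducing the "about 62 percent" net figure under illustrative
# withholding assumptions. The rates below are assumptions for this
# sketch (flat federal supplemental withholding, payroll taxes, and a
# notional state income tax), not figures from the report.
FEDERAL_SUPPLEMENTAL = 0.25   # flat federal withholding on supplemental pay
SOCIAL_SECURITY = 0.062
MEDICARE = 0.0145
STATE_INCOME = 0.05           # notional state withholding

def net_repayment(gross: float) -> float:
    """Amount actually applied to the employee's loan after withholding."""
    withheld = gross * (FEDERAL_SUPPLEMENTAL + SOCIAL_SECURITY
                        + MEDICARE + STATE_INCOME)
    return gross - withheld

gross = 10_000.0
net = net_repayment(gross)
print(f"net applied to loan: ${net:,.2f} ({net / gross:.0%} of gross)")
```

Under these assumed rates, a $10,000 gross repayment nets about $6,235 applied to the loan, which matches the approximately 62 percent cited.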
A DOS official also remarked that many of the questions they answer about the program concern the tax liability issue. As mentioned previously, legislation is pending in Congress that would exclude loan repayments from gross income for federal tax purposes. In testimony on a previous draft of this legislation, we stated that the legislation had merit, would help to further leverage existing SLR program dollars, and would help agencies in their efforts to attract and retain top talent. The loss of revenue from this change, however, would need to be balanced against other pressing federal budget needs. Agency officials had varying views about the service agreements. For example, DOE officials suggested that the service period should be comparable to that of other recruitment and retention incentives. OPM regulations state that recruitment bonuses, for instance, require a minimum service period of 6 months. The DOE officials suggested that when the SLR benefit is used for recruitment, a minimum 6-month commitment would also be appropriate. Along the same lines, an SEC official remarked that employees felt the repayment should be prorated if they left the agency before their 3-year commitment was fulfilled. On the other hand, a DOJ official not in favor of reducing the length of the service agreement thought the 3-year agreement retained employees for an appropriate time and that there was enough flexibility in waiving the agreement to avoid situations that might be unfair to some recipients. Agencies using the program had not yet established processes to measure the extent to which the SLR program was helping them to meet their recruitment and retention needs. Agencies need such measurements to help them determine if the program is worth the investment compared to other available human capital incentives, such as recruitment and retention bonuses. Agencies are tracking the extent to which employees comply with, or do not complete, the terms of their service agreements.
Several officials remarked that almost all employees are completing their terms of service, indicating the program is helping retention, at least in the short term. Agency officials did report that, based on anecdotal evidence, they believe the program helps to make their agency more attractive to potential job candidates and helps them retain high-quality employees. A GSA official said that, although it has not surveyed employees formally, informal feedback from them about the program is positive, and GSA managers using the program report being able to fill their positions with candidates who have the desired qualifications. An SEC official noted that the program appears to be attractive to prospective hires because the agency receives numerous inquiries about how the program works. DOS recruiters also report that one of the questions frequently asked by those considering federal service is the level of the department’s assistance in paying off student loans. When asked about ways to measure the program’s long-term effects, officials from several agencies suggested tracking the attrition rates of program recipients as one measure. However, the officials noted that to do so, they would need to monitor attrition rates for at least 3 years, since recipients sign a 3-year service agreement and relatively few leave during this time. Monitoring the number of employees who resign after the agency repaid their loans could indicate whether recipients were working for the agency just long enough to have their student loans repaid. Fiscal year 2006 will be the first year in which a substantial cohort of federal employees will have completed the minimum 3-year service requirement. In addition, a DOJ official believed that reviewing the attrition rates and career paths of its Honors Attorneys participating in the program would be helpful, since these are generally highly sought-after individuals.
Thus, if DOJ’s attrition rates decline, this could indicate that the SLR program is having a positive impact. DOJ is also adding questions to its honors program application about awareness of the attorney SLR program and whether it influenced the applicant’s decision to apply. Recognizing that agencies in some cases will need multiyear data to measure the SLR program’s long-term effects, it is nevertheless important that agencies using the program decide on and put in place program goals and methods to track indicators of success when they implement the program. This will help them to establish an initial data baseline they can use to track changes as a result of the program, determine what data they should collect over time, and begin to collect that data. In addition, agencies would not have to wait to implement other options for monitoring program effects. For example, several agency officials noted that they will use employee survey data or responses from exit interviews to gauge how much impact the SLR incentive had on employees’ decisions to join or stay with the agency. Agencies could conduct such surveys and collect these data now or when initiating their programs, and periodically over time, as an indicator of program results. We recognize that gauging the program’s direct effect on recruitment and retention trends may be difficult because student loan repayments are not likely to be the only major factor in an employee’s decision to join or stay with an agency, although the incentive may help to tip the scale in the agency’s favor. Other factors, such as labor market conditions, could also affect these decisions. In prior work, we have described similar difficulties federal managers face in developing useful, outcome-oriented measures of performance and proposed that agencies collaborate more to develop strategies to identify performance indicators and measure contributions to specific outcomes. 
We also recognize that OPM and the CHCO Council could help to facilitate this coordination. As the President’s agent and adviser for human capital activities, OPM’s overall goal is to aid federal agencies in adopting human resources management systems that improve their ability to build successful, high-performing organizations. Likewise, legislation creating the CHCO Council highlighted the importance of agencies sharing information and coordinating their human capital activities, and we have reported that the CHCO Council could help facilitate such coordination. OPM has taken a number of steps to provide agencies with information and guidance on the SLR program. For example, OPM posts informational materials on its Web site, including a fact sheet, applicable laws and regulations, questions and answers, sample agency plans, and OPM’s annual reports to Congress about the SLR program. In its fiscal year 2004 report to Congress, OPM reported more extensively on agencies’ experiences with implementing the program than it had in previous years. For instance, the report included information on the barriers agencies faced in implementing the program and whether agencies were using specific metrics for measuring program effectiveness. In September 2004, OPM held a focus group to explore whether the agency is a good source of program information and what types of problems agencies are typically encountering with the program. According to OPM, the focus group included representatives from several agencies using the SLR program. These representatives shared successes with the SLR program, obstacles they faced in using it, and suggestions for program improvements. Agency officials’ comments about OPM’s assistance were mixed. DOS officials said they consulted with OPM in the early stages of their implementation process, but DOJ officials reported they had not requested assistance from OPM.
SEC officials noted that while their contact with OPM had been limited, they would have liked more concrete answers to their detailed questions involving program implementation. DOT officials see themselves as having primarily a reporting relationship with OPM. A DOE official commented that OPM has been a strong advocate of the SLR program, providing the guidance the agency needed to implement it. Nevertheless, a number of these officials suggested that more coordination across the agencies using the program would be helpful, and OPM may be in the best position to do this. As we previously highlighted, agency officials pointed to the need to partner with other agencies to find more efficient ways to implement their SLR programs. They said some improvements would involve sharing information more readily, such as ways to tailor the program to fit their particular needs, as well as easing administrative burdens associated with the program. Given the range and cumbersome nature of the activities involved in operating the program, officials said they could use help in identifying improvements to the program. For example, OPM, working with the CHCO Council, could sponsor additional forums, an interagency working group, or even training sessions to encourage information sharing. One topic for such forums and collaboration could be developing measures of program effectiveness. OPM itself, in its most recent report to Congress on the SLR program, stated that an agency challenge has been to determine appropriate measures. By helping agencies address this challenge, OPM could help to determine if there is a common subset of measures or indicators that agencies could track and report to OPM to assess the SLR program’s impact governmentwide.
Federal agencies have a large degree of discretion in structuring SLR programs to meet their unique needs, and the SLR program shows promise as an effective tool for attracting and retaining the talent needed to sustain the federal workforce. The federal government faces potential workforce problems now and in the years ahead, including the fact that its employees are retiring in greater numbers. Therefore, recruiting and retaining a new wave of talented individuals who view the federal government as an employer of choice is imperative. To meet this human capital challenge, agencies will need to be able to identify and select the recruitment and retention incentives that are most appropriate and effective for achieving this goal. In addition, to make the most effective use of monetary incentives such as the SLR program, streamlined and efficient administrative processes for implementing such programs need to be in place, and decision makers need concrete evidence that such programs are achieving agency and overall federal workforce goals. OPM, working with the CHCO Council, may be in the best position to help agencies work together to identify potential SLR program changes and then determine the most cost-effective ways to implement them. If the program continues to grow, making it easier to administer will help ensure agencies make maximum use of available funds to recruit and retain key talent, so critical in a time of fiscal constraints. Likewise, OPM and the CHCO Council could build on efforts to date and continue to facilitate coordination across agencies, in particular helping them to determine what data to collect and assess as indicators of the program’s results. In addition, OPM may be able to better report to Congress on the impact of the SLR program governmentwide if it works with the agencies to determine if there is a subset of common indicators all agencies could annually track and report to OPM.
Consistent with OPM’s ongoing efforts in this regard, we recommend that the Director of OPM, in conjunction with the CHCO Council, take the following actions to help improve the SLR program’s efficiency and ease of administration, and to assess results:

Working with the agencies, determine where program streamlining and consolidation of agencies’ administrative tasks are most feasible and appropriate, especially where improvements could be implemented governmentwide, and design the most cost-effective ways to implement them. Examples of program improvements that could provide valuable help to agencies and ease the administrative burden include creating a central database of student loan lender information and establishing a shared service center arrangement for student loan repayments.

Continue and expand on its efforts to provide agencies assistance and to help facilitate coordination and sharing of leading practices by, for example, conducting additional forums, sponsoring training sessions, or using other methods.

Help agencies determine ways in which they can monitor long-term program effects on their recruitment and retention needs, such as determining data to collect and use as indicators of effects. This, in turn, could provide a consistent set of governmentwide indicators that would allow OPM to assess, and report to Congress on, the program’s overall results.

In addition, with respect to the selected agencies using the SLR program most extensively, we recommend the following actions:

The Secretary of State: Build on current efforts to measure the impact of DOS’s SLR program by determining now what indicators DOS will use to track program success, what baseline DOS will use to measure resulting program changes over time, what data DOS needs to begin to collect, and whether DOS could use periodic surveys to track employee attitudes about the program as additional indicators of success.
The United States Attorney General: Build on current efforts to measure the impact of DOJ’s Attorney Student Loan Repayment Program by determining now what indicators the department will use to track program success, what baseline DOJ will use to measure resulting program changes over time, what data DOJ needs to begin to collect, and whether DOJ could use periodic surveys to track employee attitudes about the program as additional indicators of success.

The Chairman of the Securities and Exchange Commission: Build on current efforts to measure the impact of SEC’s SLR program by determining now what indicators SEC will use to track program success, what baseline SEC will use to measure resulting program changes over time, what data SEC needs to begin to collect, and whether SEC could use periodic surveys to track employee attitudes about the program as additional indicators of success.

We provided a draft of this report to the Director of OPM, the Secretary of State, the Attorney General, the Chairman of SEC, the Administrator of GSA, the Secretary of Energy, the Secretary of Transportation, the Secretary of Commerce, the Commissioner of SSA, the Chair of EEOC, and the Administrator of SBA. OPM, DOS, DOJ, and DOE provided written comments on the draft report, which are included in appendixes III, IV, V, and VI, respectively. SBA provided a comment on the report via e-mail, and on behalf of the Secretary of Commerce, the Director of the Office of Human Resources Management stated that Commerce concurred with the report. SEC, DOT, SSA, and EEOC provided technical comments, and where appropriate, we have made changes to the report to reflect all of the agencies’ technical comments. GSA reported that it had no comments on this report. The following summarizes significant comments provided by the agencies.
OPM generally agreed with the recommendations and stated that it will continue its efforts to promote effective human capital strategies and, as part of these efforts, will work with the CHCO Council to improve the administration of the SLR program and facilitate the sharing of best practices to improve program efficiency. OPM also stated that it would assist the agencies in establishing data requirements for tracking the use of student loan repayments and noted the agency anticipates a greatly improved ability to track and measure the success of the SLR program. DOS fully supported the recommendations and stated that it looks forward to working constructively with OPM to identify possible areas of program consolidation and to share best practices. The department reported that it is committed to establishing additional program indicators this year and is aware of the need to measure and track the impact the SLR program has had on both civil and foreign service recruitment and retention efforts. DOJ did not express an opinion about the report or the recommendations but stated that the department has already started to develop ways to measure the impact of the attorney SLR program on attorney retention. DOJ also emphasized that it will most likely take a number of years of data collection before it accumulates sufficient data to provide meaningful statistics. DOE stated that the report did not fully describe the efforts of OPM in assessing program implementation as part of its annual reporting process to Congress. We added language in the report to more comprehensively characterize what OPM included in its most recent report. DOE also suggested that GAO recommend that OPM assist agencies in measuring the effectiveness of specific student loan repayment, recruitment, and retention incentives by including questions in the Federal Human Capital Survey. 
While this may be a feasible and effective approach to collecting data on program results, we did not prescribe the methods OPM should develop or use to measure the effectiveness of the program, but instead recommended that OPM work jointly with the agencies and the CHCO Council to devise these means. SBA said that the agency will periodically monitor the use of the program in other agencies through the CHCO Council so that should the need arise, SBA will be in a position to implement the best aspects of other agencies’ programs. We are sending copies of this report to other interested congressional parties, the Director of OPM, the Secretary of State, the Attorney General, the Chairman of the SEC, and the heads of the other federal agencies discussed in this report. In addition, we will make copies available to other interested parties upon request. This report also will be made available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or larencee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of the report. Other contributors are acknowledged in appendix VII. The objectives of our review were to identify why selected executive branch agencies are using or not using the student loan repayment (SLR) program, how agencies are implementing the SLR program, and what results and suggestions agency officials could provide about the program and how they view the Office of Personnel Management’s (OPM) role in facilitating the program’s use. To address these objectives, we first reviewed and analyzed OPM’s annual reports to Congress on the SLR program to obtain governmentwide data on agencies’ use of the program and to help identify our case study agencies. 
We also consulted with an official at the Congressional Research Service (CRS) to discuss its research on the SLR program, and we reviewed CRS’s reports to Congress on student loan repayment for federal employees. We interviewed officials from the Partnership for Public Service, an organization with an objective of helping to recruit and retain excellence in the federal workforce, to hear its views on the program’s effectiveness governmentwide, and officials from GAO’s human capital office to get background information on program implementation. We then identified a set of federal agencies varying in size and mission that had established SLR programs, were in the process of establishing programs, or had chosen not to use them. We selected the Department of State (DOS), the Department of Justice (DOJ), and the Securities and Exchange Commission (SEC) as case study agencies because they were among the largest users of the SLR program in fiscal years 2003 and 2004, while the General Services Administration (GSA) and the Department of Energy (DOE) also use the program but make fewer loan repayments, on a case-by-case basis. We selected the Department of Transportation (DOT) and the Department of Commerce (Commerce) because they are large departments that were in the process of implementing SLR programs. Since we started our review, DOT has begun to make loan repayments. The Social Security Administration (SSA), the Equal Employment Opportunity Commission (EEOC), and the Small Business Administration (SBA) are among the larger agencies that have chosen not to use the program. The agency selection process was not designed to be representative of the use of the SLR program in the federal government as a whole, but rather to provide illustrative examples of why and how agencies decided to use the program or chose not to use it.
We interviewed agency officials, such as human capital officers, SLR program managers, and recruitment directors, from the selected agencies, and obtained available documentation, such as strategic workforce plans, recruitment and retention worksheets, SLR implementation plans, and other documents associated with administering the program. In addition, we met with officials from OPM to gain a governmentwide perspective of agencies’ SLR programs and with officials from the Department of Education to discuss the department’s Direct Loan Program and its interaction with agencies making student loan repayments. After reviewing and analyzing agency responses, we used the supporting documents that some of the agencies provided to further develop our analysis of their use of the program. We did not observe or evaluate the operation of the agencies’ SLR programs. To assess the reliability of the number of employees receiving student loan repayments and SLR repayment cost data, we compared the OPM-reported data with data we received from the selected agencies. We determined the data were sufficiently reliable for the purposes of the report. Our review was conducted in accordance with generally accepted government auditing standards from July 2004 through June 2005. This appendix provides background information on our 10 case study agencies. These agencies varied in their mission and size. The agencies also face unique recruitment and retention challenges and have different strategies for addressing them. DOS is a cabinet-level federal agency responsible for U.S. foreign affairs and diplomatic initiatives with a mission of creating a more secure, democratic, and prosperous world for the benefit of the American people and the international community. Headquartered in Washington, D.C., DOS has 250 embassies and consulates worldwide with approximately 40,000 employees composed of foreign service employees, civil service employees, and foreign service national employees.
DOS’s recruitment goals include outreach to a broader segment of the U.S. population by increasing its presence at business and other professional schools. DOS also recruits top quality candidates with management skills and language skills in Arabic, Chinese, and other difficult languages. DOJ is a cabinet-level agency whose mission is to lead foreign and domestic counterterrorism efforts, enforce federal laws, provide legal advice to the President and to all other federal agencies, investigate federal crimes and prosecute violators, operate the federal prison system, and ensure the civil rights of all Americans. DOJ is headquartered in Washington, D.C., and has 61 unit agencies nationwide. The department has approximately 100,000 employees working in occupations such as security and protection, legal, compliance and enforcement, and information technology. Currently, DOJ’s hiring challenges relate to combating terrorism. The department places priority on hiring candidates with foreign language and intelligence analysis expertise and Federal Bureau of Investigation counterterrorism agents. DOJ is moving to develop and implement a departmentwide recruitment strategy that focuses on leveraging resources for common occupations, sharing “best practices” cases on the Internet, establishing relationships with targeted universities, and participating in job and career fairs. SEC’s mission is to protect investors; maintain fair, orderly, and efficient markets; and facilitate capital formation. The agency is headquartered in Washington, D.C., and has 11 regional and district offices. SEC has approximately 3,800 employees in occupations such as securities attorneys, accountants, and examiners. The agency has developed a formal, centralized recruiting program to coordinate its recruiting efforts for these occupations. 
The agency also recently created the SEC Business Associates Program to introduce business professionals to regulation of the securities markets and the work of the commission. Individuals with master’s degrees in business or other related fields can apply directly to the program. The program offers 2-year internships designed to provide on-the-job training for talented individuals, with eligibility for conversion to a permanent position. GSA’s mission is to help federal agencies better serve the public by offering, at best value, superior workplaces, expert solutions, acquisition services, and management policies. Headquartered in Washington, D.C., GSA has regional offices in 11 cities nationwide. The agency has over 12,000 employees working in information technology, accounting and budgeting, administrative and program management, and business and industry. Currently, GSA’s workforce is relatively stable, with an average separation rate of 5 to 6 percent. The agency hires an average of 900 employees annually. GSA seeks candidates who have strong customer service, acquisition, information technology, realty, financial management, and project management skills. DOE is a cabinet-level agency whose mission is to advance the national, economic, and energy security of the United States; promote scientific and technological innovation in support of that mission; and ensure the environmental cleanup of the national nuclear weapons complex. Headquartered in Washington, D.C., DOE has regional power administrations, laboratories, and technology centers nationwide. The department has approximately 15,000 employees who work in engineering, physical sciences, compliance and enforcement, and quality assurance. DOE’s recruiting efforts focus on information technology, foreign affairs, and intelligence, as well as areas such as physical sciences and project management.
The department’s outreach efforts include participation in job and career fairs, partnerships with minority organizations, and distribution of position vacancy announcements to a variety of minority and advocacy organizations. DOT is a cabinet-level agency whose mission is to serve the United States by ensuring a fast, safe, efficient, accessible, and convenient transportation system that meets national interests and enhances the quality of life of the American people, today and into the future. The department is headquartered in Washington, D.C., and has offices nationwide. DOT has approximately 56,000 employees who work in various professional fields such as community planning and engineering. The department is focused on sustaining its current workforce numbers. DOT’s top priority will be to recruit air traffic controllers because roughly half of its current air traffic controllers could retire by 2012. In 2003, DOT created a Corporate Recruitment Workgroup that coordinates participation at various recruitment conferences and career fairs. The department has also addressed some of its entry-level hiring needs by developing a Career Residency Program, a 2-year program with a goal of broadening the search for talented transportation specialists, engineers, and information technology professionals. Commerce is a cabinet-level agency whose mission is to promote economic growth and security through export growth, sustainable economic development, and economic information and analysis. Headquartered in Washington, D.C., Commerce’s unit agencies, such as the National Oceanic and Atmospheric Administration, the Bureau of the Census, and the International Trade Administration, have offices nationwide. The department has more than 36,900 employees in a variety of professional fields.
Commerce estimates it could lose one-fifth of its current workforce to retirement by 2007, and the department plans to focus its recruitment efforts on a variety of positions such as mathematical statisticians, chemists, patent examiners, and trade specialists. Commerce is developing comprehensive college outreach relations and partnerships to recruit entry-level workers and coordinate and partner with trade associations, professional societies, and alumni organizations to attract experienced applicants. SSA’s mission is to advance the economic security of the nation’s people through compassionate and vigilant leadership in shaping and managing America’s social security programs. Headquartered in Baltimore, Maryland, SSA has regional and field offices nationwide. The agency has approximately 65,000 employees in a variety of professional fields, including the social sciences and information technology. Over the past several years, SSA has aggressively recruited between 3,000 and 4,000 employees, most at the entry level. SSA focuses recruiting efforts on positions providing direct service to the public, such as claims representatives, as well as information technology professionals. SSA has created a National Recruitment Coordinator position to develop an agencywide recruitment strategy and marketing campaign that highlights the work and impact of the agency. The agency’s recruitment and marketing plan coordinates nationwide and on-campus recruitment. SSA has also recently launched a new campaign to attract veterans to the agency. EEOC’s mission is to ensure equality of opportunity by vigorously enforcing federal laws prohibiting employment discrimination through investigation, conciliation, litigation, coordination, adjudication, education, and technical assistance. The agency is headquartered in Washington, D.C., and has 51 field offices nationwide. EEOC has approximately 2,500 employees working in various positions such as attorneys, mediators, and investigators.
On the basis of historical trends, EEOC expects at least 100 employees to separate annually, due to retirements, for the next few years. Depending on the amount of separation savings, EEOC may have the opportunity to backfill selected positions based on workload and other factors. In addition, EEOC recently announced plans to reorganize the agency by reducing levels of management, opening two new field offices, and strengthening the existing field offices. SBA’s mission is to maintain and strengthen the nation’s economy by aiding, counseling, assisting, and protecting the interests of small businesses, and by helping families and businesses recover from national disasters. Headquartered in Washington, D.C., SBA has regional offices nationwide. The agency has approximately 3,000 employees working in business analysis, contracting, and financial analysis. Currently, SBA recruitment is limited to replacing those who leave the agency. The Office of Human Resources centrally manages recruitment from headquarters and uses its recruitment Web site to communicate with prospective candidates. SBA recruitment and outreach efforts also involve using on-line newspapers to advertise work opportunities. Trina Lewis, Judith Kordahl, Kyle Adams, Jerome Brown, Sarah Jaggar, Ashutosh Joshi, Jessica Kemp, Matthew Myatt, and Tara Stephens also made key contributions to this report.

As federal workers retire in greater numbers, agencies will need to recruit and retain a new wave of talented individuals. Agencies need to determine if the federal student loan repayment (SLR) program is one of the best ways to make maximum use of available funds to attract and keep this key talent. GAO was asked to identify (1) why agencies use or are not using the program; (2) how agencies are implementing the SLR program; and (3) what results and suggestions agency officials could provide about the program and how they view the Office of Personnel Management’s (OPM) role in facilitating its use.
Ten agencies were selected to provide illustrative examples of why and how agencies decided to use or chose not to use the program. The largest users among GAO's 10 selected executive branch agencies primarily employed their SLR programs as broad-based retention tools aimed at keeping more recently hired employees with the knowledge and skills critical to their agencies. Officials at these agencies said the program also has an indirect positive effect on their recruitment efforts because job candidates are aware of the benefit and find the incentive attractive. Other agencies used the program as a recruitment and retention tool on a case-by-case basis, offering repayments to highly qualified individuals in occupations where the labor market is competitive. Agencies not using the program reported no real need to do so at this time because they are not facing significant recruitment and retention challenges. Agencies have a large degree of discretion in structuring their SLR programs, and they were tailoring program aspects to meet their unique needs. Those using their programs as broad-based retention tools operated them centrally, while those making loan repayments on a case-by-case basis had decentralized programs operated by their component units. Agencies also varied in the size of their loan repayments depending on the results they were trying to achieve. Although agencies believe it is a useful tool, officials described the program as time consuming and cumbersome to operate. They suggested that more automation and consolidation of program activities would make the program more efficient and easier to operate. Officials also suggested ways to make the program more effective. Since the SLR program is relatively new, agencies did not yet have comprehensive data to assess the program's impact, although they will need to establish a baseline of measures now for future assessments of the program. 
Currently, anecdotal evidence indicates that employees value the program, and agency officials believe the incentive will become more attractive to agencies once administrative problems are reduced. OPM has taken a number of steps to provide agencies with information and guidance on implementing the program. Human capital officials recognized OPM’s efforts, but felt they could use more assistance on the technical aspects of operating the program, more coordination in sharing lessons learned in implementing it, and help consolidating some of the program processes. OPM and the Chief Human Capital Officers (CHCO) Council have an important role in assisting agencies with implementing their SLR programs. They may also be able to help agencies assess their own program results as well as develop a common set of metrics to provide information to Congress on the impact of the SLR program governmentwide.
States’ increasing use of managed care for Medicaid beneficiaries needing long-term services and supports is a significant change from how states have historically met the needs of these vulnerable populations. While many states have extensive experience with using managed care programs to provide physical or behavioral health care services, states have not typically included beneficiaries needing long-term care services—especially seniors and adults with physical or developmental disabilities—in managed care programs. In 2004, only 8 states had implemented MLTSS programs. In contrast, as of May 2017, 27 states either had implemented MLTSS programs or were planning to implement them. (See fig. 1.) The most recent enrollment data available at the time of our study, from July 2015, showed that MLTSS programs in 18 states collectively served around 1 million Medicaid beneficiaries that year. Long-term services and supports include a broad range of health and health-related services and non-medical supports for individuals who may have limited ability to care for themselves because of physical, cognitive, or mental disabilities or conditions—and who need support over an extended period of time. Individuals needing long-term services and supports have varying degrees of difficulty performing activities of daily living, such as bathing, dressing, toileting, and eating, without assistance. They may also have difficulties with preparing meals, housekeeping, using the telephone, and managing money. Long-term services and supports to address these needs are generally provided in two settings: institutional facilities, such as nursing facilities and intermediate care facilities for individuals with intellectual disabilities; and home and community settings, such as individuals’ homes or assisted living facilities. 
HCBS cover a wide range of services and supports to help individuals remain in their homes or a community setting, such as personal care services to provide assistance with activities of daily living. MLTSS programs can vary due in part to the flexibility that Medicaid allows states in establishing their programs. For example, states have flexibility in determining which populations to include in their MLTSS programs and whether to use mandatory or voluntary enrollment. States also have flexibility in determining what services to include. In addition, states may choose to have MLTSS as part of a broader, comprehensive managed care program that also provides acute care or behavioral health care, or to have MLTSS as a separate managed care program. See table 1 for characteristics of MLTSS programs in the six states we selected for review. (App. I provides more information on the MLTSS programs in our selected states.) Within MLTSS programs, MCOs are responsible for coordinating the delivery of services to beneficiaries. To be eligible for MLTSS, beneficiaries must meet income and asset requirements, and also meet state-established criteria on the level of care needed, such as needing an institutional level of care. Once a person is determined eligible, the individual can be enrolled to receive MLTSS from an MCO. The MCO then works with the beneficiary to develop a service plan that addresses the beneficiary’s needs and preferences, including determining the type and amount of services the beneficiary needs. (See fig. 2.) For example, for a beneficiary receiving care in the home, the MCO determines if personal care services are needed and, if so, the amount of services, such as the number of hours needed per week. The MCO is then responsible for implementing this service plan and coordinating the beneficiary’s care. 
Although MCOs are responsible for coordinating MLTSS beneficiaries’ care, states remain responsible for the operation of MLTSS programs and must monitor the MCOs. State contracts establish MCO responsibilities with respect to the services the MCO is responsible for providing, the beneficiary protections that must be in place, and the information the MCO must report to the state. States then monitor MCO actions for compliance with contractual requirements. States may take compliance actions if they find that MCOs are not complying with contractual requirements and if they identify issues with MCOs’ provision of care. Compliance actions range in severity and can include informing MCOs of problems through letters or notices, issuing corrective action plans for the MCO to implement, or assessing intermediate sanctions.

States are required to seek CMS approval for their MLTSS programs, which they can implement through several different authorities. Among the most commonly used authorities are section 1115 demonstrations and section 1915(b) waivers. Before approving an MLTSS program, CMS works with the state to shape the program design, including how the program will align with CMS guidance.

In 2013, CMS issued guidance that set expectations for states seeking approval of MLTSS programs through section 1115 demonstrations or section 1915(b) waivers. In particular, CMS listed 10 key elements of effective MLTSS programs that the agency expects states to incorporate into both new and existing MLTSS programs. These elements address a range of topics, including qualified providers (or network adequacy), participant protections (including appeals and grievance processes and a critical incident management system with safeguards to prevent abuse, neglect, and exploitation), and quality (implementation of a comprehensive quality strategy for MLTSS).
For example, states must ensure that MCOs maintain a network of qualified long-term services and supports providers that is sufficient to provide adequate access to covered services; establish safeguards to ensure beneficiary health and welfare; and develop mandatory MCO reports on MLTSS quality of care performance measures, analyze those reports, and take corrective actions if needed. CMS’s guidance noted that if a state incorporated these 10 elements, it would increase the likelihood of having high-quality MLTSS programs. CMS uses these elements to review and approve states’ MLTSS programs.

When CMS approves an MLTSS program under a section 1115 demonstration or section 1915 waiver, it establishes state-specific requirements for the program and also specifies how it will oversee the program on an ongoing basis. For example, CMS may require a state to conduct specific MCO monitoring activities. In addition, CMS may require a state to submit quarterly and annual performance reports to CMS. These reports may address state-specific measures of quality and access, including information on appeals and grievances. Within CMS, oversight of MLTSS programs is a joint responsibility of the agency’s central and regional offices.

In addition to state-specific requirements, states with MLTSS programs are also subject to broader quality requirements that apply to all Medicaid managed care programs. For example, states must have an external quality review process to assess the quality of care MCOs provide to all managed care beneficiaries, including MLTSS beneficiaries. States may use an external quality review organization (EQRO)—an independent organization specializing in external quality reviews—to conduct several required external quality review activities, and must use an EQRO for an annual quality review.
States must also have a quality strategy for MLTSS programs that includes, for example, a discussion of performance measures, performance improvement projects, and state quality oversight plans. Changes to requirements for states regarding Medicaid managed care quality are slated to take effect in July 2017 or later, under CMS’s 2016 Medicaid managed care final rule, which was the first major change to Medicaid managed care regulations since 2003.

The beneficiary appeals and grievance processes are important beneficiary protections for MLTSS programs. By law, MCOs must have an internal appeals process in place so that MLTSS beneficiaries may challenge certain MCO actions, such as decisions to terminate services, as well as a process for MLTSS beneficiaries to file a grievance with the MCO regarding their care.

Appeals. A beneficiary can file an appeal in response to an MCO’s decision to, among other things, reduce services, terminate services, or deny payment for services. For example, a beneficiary could appeal an MCO’s decision to deny coverage for a specific type of MLTSS care, such as personal care services, or to reduce the number of personal care attendant hours a beneficiary will receive. After the beneficiary submits an appeal, the MCO will either approve the appeal (meaning that the MCO, through its internal appeals process, overturns its original decision and resolves the appeal in favor of the beneficiary), or deny the appeal (meaning that the MCO upholds its original decision). If an MCO denies the appeal, the beneficiary can request that the state review the MCO’s decision through the state’s fair hearing process, in which state officials rule on whether the MCO’s decision should be upheld.

Grievances. A beneficiary can file a grievance with an MCO to express dissatisfaction about any matter not covered by appeals.
For example, a beneficiary could file a grievance about difficulty getting an appointment with an MLTSS provider, concerns about the quality of MLTSS care, a provider or MCO not respecting a beneficiary’s rights, or a provider not treating the beneficiary respectfully. Beneficiaries may also submit grievances directly to the state, in a manner determined by the state, such as to the state Medicaid agency or state long-term care ombudsman. After receiving information about the beneficiary’s grievance, the MCO conducts an independent review and determines what, if any, steps are needed to resolve the grievance.

Appeals and grievance processes are slated to change, beginning in July 2017, due to changes specified in CMS’s May 2016 Medicaid managed care final rule. For example, there is a new requirement that MCOs maintain records about each grievance or appeal, including a general description of the reason for the appeal or grievance, the date received and reviewed, and the resolution at each level of the grievance or appeal. MCOs must maintain these records in a manner accessible to the state and provide them to CMS upon request. Previously, states were required to maintain information on appeals and grievances; the final rule specified what those records must include.

The six states we reviewed used a range of methods to oversee MLTSS beneficiaries’ access to and quality of care. States’ oversight methods included implementing external quality reviews, tracking performance measures, surveying beneficiaries, and reviewing medical charts, among other activities. In some cases, these oversight methods were specific to MLTSS programs, while in other cases the methods addressed MLTSS as well as other state managed care programs.
Examples of state oversight methods included the following:

External quality reviews: All six states implemented the external quality reviews that CMS requires, which involve assessing MCOs’ compliance with requirements related to quality and validating MCO performance measures and performance improvement projects. In each of these states, the state’s EQRO assessed MCO compliance with quality requirements and reported its findings to the state. Examples of EQROs’ findings included the following:

The Texas EQRO’s 2014 report found weaknesses in the state’s performance measures on effectiveness of care and made recommendations to the state to improve the care provided through the state program that provides both MLTSS and acute care for elderly beneficiaries. These included steps to improve performance on measures such as the rates of potentially preventable hospital admissions and emergency department visits.

The Delaware EQRO assessed aspects of quality and access across the two MCOs that operated both MLTSS and non-MLTSS services. The EQRO’s 2014 report to the state noted, for example, that both plans were compliant with Medicaid managed care regulations regarding quality assessment and performance improvement, but that they could improve in managing the grievance and appeals process, and ensuring appropriate resolution and communication with beneficiaries and providers.

In addition to required EQRO reviews, five of the six states reported that they had their EQROs conduct other quality oversight activities. For example, Delaware’s EQRO took part in a task force that provides a forum for sharing best practices, and identifies and implements quality improvement strategies. Tennessee contracted with its EQRO to prepare an annual report on national initiatives that may affect managed care, and conduct educational meetings for state quality staff and MCOs.
Use of MCO performance measures and beneficiary surveys: All six states tracked performance measures, which varied by state, but included measures such as rates of hospitalization, timely MCO response to beneficiary grievances, and the proportion of beneficiaries receiving certain services. For example, Texas tracked the proportion of grievances that were resolved within certain time frames, and Kansas tracked the proportion of beneficiaries receiving HCBS care who received a flu vaccine. The states also used beneficiary surveys to help monitor MLTSS care. For example, one state’s survey asked beneficiaries about their satisfaction with and ability to access services. States generally used surveys designed by the state or its EQRO, or used established surveys or questions from them, such as the National Core Indicators–Aging and Disability survey and the Consumer Assessment of Healthcare Providers & Systems.

Reviews of beneficiary information such as medical charts or case files: Five of the six states reported that they had efforts to review or audit MLTSS beneficiary information, such as medical charts, case files, or other information, to identify potential issues with MLTSS care. The frequency of their efforts ranged from quarterly to once every 3 years. For example, Arizona conducted medical chart reviews at least every 3 years, reviewing a sample of charts for MCO compliance with case management requirements in areas such as timeliness, assessments of care, and the services provided to beneficiaries. Delaware conducted quarterly on-site reviews, which included reviews of beneficiaries’ case files, level of care assessments, and each MCO’s critical incident management system, to ensure that beneficiaries were receiving necessary services and that MCOs were complying with requirements regarding MLTSS care.
Reviews of provider networks: Officials in all six states reported conducting their own assessments of MLTSS provider networks or requiring MCOs to report on their MLTSS provider networks. Kansas, for example, conducted provider network adequacy assessments and annual audits about access. Minnesota, every 2 years, surveyed geographic areas to identify provider gaps and assessed provider networks and providers’ ability to deliver services; it shared information on any identified provider gaps with its MCOs. Arizona required MCOs to submit an annual plan about provider network development, including information on any network gaps, and to report any changes in networks that would affect more than 5 percent of beneficiaries within one geographic service area.

Stakeholder meetings: Officials in all six states told us that they met with stakeholders, such as state long-term care ombudsmen, beneficiary advocates, or providers, on a regular basis to discuss beneficiaries’ experiences with MLTSS care.

The six states we reviewed varied in the extent to which—and how—they used appeals and grievance data to monitor beneficiaries’ concerns about quality and access in their MLTSS programs. We found variation, for example, in the extent to which states were collecting and using data on appeals and grievances specifically related to MLTSS care, calculating appeals and grievance rates, and monitoring the outcomes of beneficiaries’ appeals.

Collecting and using MLTSS-specific data: Two of the six states—Arizona and Texas—did not separate MLTSS appeals and grievances from those related to other managed care services or beneficiaries. In these two states, MCOs that provide MLTSS also provide non-MLTSS services, such as acute care.
While both states collected and used data on managed care appeals and grievances, they did not require MCOs to report MLTSS appeals and grievances separately from those for other managed care services and beneficiaries, or in a way that allowed the states to identify all MLTSS-specific appeals and grievances. In the other four states—Delaware, Kansas, Minnesota, and Tennessee—the MCOs reported MLTSS appeals and grievances separately from appeals and grievances related to other managed care services and beneficiaries. Within these four states, monitoring practices varied. Officials in one of these four states, for example, reviewed monthly reports on MLTSS appeals. They said appeals data helped them understand what was happening with beneficiaries on a regular basis, identify any systemic patterns in appeals, and take action if needed. They also noted that, as one way of measuring access to care, they reviewed appeals and grievance data for any beneficiary complaints about not having access to providers. In Kansas, officials said that they regularly reviewed appeals and grievances separately for all beneficiaries receiving HCBS; they reviewed appeals and grievances for beneficiaries receiving MLTSS care in a nursing facility as part of their review of the state’s broader managed care population.

Calculating appeals and grievance rates: Three states—Kansas, Minnesota, and Tennessee—calculated rates of MLTSS appeals and grievances as a proportion of beneficiary enrollment, so that they could track patterns or changes in appeals and grievances independent of changes in enrollment. One state, Delaware, calculated a rate of grievances as a proportion of beneficiary enrollment but did not calculate a rate of appeals. Officials in one of these states told us that calculating rates—rather than looking only at the numbers of appeals and grievances—allowed more meaningful comparisons of appeals and grievances across MCOs.
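The rate calculation these officials described is a simple normalization by enrollment. A minimal sketch follows; all MCO names and figures are illustrative assumptions, not actual state data, and are chosen only to show why raw counts and rates can rank MCOs differently.

```python
# Hypothetical sketch of the rate calculation described above: comparing
# MCOs by raw grievance counts can mislead when enrollment differs, so
# grievances are expressed per 1,000 enrollees. All MCO names and
# figures are illustrative, not actual state data.

grievances = {"MCO A": 480, "MCO B": 120, "MCO C": 450}
enrollment = {"MCO A": 60000, "MCO B": 8000, "MCO C": 55000}

def rate_per_1000(mco: str) -> float:
    """Grievances per 1,000 enrolled beneficiaries."""
    return grievances[mco] / enrollment[mco] * 1000

for mco in grievances:
    print(f"{mco}: {grievances[mco]} grievances, "
          f"{rate_per_1000(mco):.1f} per 1,000 enrollees")

# MCO B files the fewest grievances in absolute terms, yet has the
# highest rate once enrollment is taken into account.
```

In this sketch, the MCO with the fewest grievances (120) has the highest rate (15.0 per 1,000 enrollees), which is the kind of reordering that makes rates more meaningful than counts for cross-MCO comparison.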
Officials in this state provided an example of when the state took an action based on appeals rates. The state identified that one MCO had a significantly higher appeals rate than other MCOs, and as a result, put a temporary moratorium on the MCO’s implementation of reductions in or terminations of certain services. The state examined the reasons for the high appeals rate—which involved the MCO’s process for managing beneficiaries’ use of services—and lifted the moratorium after the MCO addressed the issues. After the state lifted the moratorium, the MCO’s appeals rate dropped to a rate similar to that of the other two MCOs. The remaining two states, Arizona and Texas, did not calculate rates of appeals and grievances based on beneficiary enrollment. We analyzed grievance rates in one state and found that one MCO—identified as MCO B in figure 3—consistently had a lower number of grievances than other MCOs in the state. However, when grievances were calculated as a proportion of enrollees, MCO B—which had fewer enrollees than other MCOs—had a higher grievance rate than most other MCOs. See figure 3 for an illustration of the difference in grievance numbers and grievance rates for two of the MCOs in this state.

Using categories of appeals and grievances: The six states varied in the extent to which—and how—they used categories of appeals or grievances to identify beneficiary concerns about specific types of services or access to care issues. States can request that MCOs report beneficiary appeals and grievances in categories based on the type of beneficiary concern. For example, a beneficiary appeal about a reduction in private duty nursing service hours could be categorized as being related to that particular type of service, and a grievance about late transportation services that caused the beneficiary to miss an appointment could be categorized as being related to transportation services.
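Category-based tracking of this kind amounts to tagging each appeal or grievance and tallying by tag. A minimal sketch, with illustrative categories and counts rather than actual state data:

```python
# Hypothetical sketch of category-based grievance tracking: each
# grievance is tagged with a category (e.g., a type of service), and
# tallies by category highlight where beneficiary concerns cluster.
# Categories and counts below are illustrative, not actual state data.
from collections import Counter

grievance_categories = [
    "transportation", "quality of care", "transportation",
    "personal care services", "transportation", "case management",
]

by_category = Counter(grievance_categories)

# Rank categories by volume to flag emerging areas of concern.
for category, count in by_category.most_common():
    print(f"{category}: {count}")
```

Here transportation tops the tally, which mirrors the kind of pattern that could prompt a state to require MCOs to work more closely with transportation providers.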
State officials told us that using categories can help them identify patterns or changes in appeals and grievances, and highlight areas where the state could take action to address beneficiary concerns. All states required MCOs to report categories of grievances and four states—Arizona, Kansas, Minnesota, and Texas—required MCOs to report categories of appeals. In the two remaining states, each state was able to review appeals decisions directly and so did not rely on MCOs to categorize appeals.

Examples of Appeals and Grievances Categories

Minnesota had managed care organizations (MCO) categorize appeals and grievances by setting of care, type of service, and the type of issue the beneficiary raised. For example, regarding the types of issues MCOs could report, appeals categories included services and benefits; failure to provide services within contractual timelines; and billing and financial issues, among others. Grievance categories included coordination of care; technical competence and appropriateness; quality of care; quality of service; and case management, among others.

State officials said they regularly review MCOs’ grievance data and evaluate the grievance categories, working to refine the categories to make them as useful as possible. For example, they evaluate MCOs’ explanations for grievances they categorized as an “other” type of grievance (as opposed to a specific category), in order to identify new types of beneficiary concerns. Arizona used several categories of grievances, such as access to care, medical services provision, and transportation. State officials provided an example of how they adjusted categories to reflect emerging areas of concern. They explained that transportation services, which enable MLTSS beneficiaries and other beneficiaries to access care, had the highest number of grievances. As a result, the state required MCOs to work more closely with transportation providers.
In addition, the state refined its grievance categories to better track specific types of transportation concerns, such as the timeliness of pick up, unsafe driving, and missed or late appointments.

Monitoring appeals outcomes: The six states varied in the extent to which they monitored whether the appeals that MLTSS beneficiaries filed were ultimately approved or denied by MCOs—that is, whether MCOs reversed their initial decisions to reduce or terminate services or to deny coverage for MLTSS care. Officials from one state said that data on appeals outcomes, particularly decisions where the MCO reversed its initial decision, are as important as the data on the appeals themselves. They noted that if MCOs often reverse their decisions, it indicates a problem with beneficiaries being put through appeals unnecessarily. Four states—Delaware, Kansas, Minnesota, and Tennessee—monitored the outcomes of MLTSS appeals. Arizona monitored the outcomes of appeals for its managed care programs generally, though its appeals outcome data did not distinguish all MLTSS-related appeals from other types of appeals. Finally, one state—Texas—had not previously required MCOs to report information about appeals outcomes, but began requiring MCOs to do so during the course of this study, starting in September 2016.

Two of the six states’ Medicaid agencies—in Delaware and Tennessee—were actively involved in determining appeals outcomes. In Delaware, nursing staff with the state Medicaid agency reviewed each appeal and represented the state as a voting member on MCO panels for appeals decisions. In Tennessee, the state directly receives and processes all appeals and shares them with the MCO, which then reconsiders its original decision. If the MCO upholds its decision, the state completes its own review and determines whether to uphold or overturn the MCO’s decision.
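The appeals-outcome measure discussed in this section reduces to a simple proportion: the share of appeals resolved in the beneficiary's favor, i.e., the share of MCO decisions overturned. A minimal sketch, with illustrative figures rather than actual state data:

```python
# Hypothetical sketch of the appeals-outcome (overturn) measure: the
# share of appeals in which the MCO's initial decision was reversed in
# favor of the beneficiary. Figures below are illustrative only.

def overturn_rate(approved: int, denied: int) -> float:
    """Share of appeals in which the MCO's initial decision was overturned."""
    total = approved + denied
    return approved / total if total else 0.0

# e.g., 35 appeals approved (decision overturned) and 65 denied (upheld)
print(f"Overturn rate: {overturn_rate(35, 65):.0%}")  # prints "Overturn rate: 35%"
```

A persistently high overturn rate is the signal the state officials described: it suggests beneficiaries are being put through appeals unnecessarily.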
Officials from both states said state involvement helped the state identify trends in appeals and address issues, and Delaware officials believed that their involvement was facilitated by the relatively small size of the state. In the remaining four states—Arizona, Kansas, Minnesota, and Texas—appeals outcomes were decided by MCOs without state involvement, though beneficiaries in all states had the right to request a state fair hearing, which could overturn the MCO’s decision. States varied in the extent to which appeals resulted in MCOs’ decisions being upheld or reversed. In the two states where the state Medicaid agency was actively involved in the appeals process, a greater share of beneficiary appeals were resolved in favor of the beneficiary—in other words, a greater share of MCOs’ initial decisions were overturned—than in the other states. Other factors, such as the type of services being appealed, or the beneficiary populations included in the appeals data, may also affect the rate of appeals approved. (See fig. 4.)

All six states reported taking compliance actions against MCOs in response to issues they identified that affected MLTSS beneficiaries, though to varying degrees. States identified issues through their MCO monitoring efforts and other means. States took various actions to resolve those issues, ranging from warning letters or notices to MCOs to financial penalties. For example, in Delaware, the state Medicaid agency issued a formal notice to an MCO about deficiencies the state identified in its quarterly reviews of beneficiaries’ medical charts. Delaware found deficiencies with respect to beneficiary contact with behavioral health providers, and difficulty in scheduling timely coordination of care meetings. Arizona assessed financial penalties in response to an MCO’s failure to coordinate medically necessary transportation.
The state identified the issue through hundreds of beneficiary grievances related to transportation services, which the state tracked to a transportation provider that served MLTSS and other beneficiaries. The prevalence of compliance actions varied across our selected states; some states, for example, reported over 20 instances in which they required MCOs to submit corrective action plans to address issues that affected MLTSS beneficiaries, while other states reported using few corrective action plans from 2013 through 2015.

CMS generally relies on the quarterly and annual reporting requirements stipulated in states’ special terms and conditions as its framework for monitoring access and quality in states’ MLTSS programs. CMS’s reporting requirements are customized for each state, and as such, the content and specificity of reports can vary by state. CMS officials told us that as state reports are received, central and regional office staff review them for compliance with federal regulations and the state’s particular reporting requirements. Agency officials explained that after reviewing the state reports, regional office staff can contact state Medicaid officials as necessary with questions or concerns. CMS officials indicated that all six of our selected states were compliant with their reporting requirements, and that the agency did not request additional reports from the states from 2013 through 2015. Also, all of our selected states were required to have meetings with CMS at varying intervals, depending on the state. The frequency of these meetings was determined when CMS approved the states’ special terms and conditions, and ranged from bimonthly to quarterly.

While CMS has specified certain parameters for state oversight of MLTSS, the agency did not always require the six selected states to report the information needed to monitor this oversight.
CMS’s 2013 guidance for MLTSS programs highlights the 10 elements that it deems essential for developing and maintaining high-quality programs, which CMS uses when reviewing or approving state MLTSS programs. In particular, this guidance establishes key elements to ensure access and quality, including qualified providers (which includes an adequate network of qualified providers), participant protections (which includes appeals and grievance processes and reporting of critical incidents), and quality. Further, CMS’s guidance says that states should provide reports to CMS to demonstrate their oversight of these elements. In addition, federal internal control standards stipulate that agencies conduct monitoring and evaluation activities. In our review of the reporting required of our selected states, however, we found that CMS did not require all states to report on certain areas related to those key elements—namely network adequacy, that is, the sufficiency in the number and types of long-term care providers serving beneficiaries in the managed care plans; critical incidents, which are events or situations that cause or may cause harm to a beneficiary’s health or welfare, such as abuse, neglect, or exploitation; and appeals and grievances. As a result, we found cases where state reporting did not allow CMS to assess state adherence to federal guidance and oversight of MLTSS access and quality.

Network adequacy. CMS did not require three of our six selected states—Arizona, Minnesota, and Tennessee—to regularly report information on network adequacy, but it did require Delaware, Kansas, and Texas to report such information. As part of states’ oversight responsibilities of MCOs, CMS requires states to ensure that MCOs maintain a network of providers that is sufficient to provide adequate access to all covered services, and includes network adequacy as 1 of the 10 elements it uses to review, approve, and renew MLTSS waivers.
CMS regulations direct MCOs to submit assurances of network adequacy to the state. However, CMS currently does not require that states report this information to the agency unless it is stipulated in the state’s reporting requirements, or if CMS requests it. CMS officials said that the agency can request network adequacy information from the states, even though it may not be part of the reporting requirements in the states’ special terms and conditions. Given that in recent years CMS has not requested that any of our selected states provide additional information, including network adequacy assurances, the agency may miss potential network adequacy issues in states where there are no specific reporting requirements. Without ongoing monitoring of network adequacy, CMS may not be able to identify when enrollment or other trends begin to erode beneficiary access to MLTSS.

Critical incidents reports. CMS required three of our six states—Delaware, Kansas, and Minnesota—to submit analyses or summaries of their MCOs’ critical incidents reports, but did not require the other three states—Arizona, Tennessee, and Texas—to do so. Even though Delaware was required to submit information on critical incidents, in our review of two of the state’s 2015 quarterly reports, we did not find summaries or data on critical incidents. In addition, Delaware’s annual report did not provide any information on critical incidents in the state, but described how the state collects and tracks critical incidents and their outcomes on a monthly basis. This gap in Delaware’s reporting, and the lack of a requirement to report in Arizona, Tennessee, and Texas, means that CMS cannot directly monitor the degree to which critical incidents are occurring in these states or how the states are tracking and resolving incidents that involve reports of abuse, neglect, or exploitation of vulnerable beneficiaries.

Appeals and grievances.
CMS required all states to report information on complaints or problems reported by consumers, of which appeals and grievances are an important part. However, the level of detail CMS required from each state varied. For example, CMS’s reporting requirements for Delaware, Kansas, and Minnesota specifically included a request for MCO appeals and grievance reports with outcomes or overturn rates, which represent the extent to which MCOs reverse their decisions to deny certain services, and which can indicate potential access problems. However, for the other states—Arizona, Texas, and Tennessee—CMS required only that they report a summary of the types of complaints or grievances that consumers identified about the program in a quarter, including any trends, resolutions of complaints or grievances, and any actions taken or planned to prevent other occurrences. In addition, CMS included language in Texas’s reporting requirements that required the state to report on appeals, but not necessarily appeals outcomes. A lack of specificity in the reporting requirements may result in CMS not receiving necessary information on beneficiary appeals and grievances. For example, CMS’s use of such a broad reporting requirement yielded the following reporting responses from the three states:

Arizona provided appeals and grievance summaries for two specific programs, but not for the MLTSS population as a whole. CMS officials acknowledged that the grievance and appeals data included in Arizona’s quarterly and annual reports were only for those two programs, which aligned with reporting requirements in the state’s special terms and conditions. CMS officials told us that they can request additional reports from states at any time, but they had not done so.

Texas did not require its MCOs to report appeals outcomes as of April 2016. However, Texas officials indicated that as of September 2016, they began to require MCOs to report appeals outcomes.
Tennessee provided appeals data including appeals outcomes in its quarterly report.

As noted earlier, a number of selected states examined MLTSS-related appeals and grievance data—including the rates and categories of appeals and grievances by managed care plans, as well as appeals outcomes—to identify potential areas for greater MCO oversight. Even though the rates of appeals or grievances were available in four of our selected states, CMS did not require any of the states to report them. Furthermore, without requiring states to report readily available information on the rates of appeals and grievances and appeals outcomes, CMS may not be able to identify trends in consumer complaints and denied appeals in a timely manner, and may not be able to identify MCOs that may be inappropriately reducing or denying services.

Example of One State’s Reporting Requirements on Events That May Affect Access to Care

Events occurring during the quarter or anticipated to occur in the near future that affect health care delivery, including but not limited to: systems and reporting issues; approval and contracting with new plans; benefits; enrollment; grievances; proposed or implemented changes; quality of care; changes in provider qualification standards; access; proposed changes to payment rates; health plan financial performance and the implementation of managed long-term services and supports, that is relevant to the demonstration; pertinent legislative activity; and other operational issues.

We also found cases where CMS’s reporting requirements lacked detail, which may have limited the usefulness of the information states provided in certain sections of their reports. Although CMS required all of our selected states to report on “events that may affect access to care” (see sidebar), as well as quality assurance efforts, the requirements were broadly written, and as such, they may not garner the information needed for CMS to monitor access and quality.
For example, CMS used the same, or similar, statement to indicate that all states should report on quality assurance efforts: "Identify any quality assurance and monitoring activities in the quarter." In response to this, we found that four states reported general descriptions of their planned and ongoing quality assurance activities for MLTSS or their comprehensive managed care programs as a whole, and often repeated the same or similar information in subsequent quarterly reports. For example, in Minnesota's quarterly reports, the state provided little information about its quality assurance efforts other than a description of how the state has a team that meets twice a year to review and analyze performance measure and remediation data. Furthermore, the same information is repeated in multiple quarterly reports.

The Centers for Medicare & Medicaid Services' (CMS) Onsite Review of KanCare: In response to hundreds of complaints from beneficiaries, providers, and advocates voiced directly to CMS between late 2015 and mid-2016, in October 2016 the agency conducted a detailed, on-site review of KanCare, Kansas's comprehensive managed care program that includes managed long-term services and supports (MLTSS). For this review, CMS requested documentation from the state beyond what the state is required to report—such as managed care organization (MCO) oversight policies and procedures. The agency also reviewed information on specific complaints, and met with state officials in multiple state agencies to discuss overarching concerns and to remediate individual complaints. As a result of this review, CMS found systemic, longstanding program deficiencies in Kansas's state oversight that it had not previously identified from the information obtained through the state's required reporting.
Specifically, CMS found that the Kansas state agency was substantively out of compliance with federal statutes and regulations as well as with its approved state plan, and that this noncompliance "placed the health, welfare, and safety of KanCare beneficiaries at risk and required immediate action." CMS also found that Kansas's state agency's oversight of its MCOs had diminished since the beginning of its operation, that it did not seem to be analyzing access to care reports, and that it did not have a comprehensive system for reporting and tracking critical incidents, among other issues. As of July 2017, Kansas was implementing a corrective action plan to address these issues.

Beyond its review of state reports and external quality review, CMS does not have one consistent approach for monitoring MLTSS programs. Instead, CMS customizes its monitoring of MLTSS to each state's program to accommodate the variability among MLTSS programs. The customized approach to monitoring is reflected in the quarterly and annual reporting requirements in the program's special terms and conditions. When asked about differences in content and specificity in reporting requirements for the same elements across states, agency officials said that these differences could be partly due to changes in the staff who write the reporting requirements. They also said that the terminology of requirements may evolve as state programs age, with later versions reflecting more refined language. Also, states with more recently approved programs may have requirements that reflect lessons CMS staff has learned about the programs. However, any gaps in reporting requirements, and gaps in state reporting from what CMS has required, may mean that CMS does not always have the data to monitor key aspects of MLTSS access and quality among selected states and may be unable to reliably detect state or MCO practices that do not meet CMS's guidance.
See sidebar for an example of how oversight of access and quality is diminished when CMS does not obtain necessary information. The new 2016 managed care final rule will require states to report annually on their managed care programs, beginning one year following the release of new CMS guidance. The managed care rule specifies that annual reports must include, among other things: appeals, grievances, and state fair hearings; access and availability of services; MCO performance on quality measures; and results of any corrective action plans, sanctions, or other actions taken by the states. At the time of our review, the specific requirements were not yet known, including whether states would need to address MLTSS programs separately from managed care programs for acute care services, which have different networks of providers. As of July 2017, HHS had not yet issued guidance clarifying the format of the annual reports. Using managed care to deliver long-term services and supports offers states an opportunity to allow Medicaid beneficiaries with significant health needs to live and receive care in the setting of their choice, expand access to home and community-based care, and provide such care at a potentially lower cost than institutional care. Although states’ increasing use of MLTSS can yield benefits for improved access to quality care, it also heightens the importance of federal and state oversight, which is critical to ensure that the potentially vulnerable populations served by these programs—such as the elderly and adults with physical or developmental disabilities—are able to obtain the care they need, when they need it. States rely on MCOs to coordinate MLTSS care, but remain responsible for monitoring beneficiaries’ access to and quality of care. Along with the states, CMS plays an important role in establishing requirements for MLTSS programs and overseeing states’ programs. 
To monitor MLTSS programs, CMS relies in large part on states’ reports on different aspects of their programs. CMS’s reporting requirements are critical to CMS’s oversight because they establish the foundation for the information CMS will receive about MLTSS programs and the beneficiaries they serve. However, on the basis of our review, CMS’s requirements for state reporting do not always include key elements necessary for the agency to monitor certain key aspects of MLTSS beneficiaries’ access and quality of care, including data related to appeals and grievances, network adequacy, and critical incident tracking. As a result, these requirements do not ensure CMS has information for all of the key areas identified in its 2013 guidance for MLTSS. Without state reporting requirements that provide CMS with necessary information on MLTSS programs, CMS’s ability to monitor programs, identify potential problems, and take action as needed, may be limited. To improve CMS’s oversight of states’ MLTSS programs, we recommend that the Administrator of CMS take steps to identify and obtain key information needed to oversee states’ efforts to monitor beneficiary access to quality services, including, at a minimum, obtaining information specific to network adequacy, critical incidents, and appeals and grievances. We provided a draft of this report to HHS for comment. In its comments, which are reprinted in appendix II, HHS concurred with our recommendation and described certain of its efforts to address it. HHS also stated that it is in the process of reviewing its May 2016 Medicaid managed care regulations in order to prioritize beneficiary outcomes and state priorities, and will take our recommendation into consideration as part of that review. HHS stated that it takes seriously its effort to oversee access and quality in MLTSS programs and that it shares responsibility with states to protect beneficiaries. 
HHS also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of CMS, the Administrator of the Administration for Community Living, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or at iritanik@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made key contributions to this report are listed in appendix III. Our six selected states—Arizona, Delaware, Kansas, Minnesota, Tennessee, and Texas—have managed long-term services and supports (MLTSS) programs that varied across a number of characteristics, such as program start year, cost, and enrollment. For example, the MLTSS programs in Delaware and Kansas both began within the last five years, while the MLTSS program in Arizona began over 25 years ago. In addition, in 2015, total capitated payments to managed care organizations (MCO) for MLTSS, as reported by the six states, ranged from $438.9 million in Delaware to $3.6 billion in Texas. Also, the number of MLTSS beneficiaries reported by the states ranged from 6,340 beneficiaries in Delaware to almost 98,000 beneficiaries in Texas. (See table 2.) The number of beneficiaries in some programs has changed in recent years. 
For example, between 2013 and 2015, Texas increased the number of MLTSS beneficiaries by over 145 percent, after the state expanded its community-based MLTSS program to rural areas in 2014 and began including beneficiaries receiving nursing facility care in the program in 2015. In addition to the contact named above, Susan Barnidge and Leslie V. Gordon (Assistant Directors), Shamonda Braithwaite, Robin Burke, Caroline Hale, Corissa Kiyan-Fukumoto, and Laurie Pachter made key contributions to this report. Also contributing were Vikki Porter and Emily Wilson.

Twenty-two states use MLTSS programs to provide care for Medicaid beneficiaries who need long-term support. Using managed care to deliver long-term services and supports can be a strategy for states to expand home- and community-based care, which many beneficiaries prefer, and to lower costs. However, given the potential vulnerability and needs of beneficiaries in these programs, oversight is crucial to ensure their access to quality care. GAO was asked to review states' implementation and CMS's oversight of MLTSS programs. In this report, GAO (1) described how selected states monitored MLTSS access and quality, and (2) examined the extent to which CMS oversees MLTSS access and quality in selected states. GAO reviewed federal regulations, guidance, and internal control standards. For six states selected for variation in location, program size and duration, and other factors, GAO reviewed reporting requirements, reports to CMS, and other documents. GAO also reviewed data from these states on beneficiary appeals and grievances from 2013 through 2015—the most recent data available—and interviewed state and CMS officials.
In Medicaid, long-term services and supports are designed to promote the ability of beneficiaries with physical, cognitive, or mental disabilities or conditions to live or work in the setting of their choice, which can be in home or community settings, or in an institution such as a nursing facility. States are increasingly delivering such services through managed care, known as managed long-term services and supports (MLTSS). In MLTSS, as with most Medicaid managed care programs, states contract with managed care organizations (MCO) to provide a specific set of covered services to beneficiaries in return for one fixed periodic payment per beneficiary. In addition, beneficiaries have the right to appeal an MCO decision to reduce, terminate, or deny their benefits, or file a grievance with an MCO regarding concerns about their care. The six states GAO reviewed—Arizona, Delaware, Kansas, Minnesota, Tennessee, and Texas—used a range of methods for monitoring access and quality in MLTSS programs. To oversee beneficiaries' care, GAO found that states used—to varying levels—external quality reviews, beneficiary surveys, stakeholder meetings, and beneficiary appeals and grievances data. For example, while all six states used external quality reviews and beneficiary surveys, GAO found that states varied in the extent to which—and how—they used appeals and grievances data to monitor beneficiaries' concerns about quality and access in their MLTSS programs. The Centers for Medicare & Medicaid Services (CMS)—the federal agency responsible for overseeing Medicaid—did not always require the six selected states to report the information needed to monitor access and quality in MLTSS programs. CMS primarily relied on its reviews of state-submitted reports to monitor MLTSS programs for compliance with federal regulations and state-specific reporting requirements, and what states are required to report to CMS can vary by state. 
Although CMS highlighted certain elements that it deemed essential to developing and maintaining high quality MLTSS programs in its 2013 guidance, GAO found that CMS did not require all selected states to report on these elements—namely, provider network adequacy; critical incidents, which are events that may cause abuse, neglect, or exploitation of beneficiaries; and appeals and grievances. CMS did not require three of the six states that GAO reviewed to regularly report on network adequacy or provide summaries of critical incidents. Further, although CMS requires all selected states to report on their quality assurance efforts, GAO found that states often report general descriptions of their planned and ongoing quality assurance activities for MLTSS or their entire comprehensive managed care programs. Consequently, state reporting did not always provide CMS with information needed to assess state oversight of key elements. Gaps in reporting requirements may mean that CMS does not always have information needed to monitor key aspects of MLTSS access and quality among selected states and it may not be able to reliably detect state or MCO practices that do not meet CMS's guidance. GAO recommends that CMS take steps to identify and obtain information to oversee key aspects of MLTSS access and quality, including network adequacy, critical incidents, and appeals and grievances. HHS concurred with GAO's recommendation.
Under the Rehabilitation Act, a person is considered to have a disability if the individual has a physical or mental impairment that substantially limits one or more major life activities. Existing federal efforts are intended to promote the employment of individuals with disabilities in the federal workforce and help agencies carry out their responsibilities under the Rehabilitation Act. For example, federal statutes and regulations provide special hiring authorities for people with disabilities. These include Schedule A excepted service hiring authority—which permits the noncompetitive appointment of qualified individuals with intellectual, severe physical, or psychiatric disabilities without posting and publicizing the position—and appointments and noncompetitive conversion for veterans who are 30 percent or more disabled. To qualify for a Schedule A appointment, an applicant must generally provide proof of disability and a certification of job readiness. Proof of disability can come from a number of sources, including a licensed medical professional, or a state agency that issues or provides disability benefits. The proof of disability document does not need to detail the applicant’s medical history or need for an accommodation. Executive Order 13548 committed the federal government to many of the goals of an executive order issued a decade earlier, but went further by requiring federal agencies to take certain actions. For example, Executive Order 13548 requires federal agencies to develop plans for hiring and retaining employees with disabilities and to designate a senior-level official to be accountable for meeting the goals of the order and to develop and implement the agency’s plan. In addition, OPM and Labor have oversight responsibilities to ensure the successful implementation of the executive order (see table 1). 
For the purposes of determining agency progress in the employment of people with disabilities and setting targeted goals, the federal government tracks the number of individuals with disabilities in the workforce through OPM's Standard Form 256, Self-Disclosure of Disability (SF-256). Federal employees voluntarily submit this form to disclose that they have a disability, as defined by the Rehabilitation Act. For reporting purposes, disabilities are separated into two major categories: Targeted and Other Disabilities. Targeted disabilities, generally considered to be more severe, include such conditions as total deafness, complete paralysis, and psychiatric disabilities. Other disabilities include such conditions as partial hearing or vision loss, gastrointestinal disorders, and learning disabilities. Further, Labor is given responsibilities in the executive order to improve efforts to help employees who sustain work-related injuries and illnesses return to work. In July 2010, the Protecting Our Workers and Ensuring Reemployment (POWER) Initiative was established, led by Labor. This initiative aims to improve agency return-to-work outcomes by setting performance targets, collecting and analyzing injury and illness data, and prioritizing safety and health management programs that have proven effective in the past. Labor's Office of Workers' Compensation Programs (OWCP) reviews claims under the Federal Employees' Compensation Act (FECA), 5 U.S.C. § 8101 et seq., and makes decisions on eligibility and payments. We have completed a number of reviews that have identified steps that agencies could take to provide equal employment opportunity to qualified individuals with disabilities in the federal workforce. In July 2010, we held a forum that identified barriers to the federal employment of people with disabilities and leading practices to overcome these barriers.
Participants said that the most significant barrier keeping people with disabilities from the workplace is attitudinal and identified eight leading practices that agencies could implement to help the federal government become a model employer: (1) top leadership commitment; (2) accountability, including goals to help guide and sustain efforts; (3) regular surveying of the workforce on disability issues; (4) better coordination within and across agencies; (5) training for staff at all levels to disseminate leading practices; (6) career development opportunities inclusive of people with disabilities; (7) a flexible work environment; and (8) centralized funding at the agency level for reasonable accommodations. GAO, Highlights of a Forum: Participant-Identified Leading Practices that Could Increase the Employment of Individuals with Disabilities in the Federal Workforce, GAO-11-81SP (Washington, D.C.: Oct. 5, 2010). OPM, in consultation with EEOC, OMB, and Labor, issued a memorandum in November 2010 to heads of executive departments and agencies outlining the key requirements of the executive order and what elements must be included in agency disability hiring plans. These elements include listing the name of the senior-level official to be held accountable for meeting the goals of the executive order and describing how the agency will hire individuals with disabilities at all grade levels and in various job occupations. The memorandum also described strategies that agencies could take to become model employers of people with disabilities, such as reviewing all recruitment materials to ensure accessibility for people with disabilities. To help implement the strategies, OPM contracted in December 2010 with a private firm to recruit and to manage a list of Schedule A-certified individuals from which federal agencies can hire.
OPM received 66 agency plans for promoting the employment of individuals with disabilities, representing over 99 percent of the federal civilian executive branch workforce. OPM officials reviewed all the plans, recording whether they met criteria developed by OPM based on the executive order and its model strategies memorandum. OPM also identified and informed agencies about innovative ideas included in plans. In reviewing the plans, OPM found that many agency plans did not meet one or more of its review criteria (see fig. 1). For example, OPM's review found that 29 of the 66 agency plans did not include numerical goals for the hiring of people with disabilities. OPM also found that 9 of the 66 agency plans did not identify a senior-level official responsible for the development and implementation of the plan. Finally, only 7 of the 66 plans met all of the criteria; over half of the plans met 8 or fewer of the 13 criteria. However, OPM expected agencies to begin implementing their plans immediately, regardless of any unaddressed deficiencies. Agencies met some criteria more successfully than others. For example, OPM found that 40 of the 66 agency plans included a process for increasing the use of Schedule A to increase the hiring of people with disabilities. In contrast, 29 of the 66 agency plans provided for the quarterly monitoring of the rate at which employees injured on the job successfully return to work. Beginning in June 2011, OPM provided agencies with written feedback on plan deficiencies and on numerous occasions strongly encouraged agencies to address them. However, 32 out of the 59 agencies with deficiencies in their plans had not addressed them as of April 2012. Specifically, in June 2011, OPM provided agencies with access to reviews of their plans, which identified deficiencies, through OMB's Max Information System (MAX).
According to OPM, in July 2011, a White House official told agency senior executives that they were required to address deficiencies in their plans. In October and November 2011, OPM provided agencies with a list of the deficiencies identified in their plans, and asked agencies to determine how their plans could be improved. In December 2011, OPM again told agencies they were strongly encouraged to review and address plan deficiencies and provided agencies with several examples of plans that met all of the criteria. Though the executive order does not specifically authorize OPM to require agencies to address plan deficiencies, it calls for OPM to regularly report on agencies’ progress in implementing their plans to the White House and others. In response to the executive order’s reporting requirement, OPM officials told us that they had briefed White House officials on issues related to agencies’ implementation of the executive order, but did not provide information on the deficiencies in all of the agency plans. In addition, OPM does not think that the federal government is on target to achieve the goals set in the executive order. While the executive order did not provide additional detail as to what information should be reported, providing information on the extent to which agencies’ plans have met OPM’s criteria would better enable the White House to hold agencies accountable for addressing plan deficiencies. In addition to reviewing agency plans, the executive order required OPM to develop mandatory training programs on the employment of people with disabilities for both human resources personnel and hiring managers, within 60 days of the executive order date. 
We have previously reported that training at all staff levels, in particular training on hiring, reasonable accommodations, and diversity awareness, can help disseminate leading practices throughout an agency and communicate expectations for implementation of policies and procedures related to improving employment of people with disabilities. Such policies and procedures could be communicated across the federal government with training on topics such as how to access and efficiently use the list of Schedule A-certified individuals; the availability of internships and fellowships, such as Labor's Workforce Recruitment Program; and online communities of practice established to help officials share best practices on hiring people with disabilities, such as eFedlink. In its November 2010 model strategies memorandum to heads of executive agencies, OPM stated that, in consultation with Labor, EEOC, and OMB, it was developing the mandatory training programs required by the executive order and that further information would be forthcoming. OPM officials told us in March 2012 that they are working with federal Chief Human Capital Officers (CHCO) to develop modules on topics such as using special hiring authority that will be available through HR University. Officials explained that they need to ensure that the training is uniform to ensure all personnel receive consistent information, and they expect the training modules to be ready by August 2012. Although it has yet to fully develop mandatory training programs, OPM has taken steps to train and inform federal officials about tools available to them. For example, OPM partnered with Labor, EEOC, and other agencies to provide elective training courses for federal officials involved in implementing the executive order on topics including: the executive order, model recruitment strategies, guidance on developing disability hiring plans, and return-to-work strategies.
OPM also conducted training on implementation of the executive order in July 2011 specifically for senior executives accountable for their agencies’ plans. It also offers short online videos for hiring managers on topics such as Schedule A hiring authority. Further, other governmentwide training on employing people with disabilities exists. For example, Labor’s Job Accommodation Network offers online training on relevant issues like applying the Americans with Disabilities Amendments Act and providing reasonable accommodations. Moreover, the Department of Defense’s Computer/Electronic Accommodations Program offers online training modules to help federal employees understand the benefits of hiring people with disabilities. Nevertheless, agency officials we interviewed told us that they would like to have more comprehensive training on strategies for hiring and retaining individuals with disabilities, confirming the need for OPM to complete the development of the training programs required by the executive order. For example, officials from one agency said that more training on the relationship between return-to-work efforts and providing reasonable accommodations is needed, while officials from another agency identified a need for increased awareness of the Schedule A hiring process. Executive Order 13548 requires OPM to implement a system for reporting regularly to the president, heads of agencies, and the public on agencies’ progress in implementing the objectives of the executive order. OPM is also to compile, and post on its website, governmentwide statistics on the hiring of individuals with disabilities. This is important because effectively measuring workforce demographics requires reliable data to inform decisions and to allow for individual and agencywide accountability. 
To measure and assess their progress towards achieving the goals of the executive order, agencies and OPM use data about disability status that employees voluntarily self-report on the SF-256. OPM's guidance to agencies for implementing the executive order explained that the data gathered from the SF-256 is crucial for agencies to determine whether they are achieving their disability hiring goals. Agencies also report these data to EEOC in an effort to identify and develop strategies to eliminate potential barriers to equal employment opportunities. According to the form, the data are used to develop reports to bring to light agency specific or governmentwide deficiencies in the hiring, placement, and advancement of individuals with disabilities. The information is confidential and cannot be used to affect an employee in any way. Only staff who record the data in an agency's or OPM's personnel systems have access to the information. According to draft data from OPM, as stated earlier, the government hired approximately 20,000 employees with disabilities during fiscal years 2010 and 2011. However, according to officials at OPM, EEOC, VA, Education, and SSA, accurately measuring the number of current and newly hired employees with disabilities is an ongoing challenge. While the accuracy of the SF-256 data is unknown, agency officials and advocates for people with disabilities believe there is an undercount of employees with disabilities. For example, despite the safeguards in place explaining the confidentiality of the data, agency officials and advocates for people with disabilities told us that some individuals with disabilities may not disclose their disability status out of concern that they will be subjected to discrimination. Similarly, EEOC reported that some persons with disabilities are reluctant to self-identify because they are concerned that such disclosure will preclude them from advancement.
Additionally, some individuals may develop disabilities during federal employment and may not know how to or why they should update their disability status. We have reported that regularly encouraging employees to update their disability status allows agencies to be aware of any changes in their workforce. EEOC guidance recommends that agencies request that employees update their disability status every 2 to 4 years. As previously noted, disabled veterans with a compensable service-connected disability of 30 percent or more may be noncompetitively appointed and converted to a career appointment under 5 U.S.C. § 3112. Incomplete or inaccurate data may limit an agency's ability to establish appropriate policies and goals, and to assess progress towards those goals. Labor has taken several steps toward meeting the requirements of the executive order to improve return-to-work outcomes for employees injured on the job, including pursuing overall reform of the FECA system. Specifically, Labor developed new measures and targets to hold federal agencies accountable for improving their return-to-work outcomes within a 2-year period. Agencies were expected to improve return-to-work outcomes by 1 percent for fiscal year 2011 and an additional 2 percent in each of the following 3 years over the 2009 baseline. In fiscal year 2011, the federal government had a cumulative return-to-work rate of 91.6 percent, almost 5 percent better than the target rate of 86.7 percent. Goals such as these are useful tools to help agencies improve performance. Labor is also researching strategies that agencies can use to increase the successful return-to-work of employees who have sustained disabilities as a result of workplace injuries or illnesses. The results of this study are expected to be released in September 2012. Another Labor initiative is aimed at helping the federal government rehire injured federal workers who are not able to return to the job at which they were injured.
OWCP initiated a 6-month pilot project in May 2011 to explore how Schedule A noncompetitive hiring authority might be used to rehire injured federal workers under FECA. As part of the project, OWCP provided guidance to claims staff, rehabilitation specialists, rehabilitation counselors, and employing agencies on the process of Schedule A certification and the steps it will take to facilitate Schedule A placements. According to Labor, the pilot identified obstacles to reemployment and provided input needed to determine whether such an effort could be expanded to other federal agencies. Identified obstacles included unanticipated questions from potential workers, such as whether acceptance of a Schedule A designation would require a "probationary" period, and what impact acceptance of a Schedule A position would have on their retirement benefits. Of the 48 individuals Labor screened for Schedule A certification, 45 obtained certification, of whom 5 have been placed into federal employment. Each of the four agencies we reviewed submitted a plan for implementing the executive order as required. Only VA's plan, as initially submitted, met all of OPM's criteria for satisfying the requirements of the executive order (see table 2). Education and SSA revised their plans based on feedback from OPM. Specifically, Education's revised plan states that Education will hire individuals with disabilities in all occupations and across all job series and grades. Education also clarified its commitment to coordinate with Labor to improve return-to-work outcomes through the POWER Initiative, and to engage and train managers on Schedule A hiring authority. Further, Education increased its goals for the percentage of job opportunity announcements that include information related to individuals with disabilities.
SSA revised its plan to include goals and planned activities under the POWER Initiative, including quarterly monitoring of return-to-work successes under the program and a strategy for identifying injured employees who would benefit from reasonable accommodations and reassignment. OMB submitted its plan in March 2012 but, according to OMB officials, the agency has not received feedback from OPM. Agencies had positive views about the executive order's requirement that they develop written plans to increase the number of federal employees with disabilities. In particular, Education, SSA, and VA said that the executive order provided an opportunity to further develop the written plans they already had in place for hiring and retaining employees with disabilities. Agencies were supportive of the goal of increasing the hiring and retention of federal employees with disabilities, and reported few challenges in implementing their plans to achieve this goal. Officials at all of the agencies we interviewed cited funding constraints as a potential obstacle to hiring more employees with disabilities. OMB officials also said that it was a challenge to identify individuals with the right skills and experience to fill their positions. For example, officials said that many of the candidates on OPM's list of Schedule A-certified individuals have entry-level skills and not the more advanced skills and experience that are required for positions at OMB. Agency officials cited no special challenges with respect to retaining employees with disabilities at their agencies. In October 2010, we reported on eight leading practices that could help the federal government become a model employer for individuals with disabilities. These practices, which are consistent with the executive order's goal of increasing the number of individuals with disabilities in the federal government, have been implemented to varying degrees by the four agencies we contacted for this review.
Top leadership commitment: Involvement of top agency leadership is necessary to overcome the resistance to change that agencies could face when trying to address attitudinal barriers to hiring individuals with disabilities. When leaders communicate their commitment throughout the organization, they send a clear message about the seriousness and business relevance of diversity management. Leaders at the agencies we talked with have, to varying degrees, communicated their commitment to hiring and retaining individuals with disabilities to their employees. Education has issued annual policy statements to its employees ensuring equal employment opportunity for all applicants and employees, including those with targeted disabilities, and officials told us that they routinely host events that address issues related to hiring and promoting equal employment opportunity. For example, in October 2008, Education hosted an event to encourage hiring individuals with disabilities and distributed a written guide about using Schedule A hiring authority to facilitate hiring individuals with targeted or severe disabilities, as well as disabled veterans. OMB officials said that the agency is briefing managers on the requirements of the executive order and that it planned to communicate the agency's commitment to implementing the executive order to all staff in May 2012. SSA's Commissioner announced his support for employing individuals with disabilities and encouraged employees to continue efforts to hire and promote these individuals in a March 2009 broadcast to all employees. VA said that the Secretary regularly communicates his commitment to hiring and retaining employees with disabilities through memorandums to all employees. In a September 2010 memorandum, the Secretary announced the agency's goal of increasing the percentage of individuals with targeted disabilities that it hires and employs to 2 percent in fiscal year 2011.
Accountability: Accountability is critical to ensuring the success of an agency's efforts to implement leading practices and improve the employment of individuals with disabilities. To ensure accountability, agencies should set goals, determine measures to assess progress toward goals, evaluate staff and agency success in helping meet goals, and report results publicly. Education's, SSA's, and VA's disability hiring plans all include goals that will allow the agencies to measure their progress toward meeting the objectives of the executive order. Prior to the executive order, Education issued a Disability Employment Program Strategic Plan for fiscal years 2011-2013 that established goals related to reasonable accommodations, recruitment, and retention, and offered strategies for meeting these goals, as well as ways to track and measure agency progress. At SSA, accountability for results related to the executive order is included in the performance plan of the senior-level official responsible for implementing it. VA specifically holds senior executives accountable for meeting agency numerical goals by including these goals in their contracts. Additionally, VA senior executives' contracts include a performance element for meeting hiring goals for individuals with targeted disabilities. OMB has not yet developed such goals. Regular surveying of the workforce on disability issues: Regularly surveying their workforces allows agencies to have more information about potential barriers to employment for people with disabilities, the effectiveness of their reasonable accommodation practices, and the extent to which employees with disabilities find the work environment friendly. To collect this information, agencies should survey their workforces at all stages of their employment, including asking employees to complete the SF-256 when they are hired, and asking relevant questions on employee feedback surveys and in exit interviews.
VA officials said that they encourage new employees to complete the SF-256, and SSA reminds all employees to annually review their human resource records and update or correct information, including disability data. In addition, all of the agencies we contacted survey employees to solicit feedback on a range of topics. However, only SSA and VA include a question on disability status or reasonable accommodations on these surveys. In addition, Education and SSA said that they routinely conduct exit surveys to solicit information from employees who separate from service about their reasons for leaving. While VA has an exit survey, officials said it is not consistently administered to all employees who separate. Education officials said that they have additional means of obtaining information about barriers for employees with disabilities. For example, senior managers hold open forums with staff, and employees can submit feedback to management through the agency's Intranet. Education officials also reported that employees with disabilities have formed their own group to address access to assistive technology, which has helped Education to obtain improved technology, such as videophones. OMB officials said that their Diversity Council and Personnel Advisory Board provide forums for employees to discuss diversity issues, including those related to disabilities, and share them with senior leadership. Better coordination of roles and responsibilities: Often the responsibilities related to employment of people with disabilities are dispersed, which can create barriers to hiring if agency staff defer taking action, thinking that it is someone else's responsibility. Coordination across agencies can encourage agencies with special expertise in addressing employment obstacles for individuals with disabilities to share their knowledge with agencies that have not yet developed this expertise.
All of the agencies we interviewed had, to some extent, coordinated within and across agencies to improve their recruitment and retention efforts. Specifically, each agency has a designated section 508 coordinator who assists the agency in ensuring that, as required by section 508 of the Rehabilitation Act, employees with disabilities have access to information and data that are comparable to that provided to those without disabilities. In addition, each agency has a single office or primary point of contact that is responsible for overseeing activities related to hiring and retaining employees with disabilities. Officials at all of the agencies we talked to said their agencies engaged in one or more interagency efforts to address disability issues. All of these agencies participate in the CHCO Council, which facilitates sharing of best practices and challenges related to human capital issues, including those related to employees with disabilities. In addition, Education, OMB and SSA officials said that they work with state vocational rehabilitation agencies, which can help them identify accommodations that may be needed for new hires with disabilities. Education and SSA also participate in the Federal Disability Workforce Consortium, an interagency partnership working to improve recruitment, hiring, retention, and advancement of individuals with disabilities by sharing information on disability employment issues across government. SSA and VA have also participated in the Workforce Recruitment Program for College Students with Disabilities. VA and Education have also worked together to assist disabled veterans by providing unpaid work experience at Education, which may lead to permanent employment.
Managed by Labor's Office of Disability Employment Policy and the Department of Defense's Office of Diversity Management and Equal Opportunity, this program is a recruitment and referral effort that connects federal sector employers nationwide with highly motivated college students and recent graduates with disabilities. Agency officials said that the site is useful for seeing what other agencies are doing, and that they have also shared their own practices on the site. Training for staff at all levels: Agencies can leverage training to communicate expectations about implementation of policies and procedures related to improving employment of people with disabilities, and help disseminate leading practices that can help improve outcomes. All of the selected agencies provide some training for staff at all levels on the importance of workforce diversity. They also require managers and supervisors to take training on hiring procedures related to individuals with disabilities, and the use of Schedule A hiring authority. In addition, VA requires employees at all levels to take training specifically devoted to the legal rights of individuals with disabilities. At Education, this training is required for managers and supervisors, while at SSA it is available but optional for all employees. Career development opportunities: Opportunities for employees with disabilities to participate in work details, rotational assignments, and mentoring programs can lead to increased retention and improved employee satisfaction, and improve employment outcomes by helping managers identify employees with high potential. All of the agencies we interviewed provided special work details or rotational assignments for all employees; one reported having a program exclusively for those with disabilities. Specifically, Education uses Project SEARCH to provide internships for students with disabilities to help them become ready to work through on-the-job training.
Education officials reported that some of these internships have led to permanent employment at Education. A flexible work environment: Flexible work schedules, telework, and other types of reasonable accommodations are valuable tools for the recruitment and retention of employees, regardless of disability status. Such arrangements can make it easier for employees with health impairments to successfully function in the work environment or facilitate an injured employee's return to work. All of the agencies we interviewed provide flexible work arrangements, including flexible work schedules and teleworking. These agencies also make assistive technologies, such as screen reader software, available for employees with disabilities, which can facilitate their ability to take advantage of flexible work arrangements. Education, OMB, and SSA also offer all employees opportunities for job sharing. Centralized funding for reasonable accommodations: Having a central budget at the highest level of the agency can help ensure that employees with disabilities have access to reasonable accommodations by removing these expenses from local operational budgets and thus reducing managers' concerns about their costs. Education, SSA, and VA use centralized funding accounts to pay for reasonable accommodations for employees with disabilities. At Education, a centralized fund is usually used to cover expenses related to providing readers, interpreters, and personal attendants. However, in cases where these services are needed on a daily basis, Education may require the operating unit to hire someone full-time and pay for this from its unit budget. OMB provides funding from its own budget to pay for reasonable accommodations, rather than receiving funding from the Executive Office of the President.
OMB officials also told us that they have been able to rely on the Department of Defense's Computer/Electronic Accommodations Program to help provide reasonable accommodations for some of its employees. This program facilitates access to assistive technology and services for people with disabilities, federal managers, supervisors, and information technology professionals by providing a single point of access for executive branch agencies. As the nation's largest employer, the federal government has the opportunity to be a model for the employment of people with disabilities. Consistent with the July 2010 executive order, OPM, Labor, and other agencies have helped provide the framework for federal agencies to take proactive steps to improve the hiring and retention of persons with disabilities. However, nearly 2 years after the executive order was signed, the federal government is not on track to achieve the executive order's goals. Although federal agencies have taken the first step by submitting action plans to OPM for review, many agency plans do not meet the criteria identified by OPM as essential to becoming a model employer of people with disabilities. Though the executive order does not specifically authorize OPM to require agencies to address deficiencies, regularly reporting to the president and others on agency progress in addressing these deficiencies may compel agencies to address them and better position the federal government to reach the goals of the executive order. Further, officials responsible for hiring at federal agencies need to acquire the necessary knowledge and skills to proactively recruit, hire, and retain individuals with disabilities. Agency officials we spoke with said more comprehensive training on the tools available to them, including the requirements of Schedule A hiring authority, is needed.
The mandatory training program remains in development; until it is fully developed and communicated to agencies, opportunities to better inform relevant agency officials on how to increase the employment of individuals with disabilities may be missed. Finally, concerns have been raised by stakeholders, including EEOC, OPM, and advocates for people with disabilities, about the reliability of government statistics on the number of individuals with disabilities in the federal government. Most of the concerns focus on the likelihood of underreporting given the reliance on voluntary disclosure, but the extent of the underreporting is unknown. Unreliable data hinder OPM's ability to measure the population of federal workers with disabilities and may prevent the federal government from developing needed policies and procedures that support efforts to become a model employer of people with disabilities. Determining the accuracy of SF-256 data, for example, by examining the extent to which employees voluntarily disclose their disability status and reasons for nondisclosure, is an essential step for ensuring that OPM can measure progress towards the executive order's goals. To ensure that the federal government is well positioned to become a model employer of individuals with disabilities, we recommend that the Director of OPM take the following three actions: 1. Incorporate information about plan deficiencies into its regular reporting to the president on agencies' progress in implementing their plans, and inform agencies about this process to better ensure that the plan deficiencies are addressed. 2. Expedite the development of the mandatory training programs for hiring managers and human resource personnel on the employment of individuals with disabilities, as required by the executive order. 3.
Assess the extent to which the SF-256 accurately measures progress toward the executive order's goal and explore options for improving the accuracy of SF-256 reporting, if needed, including strategies for encouraging employees to voluntarily disclose their disability status. Any such strategies must comply with legal standards governing disability-related inquiries, including ensuring that employee rights to voluntarily disclose a disability are not infringed upon. We provided a draft of this report to Education, EEOC, Labor, OMB, OPM, SSA, and VA for review and comment. In written comments, OPM agreed with the findings and recommendations identified in the report, and described actions being implemented in an effort to address them. To better ensure agencies address deficiencies identified in their disability hiring plans, OPM has begun notifying agencies that it plans to report remaining deficiencies to the president and on the OPM website by August 2012. With regard to the need to expedite the development of the mandatory training program, OPM, in coordination with partner agencies, has identified training for hiring managers and supervisors, and Human Resource personnel. Finally, OPM stated that it is engaged in discussions with the White House and stakeholder agencies to better define questions on the SF-256 to increase response rates. OPM also said it plans to work with EEOC and Labor to develop guidance for agencies to encourage voluntary self-disclosure through annual re-surveying of the workforce and providing employees with the option to complete the SF-256 when they request a reasonable accommodation. OPM expects to complete these efforts by January 2013. While these actions may help improve the accuracy of the SF-256 data, we think taking steps to assess the accuracy of the data will enhance OPM's efforts.
For example, understanding the extent to which employees do not voluntarily self-disclose their disability status and the reasons why may help target the messages agencies can use to encourage voluntary self-disclosure. Without such an understanding, OPM and agencies may miss opportunities to increase the accuracy of the data collected on the SF-256. Education, EEOC, OMB, OPM, and SSA provided technical comments, which have been incorporated into the report as appropriate. Labor and VA had no comments. We are sending copies of this report to Education, EEOC, Labor, OMB, OPM, SSA, and VA and to the appropriate congressional committees and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Yvonne Jones at (202) 512-2717 or JonesY@gao.gov, or Daniel Bertoni at (202) 512-7215 or BertoniD@gao.gov. Contact information for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Daniel Bertoni, (202) 512-7215, bertonid@gao.gov. Yvonne D. Jones, (202) 512-2717, jonesy@gao.gov. In addition to the contacts named above, Neil Pinney, Assistant Director; Debra Prescott, Assistant Director; Charlesetta Bailey; Benjamin Crawford; Catherine Croake; Karin Fangman; David Forgosh; Robert Gebhart; Michele Grgich; Amy Radovich; Terry Richardson; and Regina Santucci made key contributions to this report. Federal Employees' Compensation Act: Preliminary Observations on Fraud-Prevention Controls. GAO-12-402. Washington, D.C.: January 25, 2012. Coast Guard: Continued Improvements Needed to Address Potential Barriers to Equal Employment Opportunity. GAO-12-135. Washington, D.C.: December 6, 2011. Federal Workforce: Practices to Increase the Employment of Individuals with Disabilities. GAO-11-351T. Washington, D.C.: February 16, 2011.
Highlights of a Forum: Participant-Identified Leading Practices That Could Increase the Employment of Individuals with Disabilities in the Federal Workforce. GAO-11-81SP. Washington, D.C.: October 5, 2010. Highlights of a Forum: Actions that Could Increase Work Participation for Adults with Disabilities. GAO-10-812SP. Washington, D.C.: July 29, 2010. Federal Disability Programs: Coordination Could Facilitate Better Data Collection to Assess the Status of People with Disabilities. GAO-08-872T. Washington, D.C.: June 4, 2008. Federal Disability Programs: More Strategic Coordination Could Help Overcome Challenges to Needed Transformation. GAO-08-635. Washington, D.C.: May 20, 2008. Highlights of a Forum: Modernizing Federal Disability Policy. GAO-07-934SP. Washington, D.C.: August 3, 2007. Equal Employment Opportunity: Improved Coordination Needed between EEOC and OPM in Leading Federal Workplace EEO. GAO-06-214. Washington, D.C.: June 16, 2006. Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005. | In July 2010, the president signed Executive Order 13548 committing the federal government to become a model employer of individuals with disabilities and assigned primary oversight responsibilities to OPM and Labor. According to OPM, the federal government is not on track to meet the goals of the executive order, which committed the federal government to hire 100,000 workers with disabilities over the next 5 years. GAO was asked to examine the efforts that (1) OPM and Labor have made in overseeing federal efforts to implement the executive order; and (2) selected agencies have taken to implement the executive order and to adopt leading practices for hiring and retaining employees with disabilities. To conduct this work, GAO reviewed relevant agency documents and interviewed appropriate agency officials. GAO conducted case studies at Education, SSA, VA, and OMB. 
The Office of Personnel Management (OPM) and the Department of Labor (Labor) have taken steps to implement the executive order and help agencies recruit, hire, and retain more employees with disabilities. OPM provided guidance to help agencies develop disability hiring plans and reviewed the 66 plans submitted. OPM identified deficiencies in most of the plans. For example, though 40 of 66 agencies included a process for increasing the use of a special hiring authority to increase the hiring of people with disabilities, 59 agencies did not meet all of OPM's review criteria, and 32 agencies had not addressed plan deficiencies as of April 2012. In response to executive order reporting requirements, OPM officials said they had briefed the White House on issues related to implementation, but they did not provide information on deficiencies in all plans. While the order does not specify what information these reports should include beyond addressing progress, providing information on deficiencies would enable the White House to hold agencies accountable. OPM is still developing the mandatory training programs for officials on the employment of individuals with disabilities, as required by the executive order. Several elective training efforts exist to help agencies hire and retain employees with disabilities, but agency officials said that more information would help them better use available tools. To track and measure progress towards meeting the executive order's goals, OPM relies on employees to voluntarily disclose a disability. Yet, agency officials, including OPM's, are concerned about the quality of the data. For example, agency officials noted that people may not disclose their disability due to concerns about how the information may be used. Without quality data, agencies may be challenged to effectively implement and assess the impact of their disability hiring plans.
The Department of Education (Education), Social Security Administration (SSA), Office of Management and Budget (OMB), and Department of Veterans Affairs (VA) submitted disability hiring plans, and have taken steps to implement leading practices for increasing employment of individuals with disabilities, such as demonstrating top leadership commitment. The executive order provided SSA, VA, and Education an opportunity to further develop existing written plans. However, officials at these agencies cited funding constraints as a potential obstacle to hiring more employees with disabilities. In terms of leading practices, all four agencies have communicated their commitment to hiring and retaining individuals with disabilities and coordinated within or across other agencies to improve their recruitment and retention efforts. For example, each agency has a single point of contact to help ensure that employees with disabilities have access to information that is comparable to that provided to those without disabilities, and to oversee activities related to hiring and retaining employees with disabilities. In addition, VA holds senior managers accountable for meeting hiring goals by including targets in their contracts. Each agency requires training for managers and supervisors on procedures for hiring individuals with disabilities, and VA further requires that all employees receive training on the legal rights of individuals with disabilities. Education, SSA, and VA rely on centralized funding accounts to pay for reasonable accommodations.
GAO recommends that OPM: (1) incorporate information about plan deficiencies into its required regular reporting to the president on implementing the executive order and inform agencies about this process; (2) expedite the development of the mandatory training programs required by the executive order; and (3) assess the accuracy of the data used to measure progress toward the executive order's goals and, if needed, explore options for improving its ability to measure the population of federal employees with disabilities, including strategies for encouraging employees to voluntarily disclose disability status. OPM agreed with GAO's recommendations.
NCIC is a law enforcement database maintained by the FBI's Criminal Justice Information Services (CJIS) Division and was first established in 1967 to assist LEAs in apprehending fugitives and locating stolen property. In 1975, NCIC was expanded to add the missing persons file, covering law enforcement records associated with missing children and certain at-risk adults. The missing persons file contains records for individuals reported missing who: (1) have a proven physical or mental disability; (2) are missing under circumstances indicating that they may be in physical danger; (3) are missing after a catastrophe; (4) are missing under circumstances indicating their disappearance may not have been voluntary; (5) are under the age of 21 and do not meet the above criteria; or (6) are 21 and older and do not meet any of the above criteria but for whom there is a reasonable concern for their safety. The unidentified persons file was implemented in 1983 to include law enforcement records associated with unidentified remains and living individuals who cannot be identified, such as those individuals who cannot identify themselves, including infants or individuals with amnesia. When a missing persons record is entered or modified, NCIC automatically compares the data in that record against all unidentified persons records in NCIC. These comparisons are performed daily on the records that were entered or modified on the previous day. If a potential match is identified through this process, the agency responsible for entering the record is notified. Management of NCIC is shared between CJIS and the authorized federal, state, and local agencies that access the system. CJIS Systems Agencies (CSA)—criminal justice agencies with overall responsibility for the administration and usage of NCIC within a district, state, territory, or federal agency—provide local governance of NCIC use.
A CSA generally operates its own computer systems, determines what agencies within its jurisdiction may access and enter information into NCIC, and is responsible for assuring LEA compliance with operating procedures within its jurisdiction. An Advisory Policy Board, with representatives from criminal justice and national security agencies throughout the United States, and working groups are responsible for establishing policy for NCIC use by federal, state, and local agencies and providing advice and guidance on all CJIS Division programs, including NCIC. NamUs became operational in 2009, and was designed to improve access to database information by people who can help solve long-term missing and unidentified persons cases—those cases that have been open for 30 days or more. NamUs comprises three internet-based data repositories that can be used by law enforcement, medical examiners, coroners, victim advocates or family members, and the general public to enter and search for information on missing and unidentified persons cases. These repositories include the missing person database (NamUs-MP), the unidentified person database (NamUs-UP), and the unclaimed persons database. NamUs-MP and NamUs-UP allow automated and manual comparison of the case records contained in each. The University of North Texas Health Science Center, Center for Human Identification (UNTCHI) has managed and administered the NamUs program under a cooperative agreement with NIJ since October 2011. Two Directors within UNTCHI's Forensic and Investigative Services Unit are responsible for daily management, oversight, and planning associated with NamUs. Additionally, eight regional system administrators (RSAs) and eight forensic specialists provide individualized case support. To gain access to NCIC, an agency must have authorization under federal law and obtain an Originating Agency Identifier (ORI).
In general, to be authorized under federal law for full access to NCIC, an agency must be a governmental agency that meets the definition of a CJA. Specifically, data stored in NCIC is "criminal justice agency information and access to that data is restricted to duly authorized users," namely CJAs as defined in regulation. The CJIS Security Policy allows data associated with the missing and unidentified persons files to be disclosed to and used by government agencies for official purposes or private entities granted access by law. For example, there is a specific provision that allows these files to be disclosed to the National Center for Missing and Exploited Children, a nongovernmental organization, to assist in its efforts to operate a nationwide missing children hotline, among other things. As of February 2016, there were almost 118,000 active ORI numbers that granted authorized agencies at least limited access to NCIC. Table 1 shows the different types of users granted ORI numbers to access NCIC and their associated access levels. Unlike NCIC, any member of the public may register to use NamUs and access published case information. When cases are entered, the RSA carries out a validation process by reviewing each case entered within his or her region to ensure the validity and accuracy of the information provided and determine whether the case may be published to the public website. Before any case may be publicly published to the NamUs site, the RSA must confirm the validity of that case with the LEA or other responsible official with jurisdiction by obtaining an LEA case number or an NCIC number. The RSA also vets registration applications for non-public users—professionals affiliated with agencies responsible for missing or unidentified persons cases. In addition to the published case information, these non-public registered users may also access unpublished case information.
Table 2 shows the types of individuals that may register as NamUs users for the missing persons and unidentified persons files, and their access levels. NCIC data include criminal justice agency information, and access to such data is restricted by law to only authorized users. Because many users of NamUs are not authorized to access NCIC, there are no direct links or data transfers between the systems. In addition, NCIC and NamUs only contain information manually entered by their respective authorized users. As a result, while both NCIC and NamUs contain information on long-term missing and unidentified persons, they remain separate systems. DOJ could facilitate more efficient sharing of information on missing and unidentified persons cases contained in NCIC and NamUs. The two systems have overlapping purposes, specifically with regard to data associated with long-term missing and unidentified persons cases—both systems collect and manage data that officials can use to solve these cases. Further, three key characteristics of NCIC and NamUs—the systems’ records, registered users, and data validation efforts—are fragmented or overlapping, creating the risk of duplication. We found that, as CJIS and NIJ proceed with planned upgrades to both databases, opportunities may exist to more efficiently use data related to missing and unidentified persons cases, in part because no mechanism currently exists to share information between NCIC and NamUs. Figure 3 below describes the purpose of each system and explains how certain characteristics contribute to fragmentation, overlap, or both. (See appendix II for a non-interactive version of figure 3.) Figure 3: Comparison of Fragmentation and Overlap in Key Characteristics of the National Crime Information Center (NCIC) and National Missing and Unidentified Persons System (NamUs)
Database Records: NCIC and NamUs contain fragmented information associated with long-term missing and unidentified persons. Specifically, information about long-term missing or unidentified persons may be captured in one system but not the other. As a result, if users do not have access to or consult the missing and unidentified persons files in both data systems, they may miss vital evidence that could help to solve a given case. For example, in fiscal year 2015, 3,170 missing persons cases were reported to NamUs. During the same time period, 84,401 of the missing persons records reported to NCIC remained open after 30 days and became long-term cases. Conversely, in fiscal year 2015, 1,205 unidentified persons cases were reported to NamUs, while 830 records were reported to NCIC. NamUs also accepts and maintains records of missing and unidentified persons cases that are not published on its public website, in part because they may not meet criteria for entry into NCIC. According to NamUs officials, cases may remain unpublished for several reasons, including (1) they are undergoing the validation process, (2) they lack information required to complete the entry, (3) the responsible agency has requested the report go unpublished for investigative reasons, (4) a report has not been filed with law enforcement, or (5) law enforcement does not consider the person missing. For example, according to NamUs officials, a non-profit agency entered approximately 800 missing migrant cases that have remained unpublished on the NamUs public website because they do not have active law enforcement investigations associated with the cases. Because NCIC only accepts documented criminal justice information, it is highly unlikely that these approximately 800 cases are present in NCIC.
Since access to unpublished cases is limited to authorized LEA and medicolegal investigators who have registered as NamUs users, investigators using only NCIC cannot use information from these NamUs cases to assist in solving unidentified persons cases. In addition, the number of NCIC cases that are also recorded in NamUs varies greatly among states, further contributing to fragmentation. For example, of the long-term missing persons cases officials in each state reported to NCIC in fiscal year 2015, the proportion of these NCIC cases that were also recorded in NamUs ranged from less than 1 to almost 40 percent. However, in our nongeneralizable review of laws in Arizona, California, and New York, the state laws specifically associated with reporting missing persons cases to NCIC or NamUs did not contribute to variation in reporting rates. Specifically, in fiscal year 2015, approximately 2 to 3.5 percent of the long-term cases reported by officials in each state to NCIC were ultimately reported to NamUs. These reporting rates are very similar even though, as discussed previously, we chose these three states because they had different requirements associated with reporting missing and unidentified persons. Registered Users: Fragmentation between the records reported to NCIC and NamUs also exists because different user groups with different responsibilities enter data on missing and unidentified persons. Because different user bases report information to each system, certain types of cases may be found in one system but not the other. This creates inefficiencies for officials seeking to solve long-term missing and unidentified persons cases, who have to enter information and search both systems to get all the available information. Further, the NCIC user base is significantly larger than the NamUs user base, which likely contributes to the discrepancies in the number of long-term missing persons cases reported to each system.
As of February 2016, almost 118,000 agencies had at least limited access to NCIC, with approximately 113,000 granted full access to all 21 NCIC files, including the missing and unidentified persons files. As of November 2015, just over 3,000 individuals were registered as non-public users of NamUs-MP and approximately 2,000 individuals were registered as non-public users of NamUs-UP. These registered users represent at least 1,990 agencies, less than 2 percent of the number of agencies registered to use NCIC. One case illustrates the value of consulting both systems: In 1996, a person was reported missing and the case was entered into NCIC. Three days later, a decomposed body was found a few miles away; however, no police report was ever generated for the person’s death, nor was an entry made into NCIC. In 2013, the detective following up on the missing person case searched NamUs and found that a medical examiner had entered the unidentified remains case into NamUs. As a result, 16 years after the missing persons case was originally reported, DNA testing verified a match between the unidentified remains reported by a medical examiner to NamUs and the missing person case reported by law enforcement to NCIC in 1996. In addition to the difference in the number of agencies registered to use NCIC or NamUs, there is variation in the types of agencies that are registered with each system, possibly contributing to differences in the type of case information reported. For instance, NamUs has a larger number of registered users in the medicolegal field (either as medical examiners, coroners, forensic odontologists, or other forensic personnel), which may explain why a greater number of unidentified persons cases are reported to NamUs.
Specifically, while medical examiners and coroners represent less than 0.1 percent of NCIC’s total active ORIs, approximately 18 percent of agencies registered with NamUs have at least one user registered in the medicolegal field. Similarly, virtually all LEAs use NCIC, with only a small fraction registered to use NamUs, likely contributing to the low proportion of long-term missing persons cases reported by LEAs to both NCIC and NamUs. Additionally, members of the public who do not have access to NCIC and are not affiliated with any type of agency can report missing persons cases to NamUs. The variation in the types of users registered with NCIC or NamUs ultimately limits the usefulness of either system, as important case information may be missed by individuals who do not access both systems. According to one LEA official we spoke with, his unit has resolved more than a dozen cold cases as a result of information contained in NamUs since NamUs was established in 2009. Data Validation Efforts: NamUs uses a validation process to ensure that all missing and unidentified persons cases include either the local LEA case number or an NCIC number before they are published to the public website. NamUs also has some ad hoc processes in place, beyond routine RSA responsibilities, designed to help ensure that data in selected states on missing and unidentified persons contained in NCIC are captured by NamUs. However, while intended in part to minimize fragmentation, these processes introduce additional inefficiencies caused by overlapping and potentially duplicative activities. Specifically, as part of the NamUs validation process, at least once a year, the RSA requests records from NCIC and manually reviews the data in both systems to ensure consistency.
For example, from January 2015 through September 2015, RSAs requested and manually reviewed statewide NCIC records for at least 22,000 missing persons and 4,532 unidentified persons cases to ensure that, if cases entered into NamUs were present in NCIC, the two systems contained comparable information. According to NIJ officials, if RSAs identify errors or missing information in an NCIC record during the course of their work, they will alert the agency responsible for the case. It is then the responsibility of that agency to enter or update the NCIC record. The potential for duplication also exists when agencies want to use both NCIC and NamUs. For example, if agencies with access wanted their case data to exist in both systems, the system limitations would require them to enter the information in one system and then enter the same data in the second system, resulting in duplicative data entry. Officials from one state agency we interviewed noted that they have a full-time employee who is solely responsible for entering case data into NamUs after it has been entered into NCIC. Further, when attempting to use information from either NCIC or NamUs, users are required to access and search each system separately, and then manually compare results. Fragmentation and overlap between NCIC and NamUs result in inefficiencies primarily because there is no systematic mechanism for sharing information between the systems. According to CJIS officials, in lieu of such a mechanism, they created a standard search that state and local agencies can use to request an extract of all of their missing and unidentified persons data contained in NCIC. Upon receipt of the resulting data extract, the requesting agency would then be responsible for entering the provided data into NamUs.
However, this solution does not address the inefficiencies created by the lack of an automated mechanism, as it requires additional work on the part of responsible officials and results in the potential for duplication. We have previously reported that when fragmentation or overlap exists, there may be opportunities to increase efficiency. In particular, our prior work identified management approaches that may improve efficiency, including implementing process improvement methods and technology improvements while documenting such efforts to help ensure operations are carried out as intended. Additionally, we have reported that federal agencies have hundreds of incompatible information-technology networks and systems that hinder governmentwide sharing of information and that, as a result, information technology solutions can be identified to help increase the efficiency and effectiveness of these systems. According to CJIS officials, the most significant factors limiting a systematic information-sharing mechanism between NCIC and NamUs are that (1) access to NCIC is restricted to authorized users, (2) NamUs has not been granted specific access to NCIC by law, and (3) NamUs has a public interface. Because NamUs lacks specific statutory authority to access NCIC and the public is prohibited from accessing NCIC data, CJIS officials stated that fully exchanging data with NamUs would constitute an unauthorized dissemination of NCIC information. As a result, these officials stated that the CJIS Advisory Policy Board determined that NCIC could not be fully connected to NamUs. While there are statutory limitations regarding direct access to NCIC, there may be options to better share information that are technically and legally feasible. Thus, opportunities may exist within the current statutory framework to address fragmentation and overlap between the two systems.
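One way to picture such an option is a one-way, filtered export: an authorized agency extracts only the data fields the two systems share from an NCIC-style record and uses them to populate a NamUs entry, with no direct connection between the systems. The sketch below is purely illustrative; the field names are invented and reflect neither system's actual schema.

```python
# Hypothetical field mapping; these names are invented for illustration
# and are NOT the actual NCIC or NamUs schemas. Only fields common to
# both systems appear in the mapping, so NCIC-only data is never exported.
NCIC_TO_NAMUS = {
    "NAM": "last_first_name",
    "DOB": "date_of_birth",
    "SEX": "sex",
    "RAC": "race",
    "DLC": "date_last_seen",
    "ORI": "originating_agency_id",
}

def export_shared_fields(ncic_record: dict) -> dict:
    """Return only the fields common to both systems, renamed for a
    NamUs-style entry; any field absent from the mapping is dropped."""
    return {
        namus_field: ncic_record[ncic_field]
        for ncic_field, namus_field in NCIC_TO_NAMUS.items()
        if ncic_field in ncic_record
    }

record = {
    "NAM": "DOE, JANE",
    "DOB": "1980-05-01",
    "SEX": "F",
    "DLC": "2015-03-14",
    "ORI": "NY0303000",
    "MNP": "ncic-only-field",  # not in the mapping, so never exported
}
print(export_shared_fields(record))
```

Because only shared fields are mapped, data unique to NCIC stays within the authorized environment, and the agency itself remains responsible for submitting the resulting entry to NamUs, consistent with the access restrictions described above.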
Our review of the data elements required by each system indicates a high degree of commonality between the data that can be collected by NCIC and NamUs, which could help facilitate the sharing of information. Specifically, 12 of the 15 data fields required by NamUs for a missing persons case and 12 of the 14 data fields required by NamUs for an unidentified persons case are also present in NCIC. Further, stakeholders we interviewed from three states offered a variety of solutions to address the fragmentation and overlap between NCIC and NamUs. For example, a law enforcement official in one state noted that a notification alert could be added to NCIC to inform users when related case data was also present in NamUs. Another official stated that a query process that allowed authorized users to search information from both systems simultaneously would be helpful in minimizing the need to regularly check both systems. According to CJIS officials, a joint search function would likely require the systems to be fully integrated; however, CJIS officials noted that they had not formally evaluated the option because they believe it is currently precluded by federal law. While full integration of the two systems may be precluded, a joint search function may not equate to full integration, and authorized users with access to both systems could benefit from the efficiencies of such a search function. However, DOJ will not know whether this type of function could be technically or legally feasible until it evaluates the option. Implementing mechanisms to share information without fully integrating the systems could help improve the efficiency of efforts to solve long-term missing and unidentified persons cases using NCIC and NamUs. Officials in another state suggested that a single data entry point could be used to populate both NCIC and NamUs to minimize duplicate data entry.
This information-sharing solution has also been put forward as a requirement in several bills that have been introduced in Congress since 2009. In 2010, in response to the requirement in proposed legislation, DOJ undertook an effort to determine whether it would be technically possible to add a check box to NCIC that would allow users to indicate that they would like the case information to be automatically entered into NamUs as well. According to CJIS officials, this type of check box is already in use for other NCIC files, which means it could be technically feasible for the missing and unidentified persons files. However, according to CJIS officials, this system change was not pursued for the missing and unidentified persons files because the proposed legislation did not pass, and consequently there was no legal requirement that CJIS implement this information-sharing mechanism. Nevertheless, without evaluating this mechanism, DOJ will not know whether it is technically and legally feasible. As a result, DOJ may be missing an opportunity to share information between NCIC and NamUs that would better help users close their missing or unidentified persons cases. Both NCIC and NamUs are in the early stages of upgrading their systems; however, neither effort includes plans to improve sharing information between these systems. These ongoing upgrade processes provide DOJ with an opportunity to evaluate and document the technical and legal feasibility of options to improve sharing NCIC and NamUs missing and unidentified persons information, and to integrate appropriate changes, if any, into the next versions of the systems. According to NIJ officials, the discovery phase of the NamUs upgrade to NamUs 2.0 has been completed, and officials have developed a prioritized list of 793 items that they would like to include in the upgrade.
The feasibility of each item and timelines for implementation will be determined in an iterative process based on time and funding considerations. According to the officials, the highest priority items are related to enhancing the existing capabilities of NamUs to make them more efficient and user-friendly. Our review of the prioritization document does not indicate that efforts to improve sharing of information with NCIC are included in the ongoing upgrade. NIJ officials stated that their goal for the upgrade is to share data more easily with a variety of state and local systems. According to CJIS officials, the upgrade process for NCIC began in 2014, with a canvass of 500 state, local, tribal, and federal NCIC users to identify the type of functionality users would like to see included in an updated system. The officials said that this process yielded more than 5,500 recommendations related to all 21 files contained in NCIC. CJIS officials did not specify how many recommendations were related to the missing and unidentified persons files, but did note that they received some feedback related to improving the ability to share data with NamUs. Based on the user canvass, CJIS developed a high-level concept paper that will be discussed at the Advisory Policy Board’s June 2016 meeting. Following Advisory Policy Board approval, CJIS will begin the development process, including identifying specific tasks. CJIS officials explained that because of the uncertainty regarding approval, and the way in which the upgrade development process will be structured, there are no specific timeframes available related to the update. The officials stated it will likely be several years before there are any deliverables associated with the effort.
While we understand there are statutory restrictions regarding access to NCIC that must be adhered to, and we recognize that stakeholders may use NCIC and NamUs in distinct ways, DOJ has opportunities to explore available options that could potentially allow for more efficient use of information on missing and unidentified persons by reducing fragmentation and overlap. Without evaluating the technical and legal feasibility of options for sharing information, documenting the results of the evaluation, and, as appropriate, implementing one or more of these options, potential inefficiencies will persist. As a result, users who do not have access to information from both systems may continue to miss vital case information. Every year, more than 600,000 people are reported missing, and hundreds of sets of human remains go unidentified. Solving thousands of long-term missing and unidentified persons cases requires the coordinated use of case data contained in national databases, such as NCIC and NamUs. However, because no mechanism exists to share information between these systems, the fragmented and overlapping nature of the systems leads to inefficiencies in solving cases. Although there are statutory differences between the systems, there are potential options for sharing information—such as a notification to inform NCIC users if related case data were present in NamUs—that could reduce inefficiencies between NCIC and NamUs within the existing legal framework. The ongoing upgrade processes for both systems provide DOJ with the opportunity to evaluate the technical and legal feasibility of various options, document the results, and incorporate feasible options, as appropriate.
Without doing so, and without subsequently implementing options determined to be appropriate during the next cycle of system upgrades, potential inefficiencies will persist and users who do not have access to information from both systems may be missing vital information that could be used to solve cases. To allow for more efficient use of data on missing and unidentified persons contained in the NCIC’s Missing Persons and Unidentified Persons files and NamUs, the Directors of the FBI and NIJ should evaluate the feasibility of sharing certain information among authorized users, document the results of this evaluation, and incorporate, as appropriate, legally and technically feasible options for sharing the information. We provided a draft of this product to DOJ for review and comment. On May 13, 2016, an official with DOJ’s Justice Management Division sent us an email stating that DOJ disagreed with our recommendation, because DOJ believes it does not have the legal authority to fulfill the corrective action as described in the proposed recommendation. Specifically, DOJ stated that NamUs does not qualify, under federal law, for access to NCIC and is not an authorized user to receive NCIC data. Therefore, DOJ does not believe there is value in evaluating the technical feasibility of integrating NamUs and NCIC. As stated throughout this report, we understand the legal framework placed on NCIC and that it may be restricted from fully integrating with a public database. However, this statutory restriction does not preclude DOJ from exploring options to more efficiently share information within the confines of the current legal framework. Moreover, our recommendation is not about the technical feasibility of integrating NCIC and NamUs but about studying whether there are both technically and legally feasible options for better sharing long-term missing and unidentified persons information. 
We continue to believe that there may be mechanisms for better sharing this information—such as a notification alert in NCIC to inform users when related case data is also present in NamUs—that would comply with the legal restrictions. However, until DOJ studies whether such feasible mechanisms exist, it will be unable to make this determination. Without evaluating the technical and legal feasibility of options for sharing information, DOJ risks continued inefficiencies through fragmentation and overlap. Moreover, authorized users who do not have automated or timesaving access to information from both systems may continue to miss critical information that would help solve these cases. DOJ also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Attorney General of the United States, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. In response to Senate Report 113-181 (accompanying the Consolidated and Further Continuing Appropriations Act of 2015), this report addresses the following objectives: 1. Describe access to and use of missing and unidentified persons information contained in the National Crime Information Center (NCIC) and the National Missing and Unidentified Persons System (NamUs). 2. Assess the extent to which opportunities exist to improve the use of missing and unidentified persons information contained in NCIC and NamUs.
To describe the access to and use of missing and unidentified persons information contained in NCIC and NamUs, we reviewed and compared NCIC and NamUs operating and policy manuals and data entry guides. In addition, we observed access to and use of missing and unidentified persons information in NamUs. To corroborate this information, we conducted interviews with officials who access and use NCIC and NamUs, including state criminal justice agencies, state and local law enforcement agencies (LEA), medical examiners, and coroners. To determine the extent to which opportunities exist to improve the use of missing and unidentified persons information using NCIC and NamUs, we analyzed summary-level case data by state for each system for fiscal year 2015. Because of statutory limitations on access to criminal justice information contained in NCIC, we did not assess record-level case data from either NCIC or NamUs. However, we compared NCIC summary-level data to NamUs summary-level data, and found it sufficient for demonstrating the extent to which information contained in the two systems is similar or different. We assessed the reliability of the data contained in NCIC and NamUs by, among other things, reviewing database operating manuals and quality assurance protocols, and by interviewing officials responsible for managing the systems. We found the data to be reliable for our purposes. We also reviewed and compared NCIC and NamUs operating manuals and data entry guides to determine the comparability of minimum data requirements for record entry, individual data elements in each system, and their definitions. Our review of these documents allowed us to identify details about the purpose and design of each system that may support or preclude data sharing. In addition, we reviewed past and current CJIS and NIJ plans related to sharing information between NCIC and NamUs.
We reviewed laws, policies, and information associated with reporting and sharing information on missing and unidentified persons, to include information about the types of users that can access or enter information into each system within three categories: (1) LEA; (2) non-LEA criminal justice agency (CJA)—such as a court; and (3) medicolegal investigator—such as a coroner. We assessed this information against Standards for Internal Control in the Federal Government and GAO’s evaluation and management guide for fragmentation, overlap, and duplication. NCIC and NamUs assign user access differently, with NCIC assigning access at the agency level, while NamUs provides access directly to individuals. Because of this, for the purposes of comparing NCIC and NamUs users, we consolidated information from NamUs for non-public users into their relevant agencies so as not to overstate the number of NamUs users as compared to NCIC. However, there are some limitations associated with this effort. For example, for a city-wide LEA such as the New York City Police Department, NCIC assigns Originating Agency Identifier (ORI) numbers to each office within that particular agency, as the ORI number is used to indicate the LEA office directly responsible for a given NCIC record entry. When individuals register for NamUs, they may or may not provide the same level of detail regarding their specific office within a greater LEA, which means we may count an agency once for NamUs, even though that agency likely has multiple ORIs associated with it for NCIC. Further, because of the way user permissions are determined in NamUs, some LEAs with DNA or forensic specialists may also be included in the medicolegal investigator category, whereas they are likely to use only a single LEA ORI in NCIC.
To address these limitations, this report presents information about both the number and type of individual users registered with NamUs, as well as the number and type of agencies that these users represent. To corroborate this information, and to obtain more in-depth perspectives about the extent to which opportunities exist to improve the collection and use of missing and unidentified persons information, we conducted interviews. Specifically, we interviewed Department of Justice (DOJ) officials, relevant stakeholders from selected states, and officials from nongovernmental agencies, in part to learn about past and current efforts to share information between NCIC and NamUs. In addition, we selected Arizona, California, and New York to include in this review, based in part on their respective state laws and policies associated with missing and unidentified persons, as well as the number of cases reported to each database for fiscal year 2015. Specifically, after identifying the 10 states that reported the highest number of cases to both NCIC and NamUs, we then compared four characteristics of state laws and policies related to reporting missing and unidentified persons. These included whether the state law specified (1) required reporting to NCIC, NamUs, or other federal databases; (2) reporting requirements for specific populations; (3) a timeframe for reporting missing persons cases; and (4) a timeframe for reporting unidentified remains. We chose Arizona, California, and New York to provide illustrative examples of different types of state laws. Table 1 provides a high-level comparison of the reporting laws for each state we reviewed. We then selected a nongeneralizable sample of relevant stakeholders from each state to interview. Specifically, we interviewed relevant stakeholders in 3 state criminal justice agencies, 4 state and local LEAs, 2 medical examiner offices, and 1 coroner office.
Although the views expressed in these interviews cannot be generalized to each state, they provide valuable insights about the experiences of different stakeholder groups in states with varied reporting requirements. We also reviewed state documents associated with the data systems used by each state to report missing and unidentified persons information to NCIC. We conducted this performance audit from September 2015 to April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comparison of Fragmentation and Overlap in Key Characteristics of the National Crime Information Center (NCIC) and National Missing and Unidentified Persons System (NamUs)
Purpose: Both systems contain data designed to be used to solve long-term missing and unidentified persons cases.
Registered Users: Registered users of both systems must populate one system with missing and unidentified persons cases and then go through the process again to enter the same data in the second system. To use information from either system, registered users must go through an inefficient process of accessing and searching each system separately, and then manually comparing results.
Data Validation Efforts: NamUs Regional System Administrators (RSA) check NCIC as part of the NamUs validation process. In fiscal year 2015, RSAs requested and manually reviewed NCIC records for at least 22,000 missing persons and 4,532 unidentified persons cases.
Database Records: NCIC contains significantly more missing persons cases than NamUs, while NamUs contains more unidentified persons cases, limiting the usefulness of either system.
Specifically, in fiscal year 2015, 3,170 missing persons cases were reported to NamUs, while 84,401 long-term cases were reported to NCIC during the same time period. In contrast, 1,205 unidentified persons cases were reported to NamUs in fiscal year 2015, while 830 cases were reported to NCIC. Less than 0.1 percent of registered NCIC users are medical examiner or coroner offices, while approximately 18 percent of the agencies with at least one registered NamUs user are considered part of the medicolegal field. Additionally, many missing persons cases are initially reported in NamUs by members of the public who do not have access to NCIC. Consequently, potentially valuable information on missing persons cases may not be getting to all those who need it.
Diana C. Maurer, (202) 512-9627 or maurerd@gao.gov. In addition to the contact named above, Dawn Locke (Assistant Director), Elizabeth Kowalewski, Susanna Kuebler, Amanda Miller, Jan Montgomery, Heidi Nielson, Janay Sam, Monica Savoy, and Michelle Serfass made key contributions to this report.
Every year, more than 600,000 people are reported missing, and hundreds of human remains go unidentified. Two primary federal databases supported by DOJ—NCIC and NamUs—contain data related to missing and unidentified persons to help solve these cases. NCIC contains criminal justice information accessed by authorized agencies to assist with daily investigations. NamUs information can be used by law enforcement, medical examiners, coroners, and the general public to help with long-term missing and unidentified persons cases. Senate Report 113-181 (accompanying the Consolidated and Further Continuing Appropriations Act of 2015) includes a provision for GAO to review NCIC and NamUs. This report describes the access to and use of missing and unidentified persons information contained in NCIC and NamUs, and the extent to which there are opportunities to improve the use of this information.
GAO reviewed NCIC and NamUs data, and relevant state and federal statutes. GAO also conducted nongeneralizable interviews with stakeholders in three states, selected in part based on their state laws. The Federal Bureau of Investigation's (FBI) National Crime Information Center (NCIC) database includes criminal justice agency information and access to such data is restricted to authorized users. In contrast, the Department of Justice's (DOJ) National Institute of Justice (NIJ) funds and oversees the National Missing and Unidentified Persons System (NamUs), a database for which the public may register to access published case information. Because many users of NamUs are not authorized to access NCIC, there are no direct links between the systems. As a result, while both NCIC and NamUs contain information on long-term missing and unidentified persons, they remain separate systems. DOJ could facilitate more efficient sharing of information on missing persons and unidentified remains (referred to as missing and unidentified persons cases) contained in these systems. GAO found, in part, that the following three key characteristics of NCIC and NamUs are fragmented or overlapping, creating the risk of duplication. Database Records: NCIC and NamUs contain fragmented information associated with long-term missing and unidentified persons (cases open for more than 30 days). For example, in fiscal year 2015, 3,170 long-term missing persons cases were reported to NamUs while 84,401 missing persons records reported to NCIC became long-term cases. NamUs also accepts and maintains records of missing and unidentified persons cases that may not be found in NCIC because, for example, they have not yet been filed with law enforcement. As a result, users relying on only one system may miss information that could be instrumental in solving these types of cases.
Registered Users: The NCIC user base is significantly larger than the NamUs user base, and the types of users vary, which may contribute to the discrepancies in each system's data. For instance, almost all law enforcement agencies use NCIC, with only a small fraction registered to use NamUs. Additionally, members of the public do not have access to NCIC, but can report missing persons cases to NamUs.

Data Validation Efforts: In part to minimize fragmentation, NamUs uses a case validation process and other ad hoc efforts to help ensure that data on missing and unidentified persons contained in NCIC is captured by NamUs. However, these processes introduce additional inefficiencies because they require officials to manually review and enter case data into both systems, resulting in duplicative data entry. Inefficiencies exist in the use of information on missing and unidentified persons primarily because there is no mechanism to share information between the systems, such as a notifier to inform NCIC users if related case data were present in NamUs. According to FBI officials, federal law precludes full integration of NCIC and NamUs; however, opportunities to share information may exist within the legal framework to address fragmentation and overlap without full system integration. By evaluating the technical and legal feasibility of options to share information, documenting the results, and implementing feasible options, DOJ could better inform those who are helping solve missing and unidentified persons cases and increase the efficiency of solving such cases. To allow for more efficient use of missing and unidentified persons information, GAO recommends that DOJ evaluate options to share information between NCIC and NamUs. DOJ disagreed because it believes it lacks the necessary legal authority. GAO believes DOJ can study options for sharing information within the confines of its legal framework, and therefore maintains that the recommendation remains valid.
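As an illustration only, the manual cross-checking described above—searching each system separately and comparing results—amounts to a set comparison over case identifiers. The sketch below is hypothetical: the record fields and the (name, date of birth) matching rule are invented for illustration, and neither NCIC's nor NamUs's actual schema is assumed.

```python
# Hypothetical sketch of the manual cross-check: partition case identifiers
# into those present in both systems and those present in only one.
def split_cases(ncic_cases, namus_cases):
    """Compare two case lists on invented (name, date of birth) identifiers."""
    key = lambda c: (c["name"], c["dob"])
    ncic_index = {key(c) for c in ncic_cases}
    namus_index = {key(c) for c in namus_cases}
    return {
        "in_both": ncic_index & namus_index,
        "ncic_only": ncic_index - namus_index,   # candidates for NamUs entry
        "namus_only": namus_index - ncic_index,  # e.g., public-reported cases
    }

ncic = [{"name": "DOE, JANE", "dob": "1980-01-02"},
        {"name": "ROE, RICHARD", "dob": "1975-06-30"}]
namus = [{"name": "DOE, JANE", "dob": "1980-01-02"}]
result = split_cases(ncic, namus)
# result["ncic_only"] flags the case present in NCIC but absent from NamUs
```

A notifier of the kind discussed in the report would, in effect, surface the "present in the other system" portion of such a comparison automatically rather than requiring manual review.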
IRC section 501(c) specifies 28 types of entities that are eligible for tax-exempt status, and over 1.6 million entities had been recognized as exempt as of 2005. One subset of these tax-exempt entities is classified as 501(c)(3) charitable organizations, of which slightly over 1 million existed in 2005, according to IRS. In 1969, Congress directed that all 501(c)(3) organizations would be private foundations unless they qualify for exclusion from that status under IRC section 509. This change subdivided section 501(c)(3) organizations into two general categories—"public charities" and "private foundations." Within the public charities classification, Congress created supporting organizations, which are defined in section 509(a)(3) as public charities organized to support one or more public charities, including churches and certain governmental units, and certain other tax-exempt entities, such as membership-based organizations (e.g., unions and professional organizations). Supporting organizations are classified as public charities not because they are themselves publicly supported, but because they are to support another public charity with which they are to maintain a strong relationship. In creating supporting organizations, Congress recognized that it can be beneficial and prudent to place certain assets or activities in a separate legal entity to insulate assets from liability or to facilitate separation of functions for programmatic, accounting, or other reasons, according to the Panel on the Nonprofit Sector Final Report. Donor-advised funds are generally separate accounts operated by tax-exempt public charities to receive contributions from a single donor or group of donors. Donors can advise on the distributions from the account. For the contribution to qualify as a completed gift, the charity must have ultimate control over how the assets in the account are invested and distributed.
According to our interviews with knowledgeable individuals and recent Senate testimony, donor-advised funds have generally been in existence since the 1930s and have traditionally been operated by community foundations. In the 1990s, financial investment firms began establishing "commercial funds," which are tax-exempt public charities that operate donor-advised fund accounts. Investment of contributions to the fund accounts is controlled by the commercial fund's board, which hires the investment firm that established the commercial fund to manage the fund's assets. Generally, an entity must apply to IRS to obtain tax-exempt recognition. Most organizations seeking recognition of exemption from federal income tax must use specific forms, including Form 1023 (Application for Recognition of Exemption under Section 501(c)(3) of the IRC) or Form 1024 (Application for Recognition of Exemption under Section 501(a)), as well as other documentation. After receiving tax-exempt recognition, public charitable entities must annually file a Form 990 information return to report their financial transactions and activities for a tax year. Charities that have less than $100,000 in gross receipts and less than $250,000 in year-end assets may use Form 990-EZ. Entities with gross receipts below $25,000, and certain types of entities, such as churches and certain entities associated with churches, generally are not required to file. Form 990 collects information on revenues, expenses, and assets, and has accompanying schedules. Schedule A of Form 990 covers several areas such as compensation, lobbying expenditures, and revenue sources. Schedule B covers the source of contributions to charities and certain other exempt entities. Congress has granted public access to Form 990 data in recognition of the importance of public oversight to inform donors about how their money is spent and to stem potential abuses.
Private foundations, regardless of their gross receipts or assets, are required to file a Form 990-PF information return annually. IRS oversight of tax-exempt entities generally relies on two activities. First, IRS reviews applications for tax-exempt status to determine whether a tax-exempt purpose is envisioned. IRS approves those applications that are properly completed and for which the applicant can demonstrate to the satisfaction of IRS that its activities or proposed activities meet the requirements of the section under which exemption is claimed. Second, IRS annually examines selected Forms 990 to determine whether the exempt entities meet various requirements (such as properly reporting unrelated business income tax). In general, IRS attempts to select entities that it believes are likely to have violated requirements. Based on examination evidence, IRS can accept the Form 990 as filed or change the status of the entity, impose excise taxes for certain types of violations, or revoke the exempt status if the violations are serious enough. As appropriate, IRS can also assess other types of taxes, such as employment taxes or unrelated business income taxes. In 2004, the Senate Committee on Finance asked a panel of experts to make recommendations to Congress to improve oversight, transparency, and governance in the tax-exempt sector. To do so, the Independent Sector convened a Panel on the Nonprofit Sector in October 2004, which included 24 nonprofit and philanthropic leaders. The Panel issued a final report in June 2005 with over 120 recommendations, several focusing on donor-advised funds and supporting organizations.
On the basis of this report and other information, Congress has considered proposals to impose more restrictions and requirements on donor-advised funds and supporting organizations to better ensure that their contributions advance charitable rather than private interests and that their donors do not exert control or receive private benefits. Provisions in legislative proposals that apply to donor-advised funds have included providing a formal definition of a fund, setting minimum payout requirements, and placing restrictions on dealings with those who may privately benefit from charitable activities. Provisions related to supporting organizations have included those that would apply certain private foundation rules and restrictions, such as those on the annual payout requirement and excess business holdings rules. To compare the federal laws and regulations on donor-advised funds and supporting organizations with those for private foundations, we reviewed the IRC, Treasury regulations, IRS publications, and various other documents describing these laws and regulations. We also interviewed 18 IRS staff and 16 individuals knowledgeable about the tax-exempt community, such as attorneys and governmental-affairs managers at tax-exempt entities, to obtain their input about these laws and regulations and our comparison of them. To determine financial and organizational characteristics of donor-advised funds, supporting organizations, and other tax-exempt charitable organizations, we obtained and analyzed IRS Form 990 and Form 990-PF data, and reviewed survey data on donor-advised funds that were collected by The Chronicle of Philanthropy. We used the surveys to obtain data on donor-advised funds because this information was not identifiable on the Form 990. To determine the reliability of the donor-advised fund data, we interviewed The Chronicle of Philanthropy staff about their survey methodology.
For supporting organization and other tax-exempt charitable organization data fields, we obtained data from IRS's Returns Inventory and Classification System (RICS) for tax years 1999 through 2003, the 5 most recent years of data available at the time of our analysis. Because not all the data fields we wanted were available from RICS, we obtained additional Form 990 data fields from GuideStar, an organization that electronically captures Form 990 data for public access. To assess the reliability of the RICS and GuideStar data, we interviewed agency officials and conducted electronic data testing. In addition, we reviewed a selection of Forms 990 submitted to IRS to confirm that the values on the form matched those in the database. While we identified some minor discrepancies, we determined that the Form 990 data were sufficiently reliable for our purposes. The data files we obtained included the population of tax-exempt charities filing returns for those years, including supporting organizations and private foundations. Using computer software to analyze these data files, we determined summary statistics and converted dollar amounts to 2005 constant dollars. For our discussion on "payout" rate, compensation, and Form 990 revisions, we performed literature searches and interviewed 20 knowledgeable individuals from IRS's Statistics of Income (SOI) program and Tax-Exempt & Government Entities (TE/GE) division, Urban Institute, and Congressional Research Service (CRS). To describe the types of noncompliance and promotion methods involving donor-advised funds and supporting organizations, we reviewed IRS summaries of examination cases.
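Two of the adjustments described in this methodology—converting nominal dollar amounts to 2005 constant dollars and computing medians that exclude zero values—can be sketched as follows. The price-index values below are placeholders for illustration, not the actual deflators used in the analysis.

```python
from statistics import median

# Placeholder price-index values; a real analysis would use published
# deflators (e.g., CPI or the GDP deflator) for each year.
INDEX = {1999: 88.0, 2003: 96.5, 2005: 100.0}

def to_2005_dollars(amount, year):
    """Convert a nominal dollar amount to 2005 constant dollars."""
    return amount * INDEX[2005] / INDEX[year]

def median_excluding_zeros(values):
    """Median of reported amounts, dropping zeros (e.g., final returns)."""
    nonzero = [v for v in values if v != 0]
    return median(nonzero) if nonzero else None

reported_1999 = [0, 50_000, 120_000, 0, 300_000]
constant_2005 = [to_2005_dollars(v, 1999) for v in reported_1999]
typical = median_excluding_zeros(reported_1999)  # zeros excluded: 120000
```

Excluding zeros before taking the median mirrors the treatment described later in the report, where zero values (such as those on final returns) would otherwise distort the comparison of typical amounts.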
To obtain anecdotal information about noncompliance involving donor-advised funds and supporting organizations, we also interviewed 4 managers at IRS who oversee examinations of donor-advised funds and supporting organizations and 7 individuals knowledgeable about the tax-exempt community who work at organizations such as the Council on Foundations and the Independent Sector. We also interviewed 6 financial professionals and 11 community foundation managers about how donor-advised funds and supporting organizations are promoted to clients for abusive transactions. We also reviewed an IRS research report on developing abusive promoter leads by searching the Internet. To provide additional information on noncash contribution valuation methods (see app. III), we reviewed IRS publications and forms and interviewed an IRS field specialist working on valuation issues in the Large and Mid-Sized Business operating division. To obtain information on the marketing of donor-advised funds and supporting organizations (see app. IV), we spoke with 11 community foundation managers, 6 financial professionals, and 18 managers at IRS. The examples we discuss come from materials that we were referred to or located online based on our interviews, and do not necessarily represent all materials and methods used to market donor-advised funds and supporting organizations. In recent years, donor-advised funds have become popular charitable-giving vehicles, and the number of supporting organizations has also continued to increase. At the same time, federal tax law generally imposes fewer restrictions and requirements on donor-advised funds and supporting organizations, while providing them and their donors less control over the use and investment of the charitable assets than private foundations; in fact, section 501(c)(3) and federal regulations do not specifically mention donor-advised funds.
As a general principle, the more control that a donor has over the use of the charitable contributions and assets, the more regulations and restrictions apply. Table 1 discusses how federal tax law views donor-advised funds and supporting organizations compared to private foundations across a number of variables. Among the three types of charitable-giving vehicles, donor-advised funds allow donors to create a long-term vehicle for supporting charities with relatively less administrative burden because the fund is managed by a third party. Furthermore, donor-advised funds are not required to file separate tax returns, file for tax-exempt status, or adhere to private foundation rules. The donor can make a gift and take an income tax deduction for that tax year, and at that time or later, advise which charities should receive the distribution. However, in doing so, the donor gives up control over the distribution of the gift to charities. Supporting organizations are public charities that are to support one or more public charities or certain other tax-exempt organizations. They fall in between a donor-advised fund and a private foundation in terms of restrictions and sanctions versus donor control over the use of the charitable assets. For example, donors who create a supporting organization avoid private foundation excise taxes and other rules and face fewer restrictions on the deductibility of their donations, at the expense of having less control, such as involvement on the board, than donors at a private foundation. The level of control that the supported charity has over the supporting organization varies by the three basic types of supporting organizations. Type I supporting organizations are "operated, supervised, or controlled by" the supported charitable organization. Type II supporting organizations are "supervised or controlled in connection with" the supported organization.
In contrast, Type III supporting organizations are only "operated in connection with" the supported organization (see fig. 2). In reforming the rules for charitable organizations in 1969, Congress made changes to restrict and regulate private foundations more than public charities. Private foundations are generally funded and controlled by a single donor or a small number of donors and therefore may be prone to potential abuses, particularly by disqualified persons. As a result, private foundations are subject to anti-abuse rules and related sanctions that are not applicable to donor-advised funds, supporting organizations, and public charities as a whole. For example, public charities, including donor-advised fund operators and supporting organizations, are subject to restrictions and two related excise taxes for activities involving political expenditures (section 4955) and excess benefit transactions (section 4958). In contrast, private foundations are subject to six excise taxes for activities involving investment income (section 4940); self-dealing (section 4941); failure to distribute income (section 4942); excess business holdings (section 4943); investments that jeopardize the charitable purpose (section 4944); and certain "taxable expenditures" (section 4945). Although public charities, such as donor-advised fund operators and supporting organizations, and private foundations are subject to different restrictions on transactions with disqualified persons, both excess benefit and self-dealing restrictions are intended to prevent inurement or undue private benefit, which are prohibited for all section 501(c)(3) organizations. Inurement is the transfer or use of the charity's assets or income to or for the benefit of a charity's insiders. All transactions that more than incidentally benefit insiders, other than reasonable compensation and arm's length transactions, are prohibited inurement transactions.
Private benefit is a broader concept, and may involve a transfer or use of a charity's assets or income by private persons who are not necessarily insiders. Some private benefit may be allowed, but if present, must be no more than incidental to the exempt purpose being served. Unlike donor-advised funds and supporting organizations, a private foundation is required under section 4942 to distribute annually a minimum amount of its funds, equal to approximately 5 percent of the fair market value of the foundation's noncharitable-use assets (generally, stocks and other investments that compose the foundation's endowment). In 1984, Congress passed legislation that clarified what expenses can be included towards meeting this minimum "payout" requirement. If this "payout" rate is unmet, the foundation is subject to paying taxes on the undistributed amount. Donor-advised funds hold billions of dollars in assets, and supporting organizations and private foundations hold hundreds of billions of dollars in assets. Financial data on donor-advised funds are not separately identified and reported on the Form 990. Although some data on donor-advised funds have been collected through an annual survey, these data are incomplete and not statistically representative of the fund population. Using 2003 data from Forms 990 and 990-PF, we found differences between supporting organizations and private foundations. For instance, in 2003, private foundations tended to report more total assets and contributions received but lower revenues and expenses than supporting organizations. However, certain other characteristics of supporting organizations cannot be reliably determined from the Form 990 because this information is either not required to be reported or may be misreported for various reasons, according to IRS.
Specifically, supporting organizations are not required to report a payout rate or to pay out a minimum amount of funds to charities, as private foundations must do. IRS has recently revised the Form 990 to better identify supporting organizations and donor-advised funds and is considering additional revisions, but plans to further revise the Form 990 are still preliminary. Data on donor-advised funds are limited because, unlike supporting organizations and private foundations, the funds usually are not entities that file a Form 990 to report their activities. Organizations that maintain donor-advised funds are to file a Form 990 that includes the assets and other aggregate information for all activities, including for donor-advised funds, but data on these funds are not readily identified from the form because these data are not separately reported. To provide more information about donor-advised funds, The Chronicle of Philanthropy has been conducting an annual survey of organizations that maintain donor-advised funds. Started in 2000, the survey focuses on the largest donor-advised funds and collects data such as the total assets held and the amount of grants awarded. For 2003, The Chronicle of Philanthropy reported that the 90 organizations participating in its survey held over $11.9 billion in assets and distributed over $2.2 billion to charities from their donor-advised fund accounts. However, these survey results, which are one of the few data sources available for donor-advised funds, do not represent the entire population of donor-advised funds and also have other data limitations. The survey does not try to capture information for all donor-advised funds, as the population of donor-advised funds to be surveyed is unknown, and focuses on the largest funds, such as the 50 largest community foundations, by amount of money raised. 
Also, while some efforts are made to generate a high response rate and to check unusual responses, the survey response rate has ranged from 53 percent to 57 percent. Further, survey respondents vary from year to year, and the data are self-reported and cannot be checked for accuracy. Finally, the survey does not collect data for individual donor-advised fund accounts. From our analysis of Forms 990 and 990-PF, we found that supporting organizations filed nearly 21,400 Forms 990, and private foundations filed over 80,300 Forms 990-PF for tax year 2003. Table 2 summarizes differences in the amounts of assets, revenues, expenses, and contributions received when comparing 1999 and 2003. Appendix II provides additional related data, including data for the years 1999 through 2003. Table 2 shows that in 2003, private foundations outnumbered supporting organizations by more than a factor of 3, reported over $200 billion more in assets, and reported more contributions received. However, by 2003 supporting organizations reported more revenue, but also more expenses, than private foundations. Furthermore, comparing 1999 to 2003, supporting organizations tended to report growth in all of these areas while private foundations reported declines in revenue and contributions received. We were unable to determine the reasons for these changes, but the year-to-year variations during 2000, 2001, and 2002, in part due to a significant stock market decline during this time, provided some insights (see app. II for summary tables with annual data). Median values for the dollar amounts reported are shown in table 3. For the four financial characteristics listed in table 3, median values for supporting organizations were much higher than those for private foundations in both 1999 and 2003, in contrast to the higher total values for private foundations listed in table 2.
Also, the declines in supporting organization median values between 1999 and 2003 were much smaller than those for private foundations. We excluded zero values from our median analyses. IRS officials said that organizations might be reporting zero values if filing a final return or for other reasons. However, we were unable to conduct additional analysis on these zero values, particularly for total contributions received, for which over 50 percent of the values reported by supporting organizations and private foundations were zero. Some financial characteristics of supporting organizations cannot be reliably determined because they are not required to be reported on the Form 990 or may be misreported. As a result, directly comparing supporting organizations and private foundations or other tax-exempt charitable organizations can pose challenges. Being able to make these comparisons is important for addressing concerns such as how much and how often supporting organizations pay out to charities. Like private foundations, supporting organizations can be used to accumulate contributions before distributing the money to charity; unlike private foundations, however, they have no minimum payout requirement that must be reported annually. Because supporting organizations do not have this payout requirement, they do not explicitly report a payout rate, as is required for private foundations. Certain lines on the Form 990-PF allow IRS, and the public, to determine whether private foundations have met their required payout rate. For supporting organizations, factors that are included in the payout calculation for private foundations might not be readily determined from the Form 990. Without the ability to identify these additional data and clarify how they are to be accounted for in a supporting organization payout rate, supporting organizations' and private foundations' payout rates cannot be consistently compared.
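For reference, the section 4942 minimum payout test described earlier—which private foundations must satisfy and report, but supporting organizations need not—can be illustrated with a simple calculation. The figures are hypothetical, and the statutory computation includes adjustments (such as certain credits and set-asides) not modeled here.

```python
# Hypothetical illustration of the private foundation minimum payout test
# under IRC section 4942. Figures are invented for illustration only.
PAYOUT_RATE = 0.05  # approximately 5 percent of noncharitable-use assets

def undistributed_amount(noncharitable_use_assets, qualifying_distributions):
    """Shortfall, if any, against the minimum required distribution."""
    required = noncharitable_use_assets * PAYOUT_RATE
    return max(0.0, required - qualifying_distributions)

# A foundation with a $10 million endowment must distribute about $500,000;
# distributing only $450,000 leaves a $50,000 shortfall subject to excise tax.
shortfall = undistributed_amount(10_000_000, 450_000)
```

Because no comparable required figure is reported for supporting organizations, an analogous check cannot be run from Form 990 data alone, which is the comparability gap the passage above describes.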
Similarly, for donor-advised funds, payout rate has not been statutorily required or defined and consequently is also not required to be reported on the Form 990, and available data do not allow a payout rate to be determined. Despite these difficulties, researchers have studied different ways to compute a payout rate for supporting organizations. A 2005 Urban Institute study found that supporting organization payout rates could vary due to factors such as the purpose of the organization and which lines on the Form 990 were included in determining how much support was provided. The study pointed out that differences in supporting organization payout rates may reflect differences in the purpose and operation of the supporting organizations, rather than the amount of charitable support provided. For example, some supporting organizations provide operational services to their supported charities, rather than provide grants. Supporting organizations can serve to pool or manage investments or endowments for their supported organization, hold real estate, or provide services, such as office or property management. Payout rates for these types of supporting organizations might indeed be low or infrequent, since these organizations do not hold and distribute charitable funds like other supporting organizations or private foundations whose primary purpose is grant-making. While the Form 990 includes a supporting organization’s grants and net assets, using only those lines to determine a payout rate may provide an incomplete picture of the supporting organization’s charitable activity. In 2002, supporting organizations reported over $7 billion in grants as transfers of charitable support. However, in the Urban Institute study, researchers found that transfers of support from a supporting organization to its supported organizations were reported on 1 or more of at least 10 lines on the Form 990. 
While the amounts reported on these lines might include transfers of support, the Form 990 line data alone are generally not enough to determine how much of the amount reported, if any, supports charities. For example, the researchers found that organizations they sampled sometimes reported transfers of support to a supported organization on the line for rental expenses. However, only by examining Form 990-related documentation, which an Urban Institute researcher said required considerable effort, could they determine this result. In 2003, supporting organizations reported over $431 million on this Form 990 line, but without significant effort, one cannot determine how much, if any, of this amount consisted of transfers of support to supported organizations. Another challenge in using Form 990 data to determine financial characteristics arises when analyzing compensation paid to executives and employees of tax-exempt organizations, such as supporting organizations. In 1999 and 2003, supporting organizations reported over $894 million and over $1 billion, respectively, in total executive compensation. Private foundations reported almost $739 million in 1999 and about $812 million in 2003 in total executive compensation (see app. II for data tables). Organizations are required to report compensation for certain employees on the Form 990 and Schedule A. However, according to IRS managers, misreporting, some of it possibly unintentional, is not uncommon in such areas as deferred executive compensation, payments made to relatives, and compensation paid from related entities, such as a for-profit subsidiary of a tax-exempt organization paying the salary of an employee or board member of its parent tax-exempt organization. In addition, an IRS researcher had concerns that compensation could be overreported for tax-exempt organizations within a network, such as a health care network of hospitals.
In such networks, which commonly include supporting organizations, compensation for board members can be misreported on the Forms 990 when related organizations have common board members. IRS is currently working on an initiative to identify and stop abuses by public charities and private foundations that pay excessive compensation and benefits to their officers and other insiders. Beginning in late 2004, IRS contacted a broad spectrum of over 1,800 public charities and private foundations seeking information about their compensation practices and procedures. IRS also recently started a new phase of the initiative, involving an additional 250 contacts about loans to officers, directors, and key employees. The goals for the initiative are to learn how exempt organizations determine and manage compensation; gauge the existence and effectiveness of exempt organizations' controls over compensation issues; learn how exempt organizations report compensation on Forms 990 and 990-PF; address instances of questionable compensation practices, as well as compensation of specific individuals; and increase exempt organizations' awareness of compensation-related tax issues. The initial results of the compensation initiative will be included in a report that is expected to be completed in late August or September 2006. All examinations are expected to be completed by or during 2007. In addition to financial characteristics such as payout rate and executive compensation, organizational characteristics of supporting organizations are difficult to determine from the Form 990. For example, the Form 990 does not collect the EINs of the organizations a supporting organization supports, which, according to IRS officials, would facilitate IRS's ability to track the flow of donations. In addition, an IRS manager said that having supported organizations' EINs would facilitate IRS's ability to track how compensation is treated between supporting organizations and supported organizations.
IRS emphasized that any form changes must be balanced against the increased burden on taxpayers of supplying additional information. Other organizational characteristics for which IRS collects limited data on Form 990 include relationships with foreign entities, noncash contributions, loan recipients, and donor information. We were unable to closely evaluate these characteristics because IRS had limited data and information to provide and because of time constraints. Although the costs and burdens of collecting additional data to determine these organizational characteristics and protecting taxpayer privacy are legitimate concerns, IRS has acknowledged the need for greater transparency and better data to track the flow of funds between donors and charities. For example, IRS does not have TINs of loan recipients to track the flow of funds. IRS has begun to take steps to help address the lack of information reported on donor-advised funds and supporting organizations. For example, IRS has revised the 2005 Form 990 Schedule A to include a check box to indicate whether a supporting organization is Type I, II, or III. This information will be transcribed into IRS’s electronic databases beginning in 2007, which, according to IRS, would allow it to better focus its examination and educational resources on compliance issues particular to each type. Also, starting with the 2003 Form 990 Schedule A, organizations must indicate whether they maintain separate accounts for donors, such as donor-advised funds. In January 2006, IRS began transcribing this information, which is a first step towards identifying how many and which charities have donor-advised funds. However, these organizations are not required to separately report data on the donor-advised funds from the other activity reported on the Form 990, meaning that data on the funds are not easily identified. 
While IRS is considering revising the Form 990 to include more information about donor-advised funds, it has not determined what data it might collect or how or when it would revise the form. IRS is considering additional changes to the Form 990 that, pending management approval, would include reorganizing the form in stages. A pending proposal includes recommendations to create new sections or schedules on the Form 990 with questions on donor-advised funds and supporting organizations. Because the Form 1023 asks questions regarding donor-advised funds and supporting organizations, the proposal recommends aligning the Form 990 with the Form 1023 so that IRS can track a charity from its formation. If the recommendation is approved, IRS’s Form 990 Redesign Team plans to rewrite the Form 990 instructions and add a glossary consistent with the Form 1023, which, according to IRS, may provide better data. According to IRS staff and others we interviewed, these form revisions, along with increased use of electronic filing, could improve the quality of data available to IRS to better identify noncompliance through its research and compliance efforts, as well as to the public to improve the effectiveness of tax-exempt charitable organizations. IRS program managers report that some donor-advised fund and supporting organization cases highlight concerns about private benefit, inurement, and donor control. Some of these cases demonstrate clear noncompliance, allowing IRS to propose appropriate corrective actions. However, IRS is confronted with many cases that require detailed assessments of evidence, which makes addressing noncompliance challenging. Additionally, IRS contends with activities involving donor-advised funds and supporting organizations that do not violate laws or regulations, yet do not seem to benefit charities.
Entities or individuals, such as financial advisers or attorneys, sometimes facilitate abusive schemes, introducing additional complexities to IRS’s examination process. Private benefit, inurement, and donor control are common concerns for IRS in examinations of potential noncompliance involving donor-advised funds and supporting organizations. IRS is unable to estimate the prevalence of this noncompliance, or of noncompliance in general. Thus, the examples presented are intended to illustrate known cases of private benefit and donor control and do not represent the entire range of noncompliance. Private benefit occurs when a 501(c)(3) organization is not operated or organized exclusively for exempt purposes because it serves a private rather than a public interest. Because they are subject to section 501(c)(3), both donor-advised funds and supporting organizations must avoid private benefit that is more than incidental to the charitable purpose being served; if private benefit is substantial enough, it may jeopardize an organization’s tax-exempt status. If the organization’s assets or income are transferred to an individual who is a charity insider, the benefit is called “inurement.” Private benefit and inurement schemes involving donor-advised funds and supporting organizations may benefit various individuals and may vary in complexity. IRS has encountered multiple cases of private benefit in which donors to donor-advised funds are able to regain some or all of their contribution. For example, IRS has concerns about one fund offering a “loan program” under which donors were able to take back their donation with no obligation for repayment. IRS also sees inurement cases, in which individuals other than the donor receive private benefit. For example, IRS is examining one exempt organization and donor-advised fund operated by a for-profit company. The company offered the fund as a charitable giving vehicle for its employees.
The exempt organization lacked an independent board, and its president, who also served as president of the for-profit company, received potentially high commissions and fees from contracts with the donor-advised fund. While donor-advised fund schemes often involve private benefit, schemes involving supporting organizations more often result in inurement and are typically more complex, according to IRS management. Schemes can involve direct payment of benefits to donors or, more indirectly, payments routed through offshore entities. One direct payment scheme, designed to benefit a donor’s children, funneled school tuition payments through a supporting organization purportedly supporting the children’s school. More complex schemes enable the donor to regain his or her donation after it is routed offshore. One typical scheme begins with a donation to a supporting organization, which is then transferred to an account in an offshore investment firm controlled by a financial planner, accountant, or other knowledgeable insider working with the donor. The money is then transferred to a domestic mortgage lender, also controlled by the insider, giving the donor access to the money for use toward an interest-only mortgage. As a result, the donor benefits from a tax deduction on his or her contribution while still retaining access to the donation. To justify the scheme, the supporting organization claims that earnings from its investment in the offshore firm will benefit charity. Donor control arises when a donor holds authority that exceeds what is permissible for donor-advised funds or supporting organizations. Illegal control can occur when a donor or disqualified person has control over the charity’s assets, operations, or governance, or over the organizations receiving support. It is possible for donor control to occur without private benefit.
A donor may control a function or operation of a supporting organization or donor-advised fund without receiving benefits, according to IRS management. Donor control involving donor-advised funds and supporting organizations manifests in different ways. Donor control of a donor-advised fund occurs when the donor oversteps his or her advisory role and retains ultimate authority over the distribution of fund assets. One IRS manager told us that, although it is more common in supporting organization cases, a donor-advised fund donor may also achieve control by controlling the exempt organization receiving the benefits of the donation. For example, IRS is pursuing a case where a donor-advised fund appears to be making distributions to a public charity that is controlled by the donor-advised fund’s donor. If the donor-advised fund did not exist, the public charity recipient would likely be classified as a private foundation. IRS is investigating whether the charity has other support sources. For supporting organizations, control of the organization’s board or the donor’s ability to designate charitable recipients can constitute donor control. Board control can occur directly when disqualified persons control more than 50 percent of board voting power or are granted veto power. Alternatively, board control can occur indirectly through a disqualified person influencing board members who are not disqualified persons, according to IRS managers. Retaining access to assets can also signify direct or indirect control of a supporting organization. In one case, IRS has questioned whether a donor controlled the operations and investments of the supporting organization that the donor founded, although the donor did not receive private benefit. Donor control can also occur indirectly through control of an asset donated to the supporting organization.
For example, in one case, IRS is concerned that a donor is continuing to collect and retain rent from building tenants after the building was donated to a supporting organization. Although private benefit, inurement, and donor control are recurring themes in IRS’s caseload, other types of noncompliance involving donor-advised funds and supporting organizations can occur. Specifically, a supporting organization could fail to maintain a relationship with its supported organization(s). A representative from the tax-exempt community told us of situations where charities listed as supported organizations were unaware of a purported relationship with a supporting organization. The Panel on the Nonprofit Sector also recognized this problem in its June 2005 report. Similarly, IRS managers told us that a major issue in supporting organization examinations is whether the organization maintains a sufficient relationship with its supported organization. The Form 990 only requires that supporting organizations report the names of their supported organizations; it does not require them to report the EIN of the supported organization. IRS managers told us that not knowing the EIN makes it harder for IRS staff to track the relationship between the two organizations. IRS uses resources from a variety of units to identify and examine noncompliance involving donor-advised funds and supporting organizations. Toward these ends, IRS created two teams, one on donor-advised funds and one on supporting organizations. As of June 2006, the donor-advised fund team had opened 27 examinations but had not yet closed any, according to an IRS manager. As of June 2006, the supporting organization team had opened 102 examinations and closed 20 of them, 18 of which were found to be noncompliant, according to IRS.
IRS managers also told us that other programs, including the Tax Examination Program and the Excessive Compensation Program, have also examined and closed supporting organization cases and are currently examining 655 supporting organizations. Regardless of the type of noncompliance found, IRS can propose corrective actions when the evidence shows that a law or regulation has been unmistakably violated. IRS is developing criteria for proposing corrective actions for donor-advised funds as the related team finishes its examinations; many of the examinations are in the early stages. For supporting organization cases, IRS officials said, in general, they will propose a change to private foundation status for issues of donor control. Intermediate sanctions or revocation of tax-exempt status are typically proposed for inurement cases, according to IRS. Criminal charges may be brought against individuals found to be engaging in criminal behavior while participating in abusive schemes and may occur in conjunction with corrective actions resulting from examinations. In cases where the donor-advised fund or supporting organization is believed to be beneficial overall but needs correction in order to be fully compliant, IRS managers told us they may also initiate a closing agreement, which provides a set of requirements intended to correct flaws in the donor-advised fund or supporting organization structure or operations. For various reasons, IRS does not know the overall rate of noncompliance or the prevalence of different forms of noncompliance involving donor-advised funds and supporting organizations. First, IRS did not use a random sample to identify cases for examination. Instead, it used methods that led to examining the most egregious noncompliance schemes. For example, the manager for the donor-advised fund team told us it selected cases for examination based on large asset size or other unusual characteristics, such as high compensation or high fees.
Supporting organization cases were selected based on referrals from other IRS units, according to the team’s manager. Second, IRS has no established population of donor-advised funds from which to estimate a noncompliance rate. An IRS manager said IRS is unable to identify the population because exempt organizations have not been required to report their use of donor-advised funds, which prevents IRS from employing statistical sampling methodology to estimate donor-advised fund noncompliance. Third, examinations by IRS’s teams are relatively new; examinations began in 2005 for donor-advised funds and in 2004 for supporting organizations, according to IRS managers. Not all cases involving donor-advised funds and supporting organizations are clear; IRS faces challenges in identifying and examining potential noncompliance. In part, these challenges are due to uncertainty about whether the evidence unequivocally points to noncompliance and to the difficulty in exhaustively collecting evidence on the facts and circumstances of a case. To evaluate facts and circumstances, IRS managers said that agents may evaluate minutes of meetings, correspondence among trustees, contracts or agreements on loans or rent, news articles, or the organization’s trust document. Although exempt organizations must maintain documentation that they operate exclusively for exempt purposes, the existence and quality of these documents may differ among organizations, according to IRS managers. Therefore, IRS may need to collect evidence that is time- or resource-intensive to uncover. Evidence that does not readily exist or that is difficult to uncover, combined with the practical limits of the examination process, makes some noncompliance nearly impossible to detect, as the following examples illustrate. In determining influence on or control of a board, regulations define permissible relationships between disqualified persons and supporting organization boards.
Despite regulatory guidance, IRS is unable to identify all noncompliant situations because it cannot always identify influence on board members by disqualified persons, especially when attempting to identify a disqualified person’s indirect influence. Nomination of a majority of board members by a disqualified person may signify this influence, but IRS cannot consistently track the origination of a board nomination. Only in some cases are trust documents and meeting minutes available that may document the nomination process, according to IRS. Additionally, IRS may have difficulty identifying a disqualified person’s indirect influence on a board when this influence may occur in private conversations. It may also be challenging to find evidence that ensures that donor-advised funds are operating on “donor advice” rather than “donor control.” To establish that donors are not exercising undue control, IRS may examine the process by which a donor makes a funding recommendation, according to the manager of IRS’s donor-advised fund team. Specifically, IRS managers said this examination could include verifying that the fund has an independent board, reviewing the process by which the fund operator investigates donor recommendations, or obtaining documents that show that a donor’s recommendations are not all accepted. However, similar to the challenges of identifying board control, IRS may not be able to detect subtle coercion occurring in payout decisions. Detecting control of assets may also be difficult. For example, a donor may contribute a large portion of interest in a business partnership to a supporting organization. The donor, serving as the business’s general partner, retains some ownership of the partnership and has a management responsibility or controls voting stock. According to an IRS manager, unless the supporting organization has other assets, this situation would likely allow the donor to have effective control over the assets of the supporting organization.
In some situations, the business may claim that the general partner lacks controlling power, in which case IRS managers said examiners must rely on available evidence, such as partnership agreements, to determine the donor/partner’s control over the business. Once again, evidence of more subtle control may not be available or practical for IRS to pursue. Not all cases involving donor-advised funds and supporting organizations are clear cases of private benefit, inurement, or donor control, or involve the challenges of gathering evidence. IRS managers said they encounter scenarios where no statute or regulation was violated but where activities involving donor-advised funds or supporting organizations do not seem to benefit charity. In these situations, noncompliance cannot be alleged, but IRS may still question an organization’s or individual’s charitable purposes. A general lack of data, as well as a lack of legal definitions and regulations for donor-advised funds, contributes to these uncertainties, which have prompted both IRS and Congress to consider different solutions for reform, as the following examples illustrate. One IRS manager told us that IRS is uncertain about whether donor-advised funds with low payout rates are supporting charitable purposes. No laws or regulations require annual minimum payouts to charities from donor-advised funds, but according to IRS management, idle assets are unlikely to result in benefits. Conversely, a donor-advised fund may pay out little in order to build an endowment. If a supporting organization has a low payout rate, however, IRS said this can sometimes signify that it is not fulfilling its support requirement. Legislation has been introduced in Congress to impose a minimum payout on donor-advised funds and supporting organizations. As of early July 2006, legislation on this issue had not passed. IRS managers told us that examiners have discovered loans made from a supporting organization to a donor or insider.
Loans made by public charities to officers, directors, donors, and others are legal, provided that they are repaid and not made at terms lower than the market rate. According to IRS, charities could justify these loans as an investment. However, these loans may carry risk or introduce a conflict of interest. For example, if a borrower has some form of control over the organization, such as that of a board member or executive, it is less likely that the organization will take legal action if the loan is not repaid. Also, loans may prevent assets from being paid out for charitable purposes. Furthermore, if a loan is made as part of an employee compensation package, in some cases it may be classified as an excess benefit under IRC section 4958, according to IRS management. Additionally, these loans may signify control by disqualified persons. Even if a loan’s interest rate is reasonable, or the borrower is not an employee or in control of the organization, the terms of the loan may give a borrower other benefits, thus making a case that the organization serves private rather than public purposes. In recognition of such potential improprieties, 19 states have banned such loans, according to The Chronicle of Philanthropy. As part of a broader study of executive compensation at public charities, IRS is examining loans made to insiders but is not specifically focusing on supporting organizations. In addition to examining donor-advised funds, supporting organizations, and donors, IRS investigates the promoters—creators and facilitators of abusive schemes. Some abusive schemes are organized or participated in by professionals or entities who work in concert with the donor. Identifying and examining the roles of these professionals or entities can be difficult and therefore may exacerbate the challenges in examining donor-advised fund and supporting organization cases.
A promoter is an individual or entity that organizes, or assists in the organization of, a partnership, trust, investment plan, or any other arrangement that is to be sold to a third party and that is designed to be used, or is actually used, to obtain illegal tax benefits. Accountants, financial planners, attorneys, community foundations, and tax preparers could serve as promoters, and their involvement is not limited to schemes involving exempt organizations. Cases involving promoters address both the material used to promote noncompliance, which must adhere to tax law, and the actual activities implementing a scheme. Because promoters may be committing fraud, they could face criminal charges. See appendix IV for a discussion of materials and methods for publicizing donor-advised funds and supporting organizations that are not intended to lead to abusive schemes. According to IRS managers, some schemes, particularly those benefiting high-income donors, originate with a financial planner, accountant, or lawyer. Other promoters may play a role in facilitating schemes, such as the mortgage inurement scheme previously described in this report. According to the manager of IRS’s donor-advised fund team, promoters are typically more involved in schemes involving supporting organizations than donor-advised funds due to the complexity of supporting organization schemes. For some cases, IRS is able to identify the promoter, noncompliant material, and transactions that promote noncompliance. For example, material from a financial planner offered a hypothetical estate plan proposing that a supporting organization hold a wealthy donor’s personal assets, thus facilitating a reduction in estate taxes upon the donor’s death. The plan proposed transferring land owned by the donor to the supporting organization, which would then offer to sell the land to the donor’s heirs at about 10 percent of its fair market value.
Furthermore, the plan proposed that the supporting organization also lease the estate assets back to the donor’s business. If the plan were carried out, inurement, private benefit, excess benefit, and donor control would be significant legal concerns. However, according to IRS managers, identifying and investigating promoters is often challenging. IRS managers said they rely on referrals and Internet searches to find promoters. Although some promoters advertise on the Internet, they may sometimes only share details about the promotion in conversations with a donor. IRS’s donor-advised fund and supporting organization teams have investigated nine promoters involved in potentially abusive schemes, according to IRS managers. In addition to the work of the issue teams, IRS’s civil Lead Development Center is tasked with identifying promoters and coordinating promoter investigations. IRS managers told us that once IRS identifies potential promoters, examiners must seek information that is typically carefully hidden among complex transactions involving multiple entities. This requires that IRS carefully craft document requests and summonses, which can be a lengthy process. Furthermore, once IRS refines its examination process to target certain schemes, promoters quickly alter their approaches. Finally, like some of the cases described earlier in this section, some marketing material may not violate a law or regulation but may nonetheless mislead donors by providing incomplete information, for example, on the limits of donor-advised funds and supporting organizations compared with private foundations.
We found examples of Web sites that describe a donor-advised fund or supporting organization as a giving option with all the benefits and advantages of a private foundation, which may mislead potential donors into believing they can retain control over their donation. Donor-advised funds, supporting organizations, and private foundations are vehicles for charitable giving. Donors can use these approaches for long-term giving or to accumulate assets to address some larger need. They also may create donor-advised funds or supporting organizations to avoid the costs, burdens, excise taxes, and restrictions associated with private foundations. However, concerns have been expressed about the potential for abuses by those who create and operate donor-advised funds and supporting organizations, prompting legislative proposals to deter abuses. IRS has found examples of abuses in these funds and organizations involving those who do not give up control of their donations and who benefit privately at the expense of the charitable interest. Although IRS has efforts under way to focus on such abuses, IRS examiners lack sufficient data, which complicates efforts to identify and address the noncompliance. Congress is considering proposals to require donor-advised funds and supporting organizations to annually pay out a certain percentage of their assets to serve charities, which would roughly mirror the requirement for private foundations. However, no defined way exists to calculate a payout rate for these funds and organizations, and current Form 990 data do not allow for full or consistent analyses of the payout rate for donor-advised funds or supporting organizations. Guidance is needed on what types of support should be included in a payout rate so that the Form 990 collects the necessary data. If a payout rate requirement is not adopted, these Form 990 requirements would provide data to inform future congressional decisions about whether a requirement should be instituted.
If a payout rate is adopted, the data would help in tracking compliance and determining whether the requirement may need to be adjusted. Under current annual Form 990 reporting, however, collecting payout information would not be possible for donor-advised funds. Starting in tax year 2003, IRS has been able to identify Forms 990 that report donor-advised fund activity; however, IRS will not have data that separate the fund activity from other activity. Adding a requirement to separately report donor-advised fund activity on the Form 990 would allow IRS to check the payout rate as well as other fund activity that looks suspicious. IRS also has concerns with supporting organizations that do not support their supported organizations or that make loans to individuals or organizations. IRS would be better able to track the flow of funds to the charities to be supported and to loan recipients if it knew their TINs, which are generally Social Security numbers for individuals or EINs for organizations. Collecting the TINs of loan recipients raises concerns about the potential costs and burdens and the protection of the TINs from unauthorized use. IRS could address these concerns by only requiring TIN reporting for loans above a certain dollar threshold and by not making the information publicly available. If the Form 990 is changed to separately report data on donor-advised fund activity, IRS should consider extending this TIN reporting to donor-advised funds. Given the concerns about payout rates for both donor-advised funds and supporting organizations, Congress should consider directing IRS to revise the Form 990 to collect sufficient information so that a consistent payout rate can be calculated for both types of charitable-giving vehicles. This information could help inform decisions about whether to adopt a minimum payout requirement and whether any required rate should be adjusted.
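To illustrate the kind of calculation that consistent Form 990 data could support, the sketch below computes a simple payout rate: qualifying distributions divided by average assets for the year. The function name, line items, and dollar amounts are hypothetical; as noted above, what counts as qualifying support is exactly the question on which guidance would be needed, so this is only one plausible definition, not an established formula.

```python
# Hypothetical sketch of a payout-rate calculation from Form 990-style line data.
# The choice of which support items to count is an assumption, not settled guidance.

def payout_rate(grants_paid, other_support, assets_begin, assets_end):
    """Payout rate = qualifying distributions / average assets for the year."""
    qualifying = grants_paid + other_support          # which items count is the open question
    average_assets = (assets_begin + assets_end) / 2  # simple two-point average
    if average_assets <= 0:
        raise ValueError("average assets must be positive")
    return qualifying / average_assets

# Hypothetical fund: $40,000 in grants and $10,000 in other support, with
# assets of $900,000 at the start of the year and $1,100,000 at the end.
rate = payout_rate(40_000, 10_000, 900_000, 1_100_000)
print(f"payout rate: {rate:.1%}")  # 5.0% of average assets of $1,000,000
```

A consistent definition of the numerator is what makes rates comparable across organizations; without it, the same distributions could yield different reported rates depending on which Form 990 lines a filer uses.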
To help IRS in making these revisions, Congress should direct IRS about the types of support that should be included, as it has for private foundations. In addition, so that IRS can modify the Form 990 to require TINs of loan recipients from supporting organizations, Congress should also consider providing IRS authority to protect that information from public disclosure. To better understand the characteristics of donor-advised funds and supporting organizations and to better identify possible noncompliance, the Commissioner of Internal Revenue should, as part of the Form 990 revision process, (1) require more comprehensive reporting of donor-advised fund activity, (2) require supporting organizations to report their supported organizations’ EINs, and (3) require that the TINs for recipients of large loans be reported, if IRS is granted authority to protect the TINs from public disclosure. The Commissioner of Internal Revenue provided written comments on a draft of this report in a July 19, 2006, letter, which is reprinted in appendix V. IRS said our recommendations would help it deter abuse within tax-exempt and government entities and the misuse of such entities by third parties. IRS agreed with our two recommendations regarding requiring more comprehensive reporting of donor-advised fund activity and requiring supporting organizations to report their supported organizations’ EINs on the Form 990. IRS said it will consider these form changes as part of the Form 990 revision process, but the timing of these revisions will depend on available resources. IRS also said that reporting supported organizations’ EINs would potentially help with early identification of abuses involving promoters and donors getting back their donations in the form of a purported loan that may never be repaid.
Regarding our third recommendation, which had been to require that the TINs for large-loan recipients be reported on the Form 990, IRS agreed that greater transparency and better tracking of loans are needed. However, IRS did not believe that it had the authority under current law to protect the TINs of loan recipients from public disclosure if the TINs were reported on the Form 990. As a result, we have added a matter for congressional consideration to provide IRS the authority to protect loan recipient TINs on the Form 990 from public disclosure. We also revised the recommendation so that, if provided the authority to protect the information from public disclosure, IRS should revise the Form 990 to collect loan recipient TINs. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Ranking Minority Member, the Senate Committee on Finance; the Secretary of the Treasury; the Commissioner of Internal Revenue; and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or brostekm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. Over the years, Congress has imposed various excise taxes that affect tax-exempt entities, particularly private foundations under section 501(c)(3). Public charities differ in several ways from private foundations. Public charities have broad public support and tend to provide charitable services directly to beneficiaries.
Private foundations are often tightly controlled, receive a significant portion of their funds from a small number of donors, and tend to make grants directly to other organizations rather than provide charitable services directly. Since these differences create the potential for self-dealing or abuse by a small group, private foundations are subject to anti-abuse rules not applicable to public charities. In addition, both public charities and private foundations are generally prohibited from engaging in certain types of transactions. Excise taxes are to be levied on public charities and private foundations, as well as a few other types of tax-exempt entities, that violate the rules. Details on these rules and excise taxes follow. Section 4955 was added by the Revenue Act of 1987, P.L. 100-203. According to the House Report for the Act, the committee believed that the excise tax applicable to private foundations for making prohibited political expenditures (section 4945) should also apply to public charities. Section 4955 imposes an initial 10 percent excise tax on each political expenditure of a section 501(c)(3) organization. An additional 2.5 percent excise tax is imposed on the organization’s manager if the manager knew that it was a political expenditure. Political expenditures include any amounts paid or incurred by the organization in any participation or intervention in any political campaign on behalf of any candidate for public office. If an initial tax has been imposed regarding a political expenditure and that expenditure is not corrected, an additional tax equal to 100 percent of the amount is to be imposed on the organization. An additional tax equal to 50 percent of the amount of the expenditure is to be imposed on the organization’s manager if that manager refuses to agree to part or all of the correction. Section 4958 was added in 1996 by the Taxpayer Bill of Rights 2, P.L. 104-168.
According to the related House Report, this excise tax was added to ensure that the advantages of tax-exempt status benefit the community and not private individuals. The act provided for this intermediate sanction (i.e., something short of a loss of tax-exemption) to be imposed when nonprofit organizations engage in transactions with certain insiders that result in private inurement. Section 4958 imposes an initial tax of 25 percent on each excess benefit transaction entered into between a disqualified person and tax-exempt organizations under sections 501(c)(3) and (4). The initial tax is to be paid by this disqualified person, including any person who at any time during the 5-year period ending on the date of the transaction was in a position to exercise substantial influence over the organization, a member of this person’s family, and a 35 percent controlled entity. Such an entity exists when a disqualified person owns more than 35 percent of the voting power of a corporation, more than 35 percent of the profit interest of a partnership, or more than 35 percent of the beneficial interest of a trust or estate. If an initial tax is imposed on the disqualified persons, an additional tax of 10 percent is to be imposed on the organization’s manager if that manager participated knowing that it was an excess benefit transaction. If the excess benefit transaction is not corrected within the taxable period, a tax equal to 200 percent of the excess benefit transaction will be imposed on the disqualified person. Private foundations are not subject to this excise tax. Section 4940 was added by the Tax Reform Act of 1969, P.L. 91-172. The related Senate Report described the excise tax as an “audit fee tax” that was believed to be necessary to cover IRS’s costs for increased supervision over private foundations under the act. Section 4940 imposes a 2 percent excise tax on the net investment income of tax-exempt private foundations.
Net investment income includes income from interest, dividends, and net capital gains that is reduced by the expenses incurred to earn it. This tax is 1 percent if a private foundation meets certain distribution requirements. Private foundations that meet the requirements to be an “exempt operating foundation” are not subject to this excise tax. Among these requirements are stipulations that the foundation be publicly supported for at least 10 years and that it have a governing body that is broadly representative of the general public. Private foundations that are not exempt from taxation are subject to this excise tax and unrelated business income tax. Because a tax-exempt entity cannot operate to confer a benefit on private parties, section 4941 was enacted by the Tax Reform Act of 1969. According to the Senate Report, generally prohibiting self-dealing transactions would minimize the need to apply the subjective arm’s-length standard that was used for loans, payments of compensation, and preferential availability of services under the 1950 amendments. Section 4941 imposes a 5 percent excise tax on acts of self-dealing between a private foundation and disqualified persons. This tax is to be paid by the disqualified person who participated in the self-dealing. An additional tax equal to 200 percent of the amount involved is to be imposed if the self-dealing is not corrected during the taxable period. A separate tax equal to 2.5 percent of the amount involved is to be imposed on the foundation’s manager if that manager knowingly participated in the act of self-dealing. If this additional tax has been imposed on the foundation manager and that manager refuses to agree to part or all of the correction, an additional tax equal to 50 percent of the amount is to be imposed. Acts of self-dealing include sales, exchanges, or leases of property; lending of money or other extensions of credit; and payment of compensation.
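The tiered structure of the section 4941 taxes just described can be illustrated with simple arithmetic. The following Python sketch uses our own illustrative function and parameter names, and it omits statutory details such as how the "amount involved" is measured:

```python
def self_dealing_taxes(amount_involved, corrected=True,
                       manager_knew=False, manager_refused_correction=False):
    """Sketch of the section 4941 excise tax tiers on self-dealing.

    Returns (tax on the disqualified person, tax on the foundation manager).
    Illustrative only; not a legal computation.
    """
    # Initial tax: 5 percent of the amount involved, paid by the
    # disqualified person who participated in the self-dealing.
    person_tax = 0.05 * amount_involved
    # Additional tax: 200 percent if the act is not corrected in time.
    if not corrected:
        person_tax += 2.00 * amount_involved
    manager_tax = 0.0
    # Separate tax: 2.5 percent on a manager who knowingly participated.
    if manager_knew:
        manager_tax += 0.025 * amount_involved
        # A further 50 percent if that manager refuses to agree
        # to part or all of the correction.
        if manager_refused_correction:
            manager_tax += 0.50 * amount_involved
    return person_tax, manager_tax
```

Under these assumptions, a $100,000 uncorrected act involving a knowing manager who refuses correction would yield $205,000 in taxes on the disqualified person and $52,500 on the manager.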
Disqualified persons include substantial contributors to the foundation, foundation managers, an owner of more than 20 percent of a business enterprise that is a substantial contributor, and certain government officials. Section 4942 was enacted by the Tax Reform Act of 1969. Prior to it, a private foundation could lose its exemption if it failed to make distributions towards its charitable purposes instead of just accumulating income. According to the Senate Report, the committee believed that loss of exempt status as the only sanction was often ineffective or harsh, and that substantial improvement could be achieved by providing a graduation of sanctions if income is not distributed. Section 4942 imposes a 15 percent excise tax on the undistributed income of a private foundation for any taxable year in which the required amount has not been distributed before the first day of the next taxable year. If an initial tax has been imposed under section 4942 and the income remains undistributed at the end of the taxable period, a tax equal to 100 percent of the remaining undistributed amount is to be imposed. This excise tax does not apply to private operating foundations that meet distribution requirements or to the extent that the failure to distribute is due solely to an incorrect valuation of assets as long as other requirements are met. Section 4943 was enacted by the Tax Reform Act of 1969. According to its Senate Report, the use of foundations to maintain control of a business appeared to be increasing, and some who wished to use a foundation’s stock holdings to control a business were relatively unconcerned about producing income for charitable purposes. Where the charitable ownership predominated, the business could unfairly compete with businesses whose owners were required to pay taxes on their business income. The committee concluded that a limit on the extent to which a private foundation may control a business was needed. 
Section 4943 imposes a 5 percent excise tax on certain excess business holdings of a private foundation. Permitted holdings generally include up to 20 percent of the voting stock of an incorporated business enterprise (reduced by the percentage of the voting stock owned by all disqualified persons) and similar holdings in partnerships and other unincorporated enterprises (except sole proprietorships). If the excise tax has been imposed, foundations that fail to make the required divestiture of excess holdings above the permitted amounts are subject to an additional tax equal to 200 percent of the excess holdings. In certain cases, foundations are allowed a 5-year period to dispose of the excess holdings and may receive an additional 5-year extension. Section 4944 was enacted by the Tax Reform Act of 1969. Under prior law, a private foundation could lose its exemption if it invested in a manner that jeopardized its exempt purpose. In the Senate Report, the committee concluded that limited sanctions were preferable to the loss of exemption. Section 4944 imposes an initial 5 percent excise tax on the amount involved if a private foundation invests in a manner that jeopardizes its exempt purpose (e.g., investing with the purpose of income production or property appreciation). If this tax is imposed on the foundation, a separate 5 percent excise tax is to be imposed on the foundation manager if that manager knew that making the investment would jeopardize the foundation’s exempt purpose. If an initial tax is imposed, an additional tax equal to 25 percent of the amount of the investment is to be imposed on the foundation if the investment is not withdrawn within the taxable period. An additional tax equal to 5 percent of the amount of the investment is to be imposed on the foundation manager if the investment is not withdrawn. Section 4945 was enacted by the Tax Reform Act of 1969. 
Under prior law, the only sanction against prohibited political activity by a foundation was loss of exemption. The Senate Committee Report noted that the standards for determining the permissible level of political activity were so vague as to encourage subjective application of the sanction. As a result, section 4945 was added to clarify the types of impermissible activities and provide more limited sanctions. Section 4945 imposes an initial 10 percent excise tax on each taxable expenditure made by the foundation. An additional 2.5 percent excise tax is to be imposed on the foundation manager if that manager knowingly participated in the taxable expenditure. Taxable expenditures include amounts paid to carry on propaganda or otherwise influence legislation or the outcome of a public election, or to directly or indirectly carry on a voter registration drive. If the expenditure is not corrected within the taxable period, an additional tax equal to 100 percent of the amount of the expenditure is to be imposed on the foundation and an additional tax equal to 50 percent of the amount of the expenditure is to be imposed on the foundation manager. The following tables summarize data reported on the annual Forms 990 and 990-PF filed by tax-exempt charitable entities under section 501(c)(3) of the Internal Revenue Code. The tables cover number of returns filed and the reported totals for the following characteristics: assets, revenues, expenses, contributions received, noncash contributions received, grants paid, and executive compensation. The data are categorized by supporting organizations, private foundations, and all other 501(c)(3) charities. IRS’s Publication 561 provides guidance to taxpayers on determining the value of property donated to qualified organizations. It defines “fair market value” (FMV) as the price a willing, knowledgeable buyer would pay a willing, knowledgeable seller when neither has to buy or sell.
Future events that may affect the property cannot be included in FMV unless they are known at the time of the donation. In addition, past events, such as rapid growth of value over the short term, may have to be balanced out over a longer time frame for a realistic projection of value. While there is no single method to determine FMV, factors to consider include the cost or selling price, sales of comparable properties, replacement costs, and opinions of experts. Although there are many categories of noncash contributions including vehicles, used clothing, and works of art that charities may receive, donor-advised funds and supporting organizations typically receive larger noncash gifts, according to IRS. For stocks and bonds, the fair market value is the average price between the highest and lowest trading price on the date of donation. This method is only to be used for items for which an active market exists. If the item is traded on multiple exchanges, then the principal exchange must be used. In addition, large blocks of stock may require an expert to assist in the appraisal. For closely-held securities, determining FMV would include considering the company’s net worth, prospective earning power, dividend-paying capacity, and other factors such as the economic outlook in the particular industry and the company’s relative position within it, and the value of securities of companies engaged in the same or similar business. For real estate, a detailed appraisal by a qualified appraiser is required. Certain items must be included such as complete description, legal description, lot and block number, physical features, condition, dimension, zoning, and potential uses. Three valuation methods may be used—comparable sales, capitalization of income, and replacement cost new or reproduction cost minus observed depreciation (this method used alone does not determine FMV but rather tends to set the upper limit of value).
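The valuation rule for actively traded stock described above amounts to averaging two quotes. A minimal sketch follows; the function name is our own, and the sketch ignores special cases such as large blocks that may require expert appraisal:

```python
def traded_stock_fmv(high, low, shares=1):
    """Fair market value of actively traded stock for donation purposes:
    the average of the highest and lowest trading prices on the date of
    donation, multiplied by the number of shares donated."""
    return (high + low) / 2 * shares
```

For example, 100 donated shares that traded between $48.00 and $52.00 on the donation date would be valued at $5,000 under this rule.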
IRC section 170, particularly section 170(f)(8), provides the basis for reporting noncash charitable contributions, such as using a qualified appraiser. The American Jobs Creation Act of 2004 also contains provisions regarding noncash contributions, including requiring the donor to attach a qualified appraisal to the tax return if the contribution is over $500,000. Taxpayers are required to file IRS Form 8283 (Noncash Charitable Contributions) if the charitable tax deduction claimed is greater than $500. Form 8283 should be filed for the tax year that the deduction is claimed. Different sections of the form are to be completed based on type of property donated and whether the amount claimed is less than or greater than $5,000. Generally, appraisals by a qualified appraiser are required for donations of more than $5,000. Charitable organizations receiving donated property must file Form 8282 to report information to IRS about disposition of certain charitable deduction property made within 2 years after the donor contributed the property. According to an IRS manager, closely-held stock is a growing concern and challenge to IRS, since it can involve a broad base of taxpayers. He added that artwork, while well-publicized in terms of valuation issues, is less of a concern since the dollar amounts involved are small compared to other types of noncash contributions.
In addition, the IRS manager identified the following challenges to addressing noncompliance, gathered from about 100 examination cases: donors are sometimes vague when describing the contribution on Form 8283, impeding IRS’s understanding and ability to address any problems; donors can submit Form 8283 upon examination, hindering early detection of problems; corporate donors of patents can structure the contribution (e.g., pay maintenance fees on the patent) so that the donee is not required to file a Form 8282 upon disposition of the contribution; no requirement exists that noncash contribution amounts reported on a donor’s tax return and a charity’s Form 990 must match; donors take improper deductions without adverse impact to the donor; and multiple appraisals of contribution value are not helpful because appraisals are very subjective. To address some of these concerns, IRS has several initiatives looking at specific types of noncash contributions, such as vehicle donations and art valuations. Additionally, IRS has a program that compares valuations of noncash contributions claimed by taxpayers (on Form 8283) with the price obtained by recipient charities when they resell the property. IRS has used data from this program to complete a study of large noncash contributions. Earlier in this report, we discussed some of the methods and materials used to publicize donor-advised funds and supporting organizations that may lead to noncompliance with tax laws. The following is a discussion of donor motivations and materials and methods that are not intended to lead to abusive schemes. To obtain this information, we spoke with 11 community foundation managers, 6 financial professionals, and 18 managers at IRS. The examples we discuss come from materials that we were referred to or located online based on our interviews, and do not necessarily represent all materials and methods used to market donor-advised funds and supporting organizations.
Because donor-advised funds and supporting organizations are just two among many charitable giving options, potential donors must select an option that best suits their goals and donation plan. Factors that may influence a donor’s decision include: types of causes they wish to support, the size and type of donation they wish to give, and their desired involvement level in directing the use of their donation. For example, some donors, who desire to donate to a specific community or to have in-depth information on charities receiving their funds, might find that a donor-advised fund administered by a community foundation is an appealing option. Community foundations, which typically have a local focus, may do particularly well at performing due diligence on charities receiving their funds, according to one estate planner. Due diligence may include identifying organizations listed in IRS Publication 78, or interacting with exempt organizations that are potential recipients of funds, according to community foundation managers. To evaluate giving options in relation to their goals, donors may seek information from accountants, financial planners, lawyers, community foundations, the Internet, and tax-exempt organizations, among others. Some exempt organizations’ efforts to market donation options tend to be limited, according to a study by a nonprofit philanthropic research and development organization. This makes personal and business relationships important ways for donors to learn about donor-advised funds and supporting organizations, according to community foundation managers. Because many donor-advised funds are administered by community foundations or are housed in charities affiliated with commercial investment firms, such as Vanguard and Fidelity, these relationships may be particularly important sources for introducing donors to donor-advised funds, according to a community foundation manager interviewed by The Chronicle of Philanthropy.
According to several community foundation managers, many donors to community foundation donor-advised funds are referred from professional advisers. Recognizing the importance of these relationships, some community foundations have launched specific outreach efforts aimed at financial advisers and other professionals who could refer donor-advised fund clients. In addition to discussions with professionals, donors may encounter or be presented with a variety of material explaining charitable giving options. Material may contain details of giving options in relation to both tax incentives to the donor and charitable benefits for the exempt organization. Some firms advertise services for clients in magazines or national publications, according to IRS managers and an estate lawyer, while others depend on the Internet. Descriptions of professional services can include outlines of charitable giving options, some of which attempt to explain giving options based on the legal, practical, and charitable characteristics of each option. For example, some community foundations, philanthropy organizations, and investment firms provide tables or descriptions comparing various combinations of donor-advised funds, supporting organizations, private foundations, and other donation options. These tables describe and compare levels of donor involvement, tax status, deductions by asset type, start-up costs, and administrative requirements. Other material outlines the steps and requirements necessary to establish a donor-advised fund or supporting organization. In addition to the contact named above, Tom Short, Assistant Director; Mark Bondo; Marta Chaffee; Elizabeth Fan; Evan Gilman; Nancy Hess; Shirley Jones; Donna Miller; John Mingus; Coltrane Stansbury; Paul Thacker; and Lindsay Welter made key contributions to this report.

Insider: An individual such as an officer, board member, or other person able to exercise substantial influence over a tax-exempt organization.
Donors to donor-advised funds are rarely considered to be insiders, while donors to supporting organizations can sometimes be insiders if they also serve on the supported organization’s board.

Community foundation: An organization, usually a nonoperating charity, providing charitable support through grants to local or regional communities. Typically a community foundation will aggregate contributions from local residents, build endowments, and distribute grants to communities.

Disqualified person: An individual, defined in IRC section 4946, who may have a significant conflict of interest with a charity due to financial, executive, or voting powers, such as those held by donors, officers, or directors. The definition applies to individuals involved with private foundations and supporting organizations, and has a limited application to public charities that are not supporting organizations.

Donor-advised fund: Charitable giving accounts that are held by a public charity. A donor contributes to an individual account within a charity’s donor-advised fund, and maintains an advisory role on distribution of the funds. No statutory or regulatory definition currently exists.

Donor control: Authority exerted by a donor that exceeds what is allowable for a donor-advised fund or supporting organization. Donor control includes direct or indirect power over decisions regarding an organization’s assets or operations.

Excess benefit transaction: A transaction, directly or indirectly, between a disqualified person and a tax-exempt organization that results in economic benefit to the disqualified person exceeding the value of service to the organization. Subject to excise taxation under IRC section 4958.

Excise tax: A tax imposed on an act, occupation, privilege, manufacture, sale, or consumption that is usually designed to influence taxpayer behavior.

Expenditure responsibility: A set of procedures used by private foundations to ensure responsible use of grants to charities.
The assessment may include a pre-grant inquiry on the recipient charity, establishment of commitments for the grant recipient, investment requirements, or agreements on actions if agreements are violated.

Intermediate sanctions: Excise taxes that provide a corrective remedy for excess benefit transactions. The excise taxes are paid by the disqualified person, as defined in IRC section 4958, who receives the excess benefit, or by a charity manager who knowingly participates in the transaction.

Inurement: The transfer or use of a charity’s assets or income for the benefit of a charity’s insiders. Inurement is a specific form of private benefit, and is prohibited for all 501(c)(3) organizations.

Form 1023: Application for Recognition of Exemption under IRC section 501(c)(3) that organizations must file in order to receive tax-exempt status.

Form 990: IRS information return that public charities are required to file annually unless the organization is a church or entity associated with a church, a certain type of governmental unit affiliate, or falls below certain gross receipts thresholds.

Form 990-PF: IRS information return that private foundations must file annually.

Noncash contribution: An asset other than cash donated to a tax-exempt organization, for example, stocks, bonds, vehicles, artwork, or real estate.

Payout: An organization’s expenditures to individuals or charities for certain operational or administrative functions. Private foundations must distribute about 5 percent of the average market value of their noncharitable-use assets, generally stocks or other investments that compose the foundation’s endowment; donor-advised funds and supporting organizations do not have to meet a minimum payout.

Private foundation: A 501(c)(3) organization, further defined in IRC section 509(a), that does not qualify as a public charity. Generally, private foundation rules and regulations are more complex and limiting than those for public charities.

Private benefit: The transfer or use of a charity’s assets or income, or the conferment of undue advantage, to private persons who are not necessarily charity insiders.
Some private benefit is permitted, but it must not be more than incidental to the charitable purpose being served. Private benefit is a broad term that includes inurement and applies to all 501(c)(3) organizations.

Public charity: A tax-exempt organization defined in IRC section 501(c)(3) that receives broad financial support or is a supporting organization. Public charities have fewer legal requirements than private foundations.

Revocation: A corrective action that removes a charity’s tax-exempt charter. Revocation is used for violations such as inurement, performing nonexempt activities, operating in a commercial manner, and operating for private use.

Charity: A tax-exempt organization operated for a charitable purpose. Purposes considered to be charitable include serving the poor and distressed; advancing religious, educational, or scientific endeavors; and protecting human or civil rights. All 501(c)(3) organizations are considered either public charities or private charities, known as private foundations. Contributions to charities are tax deductible under IRC section 170.

Self-dealing: Transactions, either direct or indirect, made between a private foundation and a disqualified person that involve (1) sale, exchange, or lease of property; (2) lending of money or other extensions of credit; (3) providing goods, services, or facilities; (4) paying compensation to or reimbursing expenses of a disqualified person; (5) transferring foundation income or assets to, or for the use or benefit of, a disqualified person; and (6) certain agreements to make payments of money or property to government officials.

Supported organization: A tax-exempt organization that receives funds or services from a supporting organization.

Supporting organization: A public charity that carries out its tax-exempt purpose by supporting one or more supported organizations and that is operated, supervised, or controlled by, or operated in connection with, the supported organization(s).

Donor-advised funds and supporting organizations are two charitable-giving options that have received attention from Congress and the Internal Revenue Service (IRS) for their potential to facilitate noncompliance with tax law.
As requested, GAO is providing information on donor-advised funds and supporting organizations related to (1) federal laws and regulations, compared to private foundations; (2) financial and organizational characteristics; and (3) types of noncompliance and promotion methods, and the challenges in identifying them. Donor-advised funds, supporting organizations, and private foundations are all tax-exempt charitable-giving vehicles. Donor-advised funds are separate accounts held by a public charity to receive contributions from donors who may recommend, but not control, charitable distributions from the account. Supporting organizations are public charities that are to carry out their tax-exempt purpose by supporting one or more tax-exempt organizations, usually other public charities. Compared with private foundations, donor-advised funds and supporting organizations give donors less control over how their donation will be used but provide donors more favorable tax deductions, lower administration costs, less IRS oversight, and fewer reporting requirements. Donor-advised funds hold billions of dollars in assets, and supporting organizations and private foundations hold hundreds of billions of dollars in assets. Public charities and private foundations must annually file an IRS Form 990 or Form 990-PF, respectively, to report their activities. However, donor-advised fund data are limited because organizations that maintain the funds are not required to separately report fund data from other financial data on Form 990. Although some supporting organization characteristics can be determined from Form 990 data, other characteristics, such as the rate at which payments are made to charities and details about the recipients of loans from the organization, cannot be reliably determined. Concerns have arisen about the "payout" rate to charities, and Congress is considering a minimum payout requirement, similar to the one for private foundations.
Further, supporting organizations are not required to report their supported organizations' identification numbers, making it more difficult to track the relationship between organizations. To collect additional data, IRS revised Form 990 for 2003 and 2005 and is considering further revisions, but no firm plans have been determined. According to IRS managers, examinations reveal that some donor-advised funds and supporting organizations are used in abusive schemes to unallowably benefit donors or related parties or give donors excess control of charitable assets and operations. In some cases, IRS is able to clearly determine noncompliance and assign appropriate corrective actions. However, in other cases, IRS faces challenges gathering evidence or addressing activities that do not seem to benefit charities, but do not violate any law or regulation, such as when a supporting organization loans money, at market rate, to a donor, director, or officer of the organization. Promoters, who are individuals or entities who facilitate abusive schemes, further complicate IRS's examination efforts.
The permanent provisions of the Brady Handgun Violence Prevention Act took effect on November 30, 1998. Under the Brady Act, before a federally licensed firearms dealer can transfer a firearm to an unlicensed individual, the dealer must request a background check through NICS to determine whether the prospective firearm transfer would violate federal or state law. The Brady Act’s implementing regulations also provide for conducting NICS checks on individuals seeking to obtain permits to possess, acquire, or carry firearms. According to the Department of Justice, under current law, inclusion on a terrorist watch list is not a stand-alone factor that would prohibit a person from receiving or possessing a firearm. Thus, if no other federal or state prohibitors exist, a known or suspected terrorist can legally purchase firearms. Approximately 8.5 million background checks are run through NICS each year, of which about one-half are processed by the FBI’s NICS Section and one-half by designated state and local criminal justice agencies. Under federal and state requirements, prospective firearms purchasers must provide information that is needed to initiate a NICS background check. For example, in order to receive a firearm from a licensed dealer, federal regulations require an individual to complete a Firearms Transaction Record (ATF Form 4473). Among other things, this form requires prospective purchasers to provide the following descriptive data: name, residence address, place of birth, height and weight, sex, date of birth, race, state of residence, country of citizenship, and alien registration number (for non-U.S. citizens). A Social Security number is optional.
Firearms dealers use the Form 4473 to record information about the firearms transaction, including the type of firearm(s) to be transferred (e.g., handgun or long gun); the response provided by the FBI’s NICS Section or state agency (e.g., proceed or denied); and information specifically identifying each firearm to be transferred (e.g., manufacturer, model, and serial number), which shows whether the transaction involves the purchase of multiple firearms. Individuals applying for state permits to possess, acquire, or carry firearms also are required to provide personal descriptive data on a state permit application. State laws vary in regard to the types of information required from permit applicants. The purpose of the NICS background check is to search for the existence of a prohibitor that would disqualify a potential buyer from purchasing a firearm pursuant to federal or state law. During the NICS check, descriptive data provided by an individual—such as name and date of birth—are used to search databases containing criminal history and other records supplied by federal, state, and local agencies. One of the databases searched by NICS is the FBI’s National Crime Information Center database, which contains criminal justice information (e.g., names of persons who have outstanding warrants) and also includes records on persons identified as known or suspected members of terrorist organizations. The terrorist-related records are maintained in the National Crime Information Center’s Violent Gang and Terrorist Organization File (VGTOF), which was designed to provide law enforcement personnel with the means to exchange information on members of violent gangs and terrorist organizations. Although NICS checks have included searches of terrorist records in VGTOF, NICS personnel at the FBI and state agencies historically did not receive notice when there were hits on these records. 
The FBI blocked the VGTOF responses (i.e., the responses were not provided to NICS personnel) under the reasoning that VGTOF records contain no information that would legally prohibit the transfer of a firearm under federal or state law. However, in November 2002, the FBI began an audit of NICS transactions where information indicated the individual was an alien, including transactions involving VGTOF records. In one instance involving a VGTOF record, the audit revealed that an FBI field agent had knowledge of prohibiting information not yet entered into the automated databases checked by NICS. As a result, in November 2003, the Department of Justice—citing Brady Act authorities—directed the FBI to revise NICS procedures to better ensure that subjects of VGTOF records who have disqualifying factors do not receive firearms in violation of applicable federal or state law. Specifically, the Brady Act authority cited allows the FBI up to 3 business days to check for information demonstrating that a prospective buyer is prohibited by law from possessing or receiving a firearm. Under revised procedures effective February 3, 2004, FBI and state personnel who handle NICS transactions began receiving notice of transactions that hit on VGTOF records. Also, under the revised procedures, all NICS transactions with potential or valid matches to VGTOF records are automatically delayed to give NICS personnel the chance to further research the transaction before a response (e.g., proceed or denied) is given to the initiator of the background check. For all potential or valid matches with terrorist records in VGTOF, NICS personnel are to begin their research by contacting the Terrorist Screening Center (TSC) to verify that the subject of the NICS transaction matches the subject of the VGTOF record, based on the name and other descriptors. 
For confirmed matches, NICS personnel are to determine whether federal counterterrorism officials (e.g., FBI field agents) are aware of any information that would prohibit the individual by law from receiving or possessing a firearm. For example, FBI field agents could have information not yet posted to databases checked by NICS showing the person is an alien illegally or unlawfully in the United States. If counterterrorism officials do not provide any prohibiting information, and there are no other records in the databases checked by NICS showing the individual to be prohibited, NICS personnel are to advise the initiator of the background check that the transaction may proceed. If the NICS background check is not completed within 3 business days, the gun dealer may transfer the firearm (unless state law provides otherwise). Designated state and local criminal justice agencies are responsible for conducting background checks in accordance with NICS policies and procedures. However, the Attorney General and the FBI ultimately are responsible for managing the overall NICS program. Thus, the FBI’s Criminal Justice Information Services Division conducts audits of the states’ compliance with federally established NICS regulations and guidelines. Also, the FBI is a lead U.S. law enforcement agency responsible for investigating terrorism-related matters. During presale screening of prospective firearms purchasers, NICS searches terrorist watch list records generated by numerous federal agencies, including components of the Departments of Justice, State, and Homeland Security. Applicable records are consolidated by TSC, which then makes them available for certain uses or purposes, such as inclusion in VGTOF—a database routinely searched during NICS background checks. Terrorist watch lists are maintained by numerous federal agencies. 
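The revised procedures described above amount to a simple decision flow: every transaction hitting a VGTOF record is delayed for research; a confirmed match is denied only if a federal or state prohibitor is identified; and if the check is not resolved within 3 business days, the dealer may transfer the firearm by default (absent a contrary state law). The sketch below illustrates that flow in Python. It is a simplified illustration only, and all function and parameter names are assumptions for this example, not actual NICS system interfaces; the business-day calculation ignores federal holidays.

```python
from datetime import date, timedelta

def default_proceed_date(check_initiated: date, business_days: int = 3) -> date:
    """Date after which the dealer may transfer the firearm if the check
    is still unresolved. Weekends are skipped; holidays are ignored in
    this simplified sketch."""
    d = check_initiated
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday (0) through Friday (4)
            remaining -= 1
    return d

def resolve_vgtof_transaction(match_confirmed_by_tsc: bool,
                              prohibitor_found: bool) -> str:
    """Simplified decision rule from the revised February 2004 procedures:
    a confirmed VGTOF match is denied only when research turns up a
    federal or state prohibitor."""
    if not match_confirmed_by_tsc:
        return "proceed"   # no valid match to a VGTOF record
    if prohibitor_found:
        return "denied"    # e.g., information known to an FBI field agent
    return "proceed"       # watch-list status alone is not a prohibitor
```

Note that `resolve_vgtof_transaction(True, False)` returns `"proceed"`, mirroring the report's point that inclusion on a terrorist watch list, by itself, does not legally bar a firearm transfer.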
These lists contain varying types of data, from biographical data—such as a person’s name and date of birth—to biometric data—such as fingerprints. Our April 2003 report identified 12 terrorist or criminal watch lists that were maintained by nine federal agencies. Table 1 shows the 12 watch lists and the current agencies that maintain them. At the time we issued our April 2003 report, federal agencies did not have a consistent and uniform approach to sharing terrorist watch list information. TSC was established in September 2003 to consolidate the government’s approach to terrorism screening and provide for the appropriate and lawful use of terrorism information. In addition to consolidating terrorist watch list records, TSC serves as a single point of contact for law enforcement authorities requesting assistance in the identification of subjects with possible ties to terrorism. TSC has access to supporting information behind terrorist records and can help resolve issues regarding identification. TSC also coordinates with the FBI’s Counterterrorism Division to help ensure appropriate follow-up actions are taken. TSC receives the vast majority of its information about known or suspected terrorists from the Terrorist Threat Integration Center, which assembles and analyzes information from a wide range of sources. In addition, the FBI provides TSC with information about purely domestic terrorism (i.e., activities having no connection to international terrorism). According to TSC officials, from December 1, 2003—the day TSC achieved an initial operating capability—to March 12, 2004, TSC consolidated information from 10 of the 12 watch lists shown in table 1 into a terrorist-screening database. The officials noted that the database has routinely been updated to add new information. Further, TSC officials told us that information from the remaining 2 watch lists—the U.S.
Immigration and Customs Enforcement’s Automated Biometric Identification System and the FBI’s Integrated Automated Fingerprint Identification System—will be added to the consolidated database at a future date not yet determined. A provision in the Intelligence Authorization Act for Fiscal Year 2004 required the President to submit a report to Congress by September 16, 2004, on the operations of TSC. Among other things, this report was to include a determination of whether the data from all the watch lists enumerated in our April 2003 report have been incorporated into the consolidated terrorist-screening database; a determination of whether there remain any relevant databases not yet part of the consolidated database; and a schedule setting out the dates by which identified databases—not yet part of the consolidated database—would be integrated. As of November 2004, the report on TSC operations had not been submitted to Congress. TSC, through the participation of the Departments of Homeland Security, Justice, and State and intelligence community representatives, determines what information in the terrorist-screening database will be made available for which types of screening purposes. In November 2003, the Department of Justice directed the FBI’s NICS Section to develop appropriate procedures for NICS searches of TSC records when the center and its consolidated watch list database were established and operational. In accordance with this directive, the FBI and TSC have implemented procedures that allow all eligible records in the center’s consolidated terrorist-screening database to be added to VGTOF and searched during NICS background checks. According to FBI and TSC officials, since December 2003, eligible records from the terrorist-screening database have been added to VGTOF and searched during NICS background checks.
For the period February 3 through June 30, 2004, FBI data and our interviews with state agency officials indicated that 44 NICS transactions resulted in valid matches with terrorist records in VGTOF. Of this total, 35 transactions were allowed to proceed because the background checks found no prohibiting information, such as felony convictions or illegal immigrant status, as shown in table 2. According to FBI data and our interviews with state agency officials, the 44 total valid matches shown in table 2 involved 36 different individuals (31 individuals had one match and 5 individuals had more than one match). We could not determine whether the 5 individuals with more than one match had actually attempted to purchase firearms or acquire firearms permits on separate occasions, in part because information related to applicable NICS records was not available due to legal requirements for destroying information on transactions that are allowed to proceed. Our work indicated that the multiple transactions could have, for example, been run for administrative purposes (e.g., rechecks). The FBI’s revised procedures for handling NICS transactions with valid matches to terrorist watch list records—i.e., to delay the transactions to give NICS personnel the chance to further research for prohibitors—have successfully resulted in the denial of firearms transactions involving known or suspected terrorists who have disqualifying factors. Specifically, two of the six denied transactions shown in table 2 were based on prohibiting information provided by FBI field agents that had not yet been entered in automated databases checked by NICS. According to agency officials in the two states that handled the transactions, FBI field agents provided information showing that one of the individuals was judged to be mentally defective and the other individual was an alien illegally or unlawfully in the United States. Based on this information, both firearm transfers were denied. 
The vast majority of NICS transactions that generated initial hits on terrorist records in VGTOF did not result in valid matches. Specifically, during the period in which the 44 valid matches were identified—February 3 through June 30, 2004—officials from the FBI’s NICS Section estimated that approximately 650 NICS transactions generated initial hits on terrorist records in VGTOF. The high rate of potential matches returned—i.e., VGTOF records returned as potential matches based upon the data provided by the prospective purchaser—is due to the expanded search parameters used to compare the subject of a background check with a VGTOF record. An FBI NICS Section official told us that by comparing data from the NICS transaction (e.g., name, date of birth, and Social Security number) with data from the VGTOF record, it generally is easy to determine if there is a potential or valid match. The official told us that NICS personnel drop the false hits from further consideration and follow up only on transactions considered to be potential or valid matches. A false hit, for example, could occur when the subject of a NICS transaction and the subject of a VGTOF record have the same or a similar name but a different date of birth and Social Security number. As table 2 shows, the 44 NICS transactions with valid matches to terrorist records in VGTOF were processed by the FBI’s NICS Section and 11 states during the period February 3 through June 30, 2004. In December 2004, FBI officials told us that during the 4 months following June 2004—that is, during July through October 2004—the FBI’s NICS Section handled an additional 14 transactions with valid matches to terrorist records in VGTOF. Of the 14 transactions with valid matches, FBI officials told us that 12 were allowed to proceed because the background checks found no prohibiting information, and 2 were denied based on prohibiting information. 
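The screening step described above—dropping false hits and following up only on potential or valid matches—can be illustrated with a short sketch. The comparison rules below are a plausible simplification for illustration only: the actual NICS/VGTOF matching logic and record formats are not public, and the field names (`name`, `dob`, `ssn`) are hypothetical.

```python
def classify_vgtof_hit(txn: dict, record: dict) -> str:
    """Illustrative three-way classification of a name-based VGTOF hit.

    txn and record are dicts with 'name', 'dob', and optional 'ssn'
    keys (hypothetical field names). Agreeing identifiers suggest a
    valid match; a similar name with conflicting date of birth and
    Social Security number is a false hit; anything in between is a
    potential match requiring further research by NICS personnel.
    """
    same_dob = txn.get("dob") == record.get("dob")
    ssn_known = bool(txn.get("ssn")) and bool(record.get("ssn"))
    same_ssn = ssn_known and txn["ssn"] == record["ssn"]
    conflicting_ssn = ssn_known and txn["ssn"] != record["ssn"]

    if same_dob and (same_ssn or not ssn_known):
        return "valid"       # identifiers agree: contact TSC, then counterterrorism
    if not same_dob and conflicting_ssn:
        return "false hit"   # similar name only: dropped from further consideration
    return "potential"       # ambiguous: NICS personnel research further
```

Under these assumed rules, a transaction sharing a name with a VGTOF record but carrying a different date of birth and Social Security number would be classified as a false hit and dropped, consistent with the example given in the report.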
It was beyond the scope of our work to assess the reliability or accuracy of the additional data. Federal and state procedures—developed and disseminated under the Department of Justice’s direction—contain general guidelines that allow FBI and state personnel to share information from NICS transactions with federal counterterrorism officials, in the pursuit of potentially prohibiting information about a prospective gun buyer. However, the procedures do not address the specific types of information that can or should be provided or the sources from which such information can be obtained. Justice’s position is that the types of information that can be routinely provided generally are limited to the information contained within the NICS database. Justice noted, however, that NICS personnel can request additional information from a gun dealer or from a law enforcement agency processing a firearms permit application, if that information is requested by a counterterrorism official in the legitimate pursuit of establishing a match between the prospective gun buyer and a VGTOF record. Most state personnel told us that—at the request of counterterrorism officials—the state would contact the gun dealer or refer to the state permit application to obtain and provide all available information related to a NICS transaction. FBI counterterrorism officials told us that receiving all available personal identifying information and other details from terrorism-related NICS transactions could be useful in conducting investigations. As mentioned previously, for all potential or valid matches with terrorist records in VGTOF, NICS personnel are to begin their research by contacting TSC to verify the match. According to the procedures used by the FBI’s NICS Section, during the screening process, TSC will ask NICS staff to provide “all information available in the transaction,” including the location of the firearms dealer, in the pursuit of identifying a valid match. 
If a coordinated effort by TSC and FBI NICS Section staff determines that the subject of the NICS transaction appears to match a terrorist record in VGTOF—based on the name and other descriptors—TSC is to refer the NICS Section staff to the FBI’s Counterterrorism Division for follow-up. Further, the procedures note that there will be instances when NICS Section staff are contacted directly by a case agent, who will ask the NICS Section staff to share “additional information from the transaction or provide necessary information to complete the transaction.” The Department of Justice’s position is that information from the NICS database is not to be used for general law enforcement purposes. Justice noted, however, that information about a NICS transaction can be shared with law enforcement agents or other government agencies in the legitimate pursuit of establishing a match between the prospective gun buyer and a VGTOF record and in the search for information that could prohibit the firearm transfer. Justice explained that the purpose of NICS is to determine the lawfulness of proposed gun transactions, not to provide law enforcement agents with intelligence about lawful gun purchases by persons of investigative interest. Thus, Justice told us that as set forth in NICS procedures, all information about a transaction hitting on a VGTOF record can be shared with field personnel in the pursuit of establishing whether the person seeking to buy the gun is the same person with the terrorist record in VGTOF. Justice added that this is done during the search for prohibiting information about the person whose name hit on the VGTOF record. Further, Justice noted that information about NICS transactions also can be and routinely is shared by NICS with law enforcement agencies when the information indicates a violation, or suspected violation, of law or regulation. 
According to Justice, the types of information that can be routinely shared under NICS procedures generally are limited to the information collected by or contained within the NICS database. Specifically, Justice noted that—in verifying a match and determining whether prohibiting information exists—the following information can be routinely shared with TSC and counterterrorism officials: certain biographical data from the ATF Form 4473 collected from a gun dealer for purposes of running a NICS check (e.g., name, date of birth, race, sex, and state of residence); the specific date and time of the transaction; the name, street address, and phone number of the gun dealer; and the type of firearm (e.g., handgun or long gun), if relevant to helping confirm identity. Justice told us that additional information contained in the ATF Form 4473, such as residence address or the number and make and model of guns being sold, is not required or necessary to run a NICS check. Justice noted, however, that there are times when NICS personnel will contact a gun dealer and request a residence address on a person who is determined to be prohibited from purchasing firearms—such as when there is a hit on a prohibiting arrest warrant record—so that the information can be supplied to a law enforcement agency to enforce the warrant. Similarly, Justice told us that NICS procedures do not prohibit NICS personnel from requesting a residence address from a gun dealer—or from a law enforcement agency issuing a firearms permit in the case of a permit check—if that information is requested by a counterterrorism official in the pursuit of establishing a match between the gun buyer and the VGTOF record. Justice noted that gun dealers are not legally obligated under either NICS or ATF regulations to provide this information to NICS personnel but frequently do cooperate and provide the residence information when specifically requested by NICS personnel. 
Further, Justice told us that in cases in which a match is established and the field does not have the residence address or wants the address or other additional information on the Form 4473 regarding a “proceeded” transaction, FBI personnel can then coordinate with ATF to request the information from the gun dealer’s records without a warrant. Specifically, Justice cited provisions in the Gun Control Act of 1968, as amended, that give the Attorney General the authority to inspect or examine the records of a gun dealer without a warrant “in the course of a reasonable inquiry during the course of a criminal investigation of a person or persons other than the licensee.” Justice explained that unless the person is prohibited or there is an indication of a violation or potential violation of law, FBI NICS personnel do not perform this investigative function for the field. FBI field personnel can, however, get the investigative information from gun dealers through coordination with ATF. We recognize that current procedures allow NICS personnel to share “all information available in the transaction” with TSC or counterterrorism officials, in the pursuit of identifying a true match and the discovery of information that is prohibiting. However, given Justice’s interpretation, we believe that clarifying the procedures would help ensure that the maximum amount of allowable information from terrorism-related NICS transactions is consistently shared with counterterrorism officials. For example, under current procedures, it is not clear if the types of information that can or should be routinely shared are limited to the information contained within the NICS database or if additional information can be requested from the gun dealer or from the law enforcement agency processing a permit application. 
The FBI’s NICS Section did not maintain data on the types of information it shared with TSC or counterterrorism officials to (1) verify matches between NICS transactions and VGTOF records or (2) pursue the existence of firearm possession prohibitors. According to the NICS Section, such data are not maintained because NICS procedures provide for the sharing of all information available from the transaction, including the location of the gun dealer, in the pursuit of identifying a true match. The NICS Section told us that data required to initiate a NICS check—such as name, date of birth, sex, race, state of residence, citizenship, and purpose code (e.g., firearm check or permit check)—are captured in the NICS database and shared on every NICS transaction. A NICS Section official told us that the specific or approximate date and time of each transaction also is consistently shared with TSC. TSC did maintain data on the types of information shared by the NICS Section. Specifically, in verifying matches, TSC data showed that NICS Section staff shared basic identifying information about the prospective purchasers (e.g., name, date of birth, and Social Security number). However, TSC data showed that NICS Section staff did not consistently share the specific location or phone number of the gun dealer. According to the procedures used by the FBI’s NICS Section, in the pursuit of identifying a valid match, TSC will ask NICS staff to provide the location of the gun dealer. The NICS Section told us that this includes the specific location and phone number of the gun dealer. According to TSC officials, once the FBI’s NICS Section has shared information on an identity match and TSC verifies the match, the information provided by the NICS Section is forwarded to the FBI’s Counterterrorism Division. The Counterterrorism Division is to then contact the NICS Section to follow up on the match. 
If the NICS Section does not receive a response from the Counterterrorism Division, the NICS Section is to aggressively pursue contacting the division to resolve the transaction. Counterterrorism Division officials told us the information provided by the NICS Section is routinely shared with field agents familiar with the terrorist records in VGTOF. NICS Section officials also told us that for each transaction with a valid match to a VGTOF record, NICS Section staff talked directly to a field agent to pursue prohibiting information. The NICS Section did not maintain data on what, if any, additional information from the NICS transactions was shared during these discussions. However, NICS Section officials told us that in no cases did NICS staff contact the gun dealer to obtain—and provide to counterterrorism officials—additional information about the firearm transaction (e.g., information such as the prospective purchaser’s residence address) that was not submitted as part of the initial NICS check or already contained within NICS. The NICS Section was aware of one instance in which NICS staff was asked by a counterterrorism official to obtain address information to assist in determining whether a VGTOF hit was a valid match. In that case—involving a firearm permit check—the NICS staff was able to get residence address information from the law enforcement agency processing the permit application and provide it to the counterterrorism official. According to the FBI-disseminated procedures used by state agencies, in the process of contacting TSC, state staff are to share “all information available in the transaction,” including the location of the firearms dealer, in the pursuit of identifying a true match and determining the existence of prohibiting information. If TSC and state staff make an identity match, TSC is to refer the state staff to the FBI’s Counterterrorism Division for follow-up.
Unlike the procedures used by the FBI’s NICS Section, the state agency procedures do not address whether there will be instances when state staff are to be contacted directly by a case agent, or what additional information from the NICS transaction could be shared during such contacts. Most state agency officials we contacted told us they interpreted the procedures as allowing them to share all available information related to a NICS transaction requested by counterterrorism officials, including any information contained on the forms used to purchase firearms or apply for firearms permits. Also, most state agency officials told us they were not aware of any restrictions or specific FBI guidance on the types of information that could or could not be shared with counterterrorism officials. According to the FBI’s NICS Section, the procedures used by state agencies note that in the process of contacting TSC, state staff will share all information available in the transaction in the pursuit of identifying a true match and the discovery of information that is prohibiting. As mentioned previously, we believe that clarifying the procedures would help ensure that the maximum amount of allowable information from terrorism-related NICS transactions is consistently shared with counterterrorism officials. The state agencies we contacted did not maintain data on the types of information they shared with TSC or counterterrorism officials to verify matches between NICS transactions and VGTOF records or pursue prohibiting information. However, in verifying matches, TSC data showed that state agency staff shared basic identifying information about the prospective purchasers (e.g., name, date of birth, and Social Security number). TSC data also showed that state agency staff did not consistently share the specific location or phone number of the gun dealer. 
TSC officials told us they basically can identify the date and time of a firearm transaction because TSC records the date and time NICS staff call TSC, which occurs very shortly after the gun dealer initiates the NICS check. TSC and FBI Counterterrorism Division officials told us they handle state agency referrals the same way as they handle referrals from the FBI’s NICS Section. Most of the state agency officials we contacted told us that if requested by counterterrorism officials (e.g., FBI field agents), state agency staff would either call the gun dealer or refer to the state permit application to obtain and provide all available information related to a NICS transaction. This information could include the prospective purchaser’s residence address and the type and number of firearms involved in the transaction. Officials in three states told us that state staff had shared the prospective purchaser’s residence address with FBI field agents. In one of the three cases, the field agent was interested in the residence address because the individual was in the country illegally and was wanted for deportation. In its written comments on a draft of this report, Justice noted that in the case of the individual who was in the country illegally, because the individual was a prohibited person, there was no restriction on obtaining and providing the additional information about the denied transaction to a law enforcement agency after the identity was already established. Justice also noted that regarding the sharing of information from state firearm permit applications, there is no Brady Act limitation on the state supplying transaction information to field agents for investigative purposes after identity is established, as the use and dissemination of state firearm permit information is governed by state law. 
According to officials from the FBI’s Counterterrorism Division, personal identifying information and other details about NICS transactions with valid matches to terrorist records in VGTOF could be useful to FBI field agents in conducting terrorism investigations. Specifically, the officials noted the potential usefulness of locator information, such as the prospective purchaser’s residence address, the date and time of the transaction, and the specific location of the gun dealer at which the transaction took place. The officials also told us that information on the type of firearm(s) involved in the transaction and whether the transaction involved the purchase of multiple firearms could also be useful to field agents. According to one official, in general, agents would want as much information as possible that could assist investigations. The FBI’s NICS Section noted, however, that NICS procedures provide for sharing information only when it is relevant to determining a true match between a NICS transaction and a terrorist record in VGTOF. Although the Attorney General and the FBI ultimately are responsible for managing NICS, the FBI has not routinely monitored the states’ handling of terrorism-related background checks. For example, the FBI does not know the number and results of terrorism-related NICS transactions handled by state agencies since June 30, 2004. Also, the FBI has not routinely assessed the extent to which applicable state agencies have implemented and followed procedures for handling NICS transactions involving terrorist records in VGTOF. The FBI’s plans call for conducting audits of the states’ compliance with the procedures every 3 years. Our work revealed several issues state agencies have encountered in handling NICS transactions involving terrorist records in VGTOF, including delays in implementing procedures and a mishandled transaction. 
The FBI has not routinely monitored the states’ handling of NICS transactions involving terrorist records in VGTOF. For example, in response to our request for information—covering February 3 through June 30, 2004—the FBI’s NICS Section reviewed all state NICS transactions that hit on VGTOF records during this period to identify potential matches. We used this information to follow up with state agencies and create table 2 in this report. However, since June 30, 2004, the FBI’s NICS Section has not tracked or otherwise attempted to collect information on the number of NICS transactions handled by state agencies that have resulted in valid matches with terrorist records in VGTOF or whether such transactions were approved or denied. NICS Section officials told us that while the NICS Section does not have aggregate data, FBI officials at TSC and the FBI’s Counterterrorism Division are aware of valid-match transactions that state agencies handle. Given the significance of valid matches, we believe it would be useful for the FBI’s NICS Section to have aggregate data on the number and results of terrorism-related NICS transactions handled by state agencies, particularly if the data indicate that known or suspected terrorists may be receiving firearms. In response to our inquiries, in October 2004, Justice and FBI NICS Section officials told us they plan to study the need for information on state NICS transactions with valid matches to terrorist records in VGTOF and the means by which such information could be obtained. Also, while the FBI has taken steps to notify state agencies about the revised procedures for handling NICS transactions involving VGTOF records—including periodic teleconferences and presentations at a May 2004 NICS User Conference—the FBI has not routinely assessed the extent to which states have implemented and followed the procedures. 
According to the FBI, the NICS Section performed an assessment of all NICS transactions involving VGTOF records from February 3, 2004 (the day the block on VGTOF records was removed) to March 22, 2004, in order to assess the extent to which the states implemented and followed procedures. For example, a NICS Section official told us that NICS personnel called state agencies to make sure they contacted TSC to verify matches and also contacted counterterrorism officials to pursue prohibiting information. However, according to the NICS Section, the assessment concluded on March 23, 2004, because NICS Section personnel could not fully assess the reliability or accuracy of the information provided by the states. Officials from two states told us that additional FBI oversight could help ensure that applicable procedures are followed. One of the state officials told us that such FBI oversight could be particularly important since NICS transactions with valid matches to VGTOF records are rare and there could be turnover of state personnel who process the transactions. As part of routine state audits the FBI conducts every 3 years, the FBI plans to assess the states’ handling of terrorism-related NICS transactions. Specifically, every 3 years, the FBI plans to audit whether designated state and local criminal justice agencies are utilizing the written procedures for processing NICS transactions involving VGTOF records. Moreover, for states with a decentralized structure for processing NICS transactions—i.e., states with multiple local law enforcement entities that conduct background checks (rather than one central agency)—the goal of the audit is to determine if local law enforcement agencies conducting the checks have in fact received the written procedures, and if so, whether the procedures are being followed.
However, given that the relevant NICS transactions involve known or suspected terrorists who could pose homeland security risks, we believe that a 3-year audit cycle is not sufficient. Also, under a 3-year audit cycle, information from NICS transactions with valid matches to terrorist records in VGTOF may have been destroyed pursuant to federal or state requirements and therefore may not be available for review. Further, a 3-year audit cycle may not be sufficient to help ensure the timely identification and resolution of issues state agencies may encounter in handling terrorism-related NICS transactions. State agencies have encountered several issues in handling NICS transactions involving terrorist records in VGTOF. Specifically, of the 11 states we contacted, 9 states experienced one or more of the following issues: 4 states had delays in implementing procedures, 3 states questioned whether state task forces were notified, 2 states had problems receiving responses from FBI field agents, 1 state mishandled a transaction, and 3 states raised concerns about notifications. Four of the 11 states we contacted had delays of 3 months or more in implementing NICS procedures for processing transactions that hit on VGTOF records—procedures that were to have been effective on February 3, 2004. Each of the 4 states processed one NICS transaction with a valid match to terrorist records in VGTOF before becoming aware of and implementing the new procedures. In processing the transactions, our work indicated that at least 3 of the 4 states did not contact TSC, as required by the procedures. The fourth state did not have information on how the transaction was processed. Although our work indicated that the FBI provided the new procedures to state agencies in January 2004, 1 of the 4 states did not implement the procedures until after a state official attended the May 2004 NICS User Conference.
Officials in the other 3 states were not aware of the new procedures at the time we made our initial contacts with them in June 2004 (2 states) and August 2004 (1 state). Subsequent discussions with officials in 2 of the 3 states indicated the new procedures have been implemented. In November 2004, an official in the third state told us the procedures had not yet been implemented. Officials in 3 of the 11 states told us they believed their respective state’s homeland security or terrorism task forces should be notified when a suspected terrorist attempts to purchase a firearm in their state, but the officials said they did not know if TSC or the FBI provided such notices. Officials from the FBI’s Counterterrorism Division did not know the extent to which FBI field agents notified state and local task forces about terrorism-related NICS transactions, but the officials told us that such notifications likely are made on a need-to-know basis. Justice and FBI officials acknowledged that this issue warrants further consideration. Officials in 2 of the 11 states told us that in the pursuit of prohibiting information, their respective states had problems receiving responses from FBI field agents. These problems led to delays in each state’s ability to resolve one NICS transaction with a valid match to a terrorist record in VGTOF. According to state officials, under the respective state’s laws, the two transactions were not allowed to proceed during the delays, even though prohibiting information had not been identified. The two transactions were resolved as follows: In response to our inquiries, in November 2004, an analyst in one of the states contacted an FBI field agent, who told the analyst that the subject of the background check had been removed from VGTOF. A state official told us the NICS transaction was in a delay status for nearly 10 months. 
Regarding the other state, the NICS transaction was in an unresolved status for a period of time specified by state law, after which it was automatically denied. According to state officials, a state analyst made initial contact with an FBI field agent, who said he would call the analyst back. The state officials told us that the analyst made several follow-up calls to the agent without receiving a response. As of November 2004, the FBI had not responded to our request for information regarding the issues or circumstances as to why the FBI field agents had not contacted the two states’ analysts. One of the 11 states mishandled a NICS transaction with a valid match to a terrorist record in VGTOF. Specifically, although the state received notification of the VGTOF hit, the information was not relayed to state staff responsible for processing NICS transactions. Consequently, the transaction was approved without contacting TSC or FBI counterterrorism officials. We informed the state that the FBI’s NICS Section had identified the transaction as matching a VGTOF record. Subsequently, state personnel contacted TSC and an FBI field agent, who determined that prohibiting information did not exist. State officials told us that to help prevent future oversights, the state has revised its internal procedures for handling NICS transactions that hit on VGTOF records. Officials in 3 of the 11 states told us that the automatic (computer-generated) notification of NICS transactions that hit on a certain (sensitive) category of terrorist records in VGTOF is not adequately visible to system users and could be missed by state personnel processing NICS transactions. The FBI has taken steps to address this issue and plans to implement computer system enhancements in June 2005.
Under revised procedures effective February 3, 2004, all NICS transactions with potential or valid matches to terrorist watch list records in VGTOF are automatically delayed to give NICS personnel at the FBI and applicable state agencies an opportunity to further research the transactions for prohibiting information. The primary purpose of the revised procedures is to better ensure that known or suspected members of terrorist organizations who have disqualifying factors do not receive firearms in violation of federal or state law. An additional benefit has been to support the nation’s war against terrorism. Thus, it is important that the maximum amount of allowable information from these background checks be consistently shared with counterterrorism officials. However, our work revealed that federal and state procedures for handling terrorism-related NICS transactions do not clearly address the specific types of information that can or should be routinely provided to counterterrorism officials or the sources from which such information can be obtained. For example, under current procedures, it is not clear if certain types of potentially useful information, such as the residence address of the prospective purchaser, can or should be routinely shared. Also, under current procedures, it is not clear if FBI and state personnel can routinely call a gun dealer or a law enforcement agency processing a permit application to obtain and provide counterterrorism officials with information not submitted as part of the initial NICS check. Further, some types of information—such as the specific location of the dealer from which the prospective purchaser attempted to obtain the firearm—have not consistently been shared with counterterrorism officials. Consistently sharing the maximum amount of allowable information could provide counterterrorism officials with valuable new information about individuals on terrorist watch lists. 
The FBI has plans that call for conducting audits every 3 years of the states’ handling of terrorism-related NICS transactions. However, given that these NICS background checks involve known or suspected terrorists who could pose homeland security risks, more frequent FBI oversight or centralized management is needed. The Attorney General and the FBI ultimately are responsible for managing NICS, and the FBI is a lead law enforcement agency responsible for combating terrorism. However, the FBI does not have aggregate data on the number of NICS transactions involving known or suspected members of terrorist organizations that have been approved or denied by state agencies to date. Also, the FBI has not assessed the extent to which the states have implemented and followed applicable procedures for handling terrorism-related NICS transactions. Moreover, under a 3-year audit cycle, relevant information from the background checks may have been destroyed pursuant to federal or state laws and therefore may not be available for review. Further, more frequent FBI oversight or centralized management would help address other types of issues we identified—such as several states’ delays in implementing procedures and one state’s mishandling of a terrorism-related NICS transaction. Proper management of NICS transactions with valid matches to terrorist watch list records is important. Thus, we recommend that the Attorney General (1) clarify procedures to ensure that the maximum amount of allowable information from these background checks is consistently shared with counterterrorism officials and (2) either implement more frequent monitoring by the FBI of applicable state agencies or have the FBI centrally manage all terrorism-related NICS background checks. We requested comments on a draft of this report from the Department of Justice. Also, we provided a draft of sections of this report for comment to applicable agencies in the 11 states we contacted.
On January 7, 2005, Justice provided us written comments, which were signed by the Acting Assistant Director of the FBI’s Criminal Justice Information Services Division. According to Justice and FBI officials, the draft report was provided for review to Justice’s Office of Legal Policy, the FBI’s NICS Section (within the Criminal Justice Information Services Division), the FBI’s Counterterrorism Division, and the Terrorist Screening Center. Justice agreed with our two recommendations. Specifically, regarding our recommendation to clarify NICS procedures for sharing information from NICS transactions with counterterrorism officials, Justice stated that (1) the written procedures used by the FBI’s NICS Section will be revised and (2) additional written guidance should be provided to applicable state agencies. Regarding our recommendation for more frequent FBI oversight or centralized management of terrorism-related NICS background checks, Justice has requested that the FBI report to the department by the end of January 2005 on the feasibility of having the FBI’s NICS Section process all NICS transactions involving VGTOF records. In its written comments, Justice also provided (1) a detailed discussion of the Brady Act’s provisions relating to the retention and use of NICS information and (2) clarifications on the states’ handling of terrorism-related NICS transactions. These comments have been incorporated in this report where appropriate. The full text of Justice’s written comments is reprinted in appendix III. Officials from 7 of the 11 states we contacted told us they did not have any comments. Officials from the remaining 4 states did not respond to our request for comments. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to interested congressional committees and subcommittees.
We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report or wish to discuss the matter further, please contact me at (202) 512-8777 or ekstrandl@gao.gov, or my Assistant Director, Danny R. Burton, at (214) 777-5600 or burtond@gao.gov. Other key contributors to this report were Eric Erdman, Lindy Coe-Juell, David Alexander, Katherine Davis, and Geoffrey Hamilton. Our overall objective was to review how the Federal Bureau of Investigation’s (FBI) National Instant Criminal Background Check System (NICS) handles checks of prospective firearms purchasers that hit on and are confirmed to match terrorist watch list records. The FBI and designated state and local criminal justice agencies use NICS to determine whether or not individuals seeking to purchase firearms or apply for firearms permits are prohibited by law from receiving or possessing firearms. Specifically, we addressed the following questions: What terrorist watch lists are searched during NICS background checks? How many NICS transactions have resulted in valid matches with terrorist watch list records? For valid matches, what are federal and state procedures for sharing NICS-related information with federal counterterrorism officials? To what extent does the FBI monitor the states’ handling of NICS transactions with valid matches to terrorist watch list records? What issues, if any, have state agencies encountered in handling such transactions? Also, we obtained summary information on federal and state requirements for retaining information related to NICS transactions with valid matches to terrorist watch list records (see app. II). In performing our work, we reviewed applicable federal laws and regulations, FBI policies and procedures, and relevant statistics. 
We interviewed federal officials at and reviewed documentation obtained from the Department of Justice’s Office of Legal Policy; the FBI’s Counterterrorism Division; the FBI’s NICS Section and Criminal Justice Information Services Division at Clarksburg, West Virginia; and the Terrorist Screening Center (TSC), which is the multiagency center responsible for consolidating federal terrorist watch lists. Generally, our analyses focused on background checks processed by the FBI’s NICS Section and 11 states during the period February 3, 2004 (when the FBI’s procedures for handling terrorism-related NICS transactions became effective), through June 30, 2004. The 11 states we contacted (California, Colorado, Florida, Hawaii, Illinois, Massachusetts, North Carolina, Pennsylvania, Tennessee, Texas, and Virginia) were those that FBI data indicated—and the states subsequently confirmed—had processed NICS checks (during the period February 3 through June 30, 2004) that resulted in one or more valid matches with terrorist watch list records. To determine what terrorist watch list records are searched during NICS background checks, we interviewed officials from the FBI’s NICS Section and the Criminal Justice Information Services Division—the FBI division responsible for maintaining the Violent Gang and Terrorist Organization File (VGTOF)—and obtained relevant documentation. Also, we interviewed TSC officials and obtained documentation and other relevant information on TSC’s efforts to consolidate federal terrorist watch list records into a single database. Eligible records from TSC’s consolidated database are shared with VGTOF and searched during NICS background checks. To determine the number of NICS transactions that resulted in valid matches with terrorist records in VGTOF—during the period February 3 through June 30, 2004—we interviewed officials from the FBI’s NICS Section and reviewed FBI data. 
The FBI did not have comprehensive or conclusive information on transactions handled by state agencies, but FBI data indicated that 12 states (California, Colorado, Florida, Georgia, Hawaii, Illinois, Massachusetts, North Carolina, Pennsylvania, Tennessee, Texas, and Virginia) likely had processed one or more NICS transactions with a valid match to terrorist records in VGTOF during this period. We interviewed agency officials in the 12 states to corroborate the FBI data and to obtain additional information about the related background checks (e.g., whether the transactions were allowed to proceed or were denied). We also worked with officials from the FBI’s NICS Section and state agencies to resolve any inconsistencies. For example, our work revealed that 1 of the 12 states (Georgia) had not processed a terrorism-related NICS transaction during the period we reviewed. As such, our subsequent interviews and analysis focused on background checks processed by the FBI’s NICS Section and the remaining 11 states. To determine federal and state procedures for sharing NICS-related information with federal counterterrorism officials, we reviewed applicable federal laws and regulations, including the Brady Handgun Violence Prevention Act and NICS regulations. We also reviewed FBI and state procedures for handling NICS transactions involving terrorist records in VGTOF—procedures that were developed and disseminated under the Department of Justice’s direction. We interviewed officials from the Department of Justice’s Office of Legal Policy, the FBI’s NICS Section, and the 11 states to determine the scope and types of NICS-related information that could be shared with federal counterterrorism officials under applicable procedures. 
Further, for NICS transactions with valid matches to terrorist records in VGTOF—during the period February 3 through June 30, 2004—we interviewed officials from the FBI’s NICS Section and Counterterrorism Division, TSC, and the 11 states to determine the types of NICS-related information that were shared with counterterrorism officials. To determine the extent to which the FBI has monitored the states’ handling of NICS transactions involving VGTOF records, we interviewed officials from the Department of Justice’s Office of Legal Policy, the FBI’s NICS Section, and state agencies. We reviewed documents the FBI used to notify state agencies about the procedures for handling terrorism-related NICS transactions. We also reviewed data and other information the FBI maintained on transactions handled by the states. Further, we obtained information on the FBI’s plans to periodically audit whether designated state and local criminal justice agencies are utilizing the written procedures for processing NICS transactions involving VGTOF records. To identify issues state agencies have encountered in handling terrorism-related NICS transactions, we interviewed officials from the 11 states. For identified issues, we interviewed officials from the Department of Justice and the FBI’s NICS Section and Counterterrorism Division to discuss the states’ issues and obtain related information. To determine federal and state requirements for retaining information from terrorism-related NICS transactions, we interviewed officials from the FBI’s NICS Section and state agencies and reviewed applicable federal laws and regulations. We also reviewed a Department of Justice report that addressed the length of time the FBI and applicable state agencies retain information related to firearm background checks.
Further, we interviewed officials from the FBI and reviewed relevant FBI documents to determine how the federal 24-hour destruction requirement for NICS records of allowed firearms transfers would affect the FBI’s NICS Section and state policies and procedures. We performed our work from April through December 2004 in accordance with generally accepted government auditing standards. We were unable to fully assess the reliability or accuracy of the data regarding valid matches with terrorist records in VGTOF because the data related to ongoing terrorism investigations. However, we discussed the sources of data with FBI, TSC, and state agency officials and worked with them to resolve any inconsistencies. We determined that the data were sufficiently reliable for the purposes of this review. The results of our interviews with officials in the 11 states may not be representative of the views and opinions of others nationwide. On July 21, 2004, the FBI’s NICS Section implemented a provision in federal law that requires any personal identifying information in the NICS database related to allowed firearms transfers to be destroyed within 24 hours after the FBI advises the gun dealer that the transfer may proceed. The law does not provide an exception for retaining information from NICS transactions with valid matches to terrorist records in VGTOF. Thus, information in the NICS database from such transactions also is subject to the federal 24-hour destruction provision. Before the 24-hour destruction provision took effect, federal regulations permitted the retention of all information related to allowed firearms transfers for up to 90 days. The federal 24-hour retention statute does not specifically address whether identifying information in the NICS database related to permit checks— which do not involve gun dealers—is subject to 24-hour destruction. According to the FBI’s NICS Section, the 24-hour destruction requirement does not apply to permit checks. 
Rather, information related to permit checks is maintained in the NICS database for up to 90 days after the background check is initiated. In implementing the 24-hour destruction provision, the FBI’s NICS Section revised its policies and procedures to allow for the retention of nonidentifying information related to each proceeded background check for up to 90 days (e.g., information about the gun dealer). According to the FBI, by retaining the nonidentifying information, the FBI’s NICS Section can initiate firearm retrieval actions when new information reveals that an individual who was approved to purchase a firearm should not have been. The nonidentifying information is retained for all NICS transactions that are allowed to proceed, including transactions involving subjects of terrorist watch lists. Also, in implementing the 24-hour destruction provision, the FBI’s NICS Section created a new internal classification system for transactions that are “open.” Specifically, if NICS staff cannot make a final determination (i.e., proceed or denied) on a transaction within 3 business days, the NICS Section is to automatically change the status to open. The NICS Section maintains personal identifying information and other details related to open transactions until either (1) a final determination on the transaction is reached or (2) the expiration of the retention period for open transactions, which is a period of no more than 90 days. Regarding terrorism-related NICS transactions, the open designation would be used, for example, if NICS Section staff did not receive responses from FBI field agents within 3 business days. The 24-hour destruction provision did not affect federal policies for retaining NICS records related to denied firearms transactions. Under provisions in NICS regulations, personal identifying information and other details related to denied firearms transactions are retained indefinitely. 
Also, according to Justice and FBI officials, there are no limitations on the retention of NICS information by TSC or counterterrorism officials, who received the information to verify identities and determine whether firearm-possession prohibitors exist. Among the states, requirements vary for retaining records of allowed transfers of firearms. Some states purge a firearm transaction record almost immediately after the firearm sale is approved, while other states retain such records for longer periods of time. Under NICS regulations, state records are not subject to the federal 24-hour destruction requirement if the records are part of a system created and maintained pursuant to independent state law. Thus, states with their own state law provisions may retain records of allowed firearms transfers for longer than 24 hours. The retention of state records related to denied firearms transactions varies.

Membership in a terrorist organization does not prohibit a person from owning a gun under current law. Thus, during presale screening of prospective firearms purchasers, the National Instant Criminal Background Check System historically did not utilize terrorist watch list records. However, for homeland security and other purposes, the Federal Bureau of Investigation (FBI) and applicable state agencies began receiving notices (effective February 3, 2004) when such screening involved watch list records. GAO determined (1) how many checks have resulted in valid matches with terrorist watch list records, (2) procedures for providing federal counterterrorism officials relevant information from valid-match background checks, and (3) the extent to which the FBI monitors or audits the states' handling of such checks. During the period GAO reviewed--February 3 through June 30, 2004--a total of 44 firearm-related background checks handled by the FBI and applicable state agencies resulted in valid matches with terrorist watch list records.
Of this total, 35 transactions were allowed to proceed because the background checks found no prohibiting information, such as felony convictions, illegal immigrant status, or other disqualifying factors. Federal and state procedures--developed and disseminated under the Department of Justice's direction--do not address the specific types of information from valid-match background checks that can or should be provided to federal counterterrorism officials or the sources from which such information can be obtained. Justice officials told GAO that information from the background check system is not to be used for general law enforcement purposes but can be shared with law enforcement agents or other government agencies in the legitimate pursuit of establishing a match between the prospective gun buyer and a terrorist watch list record and in the search for information that could prohibit the firearm transfer. Most state agency personnel GAO contacted were not aware of any restrictions or limitations on providing valid-match information to counterterrorism officials. FBI counterterrorism officials told GAO that routinely receiving all available personal identifying information and other details from valid-match background checks could be useful in conducting investigations. As part of routine audits the FBI conducts every 3 years, the Bureau plans to assess the states' handling of firearm-related background checks involving terrorist watch list records. However, given that these background checks involve known or suspected terrorists who could pose homeland security risks, more frequent FBI oversight or centralized management would help ensure that suspected terrorists who have disqualifying factors do not obtain firearms in violation of the law. The Attorney General and the FBI ultimately are responsible for managing the background check system, although they have yet to assess the states' compliance with applicable procedures for handling terrorism-related checks. 
Also, more frequent FBI oversight or centralized management would help address other types of issues GAO identified--such as several states' delays in implementing procedures and one state's mishandling of a terrorism-related background check.
The 13th Congressional District of Florida comprises DeSoto, Hardee, Sarasota, and parts of Charlotte and Manatee Counties. In the November 2006 general election, there were two candidates in the race to represent the 13th Congressional District: Vern Buchanan, the Republican candidate, and Christine Jennings, the Democratic candidate. The State of Florida certified Vern Buchanan the winner of the election. The margin of victory was 369 votes out of a total of 238,249 votes counted. Table 1 summarizes the results of the election and shows that the results from Sarasota County exhibited a significantly higher undervote rate than in the other counties in the congressional district. In Florida, the Division of Elections in the Secretary of State’s office helps the Secretary carry out his or her responsibilities as the chief election officer. The Division of Elections is responsible for establishing rules governing the use of voting systems in Florida. Voting systems cannot be used in any county in Florida until the Florida Division of Elections has issued a certification of the voting system’s compliance with the Florida Voting System Standards. The Florida Voting Systems Certification program is administered by the Bureau of Voting Systems Certification in the Division of Elections. An elected supervisor of elections is responsible for implementing elections in each county in Florida in accordance with Florida election laws and rules. The supervisor of elections is responsible for the purchase and maintenance of the voting systems as well as the preparation and use of the voting systems to conduct each election. In the 2006 general election, Sarasota County used voting systems manufactured by ES&S. The State of Florida has certified different versions of ES&S voting systems.
The version used in Sarasota County was designated ES&S Voting System Release 4.5, Version 2, Revision 2, and consisted of iVotronic DREs, a Model 650 central count optical scan tabulator for absentee ballots, and the Unity election management system. It was certified by the State of Florida on July 17, 2006. The certified system includes different configurations and optional elements, several of which were not used in Sarasota County. The election management part of the voting system is called Unity; the version that was used was 2.4.4.2. Figure 1 shows the overall election operation using the Unity election management system and the iVotronic DRE. Sarasota County used iVotronic DREs for early and election day voting. Specifically, Sarasota County used the 12-inch iVotronic DRE, hardware version 1.1 with firmware version 8.0.1.2. Some of the iVotronic DREs are configured with Americans with Disabilities Act (ADA) functionality, which includes the use of audio ballots. The iVotronic DRE uses a touch screen—a pressure-sensitive graphics display panel—to display and record votes (see fig. 2). The machine has a storage case that also serves as the voting booth. The operation of the iVotronic DRE requires using a personalized electronic ballot (PEB), which is a storage device with an infrared window used for transmission of ballot data to and from the iVotronic DRE. The iVotronic DRE has four independent flash memory modules, one of which contains the program code—firmware—that runs the machine and the remaining three flash memory modules store redundant copies of ballot definitions, machine configuration information, ballots cast by voters, and event logs. The iVotronic DRE includes a VOTE button that the voter has to press to cast a ballot and record the information in the flash memory. The iVotronic DRE also includes a compact flash card that can be used to load sound files onto iVotronic DREs with ADA functionality. 
The iVotronic DRE’s firmware can be updated through the compact flash card. Additionally, at the end of polling, the ballots and audit information are to be copied from the internal flash memory module to the compact flash card. To use the iVotronic DRE for voting, a poll worker activates the iVotronic DRE by inserting a PEB into the PEB slot after the voter has signed in at the polling place. After the poll worker makes selections so that the appropriate ballot will appear, the PEB is removed and the voter is ready to begin using the system. The ballot is presented to the voter in a series of display screens, with candidate information on the left side of the screen and selection boxes on the right side (see fig. 3). The voter can make a selection by touching anywhere on the line, and the iVotronic DRE responds by highlighting the entire line and displaying an X in the box next to the candidate’s name. The voter can also change his or her selection by touching the line corresponding to another candidate or by deselecting his or her choice. “Previous Page” and “Next Page” buttons are used to navigate the multipage ballot. After completing all selections, the voter is presented with a summary screen with all of his or her selections (see fig. 4). From the summary screen, the voter can change any selection by selecting the race. The race will be displayed to the voter on its own ballot page. When the voter is satisfied with the selections and has reached the final summary screen, the red VOTE button is illuminated, indicating the voter can now cast his or her ballot. When the VOTE button is pressed, the voting session is complete and the ballot is recorded on the iVotronic DRE. In Sarasota County’s 2006 general election, there were nine different ballot styles with between 28 and 40 races, which required between 15 and 21 electronic ballot pages to display, and 3 to 4 summary pages for review purposes. 
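The voting session described above, from PEB activation through page navigation, summary review, and the VOTE button, can be sketched as a simple state machine. The states and events below are illustrative assumptions; the actual iVotronic firmware is proprietary ES&S code and is not modeled here.

```python
# Minimal state-machine sketch of the iVotronic voting session described
# above. State and event names are hypothetical labels for illustration.

VALID = {
    "idle":        {"insert_peb": "ballot_page"},   # poll worker activates terminal
    "ballot_page": {"next_page": "ballot_page",     # multipage ballot navigation
                    "prev_page": "ballot_page",
                    "last_page": "summary"},
    "summary":     {"edit_race": "ballot_page",     # any race can be reopened
                    "press_vote": "recorded"},      # VOTE button on final summary
    "recorded":    {},                              # ballot written to flash memory
}

def step(state: str, event: str) -> str:
    """Advance the session; an invalid event leaves the state unchanged."""
    return VALID.get(state, {}).get(event, state)

# One voter's path through the terminal:
s = "idle"
for e in ["insert_peb", "next_page", "last_page", "press_vote"]:
    s = step(s, e)
print(s)  # recorded
```

The point of the sketch is that a ballot is recorded only from the summary state, which mirrors the report's description that the VOTE button is illuminated only once the voter reaches the final summary screen.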
Our analysis of the 2006 general election data from Sarasota County does not identify any particular voting machines or machine characteristics that could have caused the large undervote in Florida’s 13th Congressional District race. The undervotes in Sarasota County for the congressional race were generally distributed across all machines and precincts. Using voting system data that we obtained from Sarasota County, we found that 1,499 iVotronic DREs recorded votes in the 2006 general election; 84 iVotronic DREs recorded votes during early voting, and 1,415 iVotronic DREs recorded votes on election day. Using these data, we verified that the vote counts for the contestant, contestee, and undervotes match the reported vote totals for Sarasota County in Florida’s 13th Congressional District race. As can be seen in table 2, the undervote rate in early voting was significantly higher than in election day voting. The range of the undervote rate for all machines was between 0 and 49 percent, with an average undervote rate of 14.3 percent. When just the early voting machines are considered, the undervote rate ranged between 5 and 28 percent. The largest number of undervotes cast on any one machine on election day was 39. While the range of ballots cast on any one machine on election day was between 1 and 121, the median number of ballots cast on any one machine was 66. The range of undervote rate by precinct was between 0 and 41 percent, and the average undervote by precinct was about 14.8 percent. Prior to the elections, Sarasota County’s voting systems were subjected to several different tests that included testing by the manufacturer, certification testing by the Florida Division of Elections, testing by independent testing authorities, and logic and accuracy testing by Sarasota County’s Supervisor of Elections. 
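The undervote rates discussed above are simple ratios of undervotes to ballots cast, computed per machine and county-wide. A minimal sketch of that computation, using hypothetical machine records rather than the actual Sarasota County data, might look like the following.

```python
# Hypothetical per-machine records: (machine_id, ballots_cast, fl13_undervotes).
# These values are illustrative only, not actual Sarasota County data.
machines = [
    ("V101", 66, 9),
    ("V102", 121, 17),
    ("V103", 48, 7),
]

# County-wide undervote rate: total undervotes over total ballots cast.
total_ballots = sum(ballots for _, ballots, _ in machines)
total_undervotes = sum(under for _, _, under in machines)
overall_rate = total_undervotes / total_ballots

# Per-machine undervote rates, used to look for outlier machines.
per_machine_rate = {mid: under / ballots for mid, ballots, under in machines}
```

The same per-machine rates, aggregated by precinct, would yield the precinct-level figures.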
After the 2006 general election, the State of Florida conducted an audit of Sarasota County's election that included a review of the iVotronic source code, parallel tests, and an examination of Sarasota County's election procedures. Although these tests and reviews, along with certain controls that were in place during the election, provide some assurance that the voting systems in Sarasota County functioned correctly, they do not provide reasonable assurance that the iVotronic DREs did not contribute to the undervote. According to ES&S officials, ES&S tested the version of the iVotronic DRE that was used in Sarasota County in 2001-2002, but they could not provide us documentation for those tests because the documentation had not been retained. The Florida Division of Elections conducted certification testing of the iVotronic DRE and the Unity election management system before Sarasota County acquired the system from the manufacturer. The certification process included tests of the election management system and the conduct of mock primary and general elections on the entire voting system. ES&S Voting System, Release 4.5, Version 2, Revision 2, was certified by the Florida Division of Elections on July 17, 2006. According to Florida Division of Elections officials, testing of each version focuses on the new components, and components that were included in prior versions are not as rigorously tested. The 8.0.1.2 version of the iVotronic firmware was first tested as part of ES&S Release 4.5, Version 1, which was certified in 2005. Version 2 introduced version 2.4.4.2 of the Unity election management system, which was certified in August 2005. Certification testing was conducted on software that was received from an independent test authority, who witnessed the building of the firmware from the source code.
An independent test authority also conducted environmental testing of the iVotronic DRE in 2001, which the Florida Division of Elections relied upon for certification. A logic and accuracy test was conducted by Sarasota County on October 20, 2006, on 32 iVotronic DREs, and it successfully verified that all ballot positions on all nine ballot styles could be properly recorded. In addition, the use of a provisional ballot and audio ballot were tested, as well as machines configured for early voting with all nine ballot styles.

After the 2006 general election, the Florida Division of Elections conducted an audit of Sarasota County's 2006 general election that included two parallel tests, an examination of the certified voting system and conduct of the election by Sarasota County's elections office, and an independent review of the source code for the iVotronic DRE firmware. On completing this audit, the audit team concluded that there was no evidence suggesting that the official election results were in error or that the voting systems contributed to the undervote in Sarasota County. The parallel tests were performed using 10 iVotronic DREs—5 used in the 2006 general election and 5 that were not used. Four of the machines in each test replicated the votes cast on four election day iVotronic DREs. The fifth machine in each test used an ad hoc test script in which each ballot cast combined a randomly chosen vote pattern with a vote selection for the 13th Congressional District race drawn from 10 predetermined vote patterns. The audit report asserts that testing a total of 10 machines is more than adequate to identify any machine problems or irregularities that could have contributed to undervotes in the Florida-13 race. However, we concluded that the results from the testing of 10 machines cannot be applied to all 1,499 iVotronic DREs used during the 2006 general election because the sample was not random and the sample size was too small.
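Our conclusion that 10 machines are too few to generalize from can be illustrated with a simple hypergeometric calculation. The figures below, such as the assumption that 5 percent of machines behaved incorrectly, are illustrative only and are not drawn from the election data.

```python
from math import comb

def prob_sample_misses_all(total, faulty, sample):
    """Probability that a sample drawn without replacement contains
    none of the faulty machines (hypergeometric, zero successes)."""
    return comb(total - faulty, sample) / comb(total, sample)

# Illustrative assumption: even if 5 percent of the 1,499 machines
# (about 75) behaved incorrectly, a random sample of only 10 machines
# would miss all of them more often than not.
p_miss = prob_sample_misses_all(1499, 75, 10)
```

Under this assumption, a clean result on 10 machines says little about the remaining 1,489, which is why a larger, randomly drawn sample is needed.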
In examining whether voting systems that were used in Sarasota County matched the systems that were certified by the Florida Division of Elections, the Florida audit team examined the Unity election management system and the firmware installed on six iVotronic DREs. The audit team confirmed that the software running on the Unity election management system and the firmware in the six iVotronic DREs matched the certified versions held in escrow by the Florida Division of Elections. On the basis of its review, the audit team concluded that there is no evidence to indicate that the iVotronic DREs had been compromised or changed. We agree that the test verifies that those six machines were not changed, but any extrapolation beyond this cannot be statistically justified because the size of the sample is too small. Therefore, these tests cannot be used to obtain reasonable assurance that the 1,499 machines used in the general election used the certified firmware.

A software review and security analysis of the iVotronic firmware version 8.0.1.2 was conducted by a team led by Florida State University's SAIT Laboratory. The eight experts on the software review team attempted to confirm or refute many different hypotheses that, if true, might explain the undervote in the race for the 13th Congressional District. In doing so, they made several observations about the code, which we were able to independently verify. The software review and our verification of the observations were helpful, but a key shortcoming was the lack of assurance that the source code reviewed by the SAIT team or by us, if compiled, would correspond to the iVotronic firmware that was used in Sarasota County for the 2006 election. According to ES&S and Florida Division of Elections officials, in May 2005 an independent testing authority witnessed the process of compiling the source code and building the version of firmware that was eventually certified by the Florida Division of Elections.
According to ES&S officials, if necessary, ES&S can recreate the firmware from the source code, but the firmware would not be exactly identical to the firmware certified by the Florida Division of Elections because the embedded date and time stamp in the firmware would be different. The software review team also looked for security vulnerabilities in software that could have been exploited to cause the undervote. Although the team found several software vulnerabilities, the team concluded that none of them were exploited in Sarasota in a way that would have contributed to the undervote. We did not independently verify the team’s conclusion. The Unity election management system and the iVotronic DREs are the major voting system components that may require testing to determine whether they contributed to the large undervote in Sarasota County. Our review of tests already conducted and documentation from the election provide us reasonable assurance that the key functions of the Unity election management system—election definition and vote tabulation— did not contribute to the undervote. The election definitions created using the Unity election management system are tested during logic and accuracy testing to demonstrate that they include all races, candidates, and issues and that each of the items can be selected by a voter. The votes tabulated on the iVotronic DRE at each precinct matched the data uploaded to the Unity election management system, and the totals from the precinct results tapes agree with that obtained by Unity. Further, the state audit confirmed that the Unity election management system software running in Sarasota County matched the escrowed version certified by the Florida Division of Elections. 
We have reasonable assurance that the number of ballots recorded by the iVotronic DREs is correct because this number is very close to the number of people recorded on the precinct registers as showing up at the polling places to vote either during early voting or on election day. This assurance also allows us to conclude that issues such as votes cast by "fleeing voters"—votes that are cast by poll workers for voters who leave the polling place before pressing the button to cast the vote—and the potential loss of votes during a system shutdown did not affect the undervote in this election. If these issues had occurred, they would have caused a discrepancy between the number of voters who signed in at the polling place to vote and the public counts recorded on the iVotronic DREs. We have reasonable assurance that provisional ballots were appropriately handled by the iVotronic DREs and the Unity election management system. We also verified that during the Florida certification test process, the Division of Elections relied on successful environmental and shock testing conducted by an independent test authority.

We found that prior testing and activities do not provide reasonable assurance that all iVotronic DREs used in Sarasota County on election day were using the hardware and firmware certified for use by the Florida Division of Elections. Sarasota County has records indicating that only certified versions were procured from ES&S, and the firmware version is checked in an election on the zero and results tapes. However, because there was no independent validation of the system versions, we cannot rule out that modifications were made to the systems that left them inconsistent with the certified version. As we previously mentioned, the firmware comparison of only 6 iVotronic DREs in the state audit is insufficient to support generalization to all 1,499 iVotronic DREs that recorded votes during the election.
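The reconciliation described above, comparing precinct register sign-ins against the public counts recorded on the machines, can be sketched as follows; the precinct names and counts are hypothetical.

```python
# Hypothetical precinct data: voters who signed the precinct register
# versus the public count recorded on the iVotronic DREs in that precinct.
precincts = {
    "Precinct 1": {"sign_ins": 412, "public_count": 412},
    "Precinct 2": {"sign_ins": 305, "public_count": 304},
    "Precinct 3": {"sign_ins": 528, "public_count": 528},
}

# Any nonzero difference could indicate issues such as fleeing voters
# or votes lost during a system shutdown.
discrepancies = {
    name: data["sign_ins"] - data["public_count"]
    for name, data in precincts.items()
    if data["sign_ins"] != data["public_count"]
}
```

In this sketch only the precinct with a mismatch would be flagged for further review.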
Without reasonable assurance that all iVotronic DREs are running the same certified firmware, it is difficult for us to rely on the results of other testing that has been conducted, such as the parallel tests or the logic and accuracy tests.

Prior testing of the iVotronic DREs verified only 13 of the 112 ways we identified by which a voter could select a candidate in Florida's 13th Congressional District race. Specifically, on an iVotronic DRE, a voter could (1) initially select either candidate or neither candidate (i.e., undervote), (2) change the vote on the initial screen, and (3) use a combination of page back and review screen options to change or verify his or her selection before casting the ballot. By taking into account these variations, our analysis found at least 112 different ways a voter could make his or her selection in Florida's 13th Congressional District race, assuming that it was the only race on the ballot. Of these 112 ways, the Florida certification tests and the Sarasota County logic and accuracy tests verified 3, and the Florida parallel tests verified 10—meaning that of the 112 ways, only 13 have been tested. Because these different ways to select a candidate were not verified, we do not have reasonable assurance that the system will properly handle expected forms of voter behavior.

During the setup of the iVotronic DRE, sometimes referred to as the clear and test process, the touch screens are calibrated by using a stylus to touch the screen at 20 different locations. The calibration process is designed to align the display screen with the touch screen input. It has been reported that a miscalibrated machine could affect the selection process by highlighting a candidate that is not aligned with what the voter selected.
We identified two reported cases on election day where miscalibration of an iVotronic DRE led to its closure and discontinued use for the rest of the day. While a miscalibrated machine could certainly make an iVotronic DRE harder to use, it is not clear whether miscalibration would have contributed to the undervote. We did not identify any prior testing or activities that would help us understand the effect of a miscalibrated iVotronic DRE on the undervote.

On the basis of our analysis of all prior test and audit activities, we propose that a firmware verification test, a ballot test, and a calibration test be conducted to try to obtain increased assurance that the iVotronic DREs used in Sarasota County during the 2006 general election did not cause the undervote. We propose that the firmware verification testing be started first, once the necessary arrangements have been made, such as access to the needed machines and the development of test protocols and detailed test procedures. Once we have reasonable assurance that the iVotronic DREs are running the same certified firmware, we could conduct the ballot test and calibration test on a small number of machines to determine whether it is likely the machines accurately recorded and counted the ballots. If the firmware verification tests are successfully conducted, we would have much more confidence that the iVotronic DREs will behave similarly when tested. If there are differences in the firmware running on the iVotronic DREs, we would need to reassess the number of machines that need to be tested for ballot testing and calibration testing in order for us to have confidence that the test results would hold for all 1,499 iVotronic DREs used during the election.
In other words, if we are reasonably confident that the same software is used in all 1,499 machines, then we are more confident that the results of the other tests on a small number of machines can be used to obtain increased assurance that the iVotronic DREs did not cause the undervote. Although the proposed tests would provide increased assurance, they would not conclusively eliminate the machines as a cause of the undervote. We propose to conduct a firmware verification test using a statistical sampling approach that can provide reasonable assurance that all 1,499 iVotronic DREs are running the certified version of firmware. The exact number of machines that would be tested depends on the confidence level desired and how much error can be tolerated. We propose drawing a representative sample from all the iVotronic DREs that recorded votes in the general election. With a sample size of 115 iVotronic DREs, which would be divided between sequestered and nonsequestered machines, and assuming that there are no test failures, we would be able to conclude with a 99 percent confidence level that no more than 4 percent of the 1,499 iVotronic DREs used in the election were using uncertified firmware. We suggest a test approach similar to what was used by the Florida Division of Elections when it verified the firmware for 6 iVotronic DREs. We estimate that the firmware testing for 115 machines could be conducted in about 5 to 7 days and would require about 5 or 6 people, once the necessary arrangements have been made. The machines would be transported to a test facility specified by Sarasota County election officials where we could perform the test. 
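The sampling figures above can be checked with a hypergeometric calculation: if 4 percent of the 1,499 machines (about 60) ran uncertified firmware, a random sample of 115 machines would include at least one of them with better than 99 percent probability. The sketch below assumes simple random sampling without replacement.

```python
from math import comb

def detection_probability(total, uncertified, sample):
    """Probability that a without-replacement random sample contains at
    least one machine with uncertified firmware (hypergeometric)."""
    return 1 - comb(total - uncertified, sample) / comb(total, sample)

uncertified = round(0.04 * 1499)   # 4 percent of 1,499 machines is about 60
p_detect = detection_probability(1499, uncertified, 115)
```

Equivalently, if all 115 sampled machines pass, one can conclude at the 99 percent confidence level that no more than 4 percent of the machines ran uncertified firmware.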
The activities involved in conducting a firmware verification test would include locating and retrieving the selected iVotronic DRE from the storage facility, transporting it to the test facility, opening the DRE, extracting the chip with the firmware, reading the contents of the chip using a specialized chip reader, and comparing the contents with the certified firmware to determine whether any differences exist. To conduct this test, we would need commercially available specialized hardware and software similar to that used by the Florida Division of Elections in its firmware comparison test.

We propose conducting ballot testing on 10 iVotronic DREs, each configured with one of the nine different ballot styles, with the 10th machine configured as an early voting machine with all nine ballot styles. We would test all 112 ways to select a candidate on the early voting machine. On the election day machines, we would test the 112 different ways distributed across the 9 machines in a random manner, meaning each machine would on average record 12 to 13 ballots. Assuming that (1) reasonable assurance is obtained that all iVotronic DREs used during the election were using the same certified firmware, and (2) we found no failures during the ballot testing, this testing would provide increased assurance that the iVotronic DREs used during the election, both in early voting and in election day voting, were able to accurately record and count ballots when using any of the 112 ways to select a candidate in the Florida-13 race. We would plan to code each ballot by including an identifier in the write-in candidate field for either the U.S. senator or governor's race. Using this write-in coding, we could examine the ballot image and confirm that each ballot was accurately recorded and counted by the iVotronic DRE.
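The random distribution of the 112 test cases across the 9 election day machines could be generated along these lines; the case identifiers, machine names, and seed are illustrative, not part of our actual test procedures.

```python
import random

random.seed(13)  # fixed seed so the assignment is reproducible

test_cases = list(range(1, 113))   # identifiers for the 112 ways to vote
machine_ids = [f"election-day-{n}" for n in range(1, 10)]

# Assign each test case to a machine at random; on average each of the
# 9 machines receives about 12 to 13 cases.
assignment = {m: [] for m in machine_ids}
for case in test_cases:
    assignment[random.choice(machine_ids)].append(case)

total_assigned = sum(len(cases) for cases in assignment.values())
```

Each assigned case would then be cast as a ballot on its machine, with the case identifier recorded in the write-in field so the recorded ballot image can be matched back to the test case.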
Any encountered failures would also be more rapidly attributed to a specific test case, and we would be able to more readily repeat the test case to determine whether we have a repeatable condition. Testing 112 ways to select a candidate on a single machine would also provide us some additional assurance that the volume of ballots cast on election day did not cause a problem. We note that casting 112 ballots on a single machine is more than the number cast on over 99 percent of the 1,415 machines used on election day. We estimate the ballot testing would take about 2 to 3 days and require the equivalent of 2 people, once the necessary arrangements have been made.

Because little is known about the effect of miscalibration on the behavior of an iVotronic DRE, we propose to deliberately miscalibrate an iVotronic DRE and verify the functioning of the machine. We propose to identify different ways to miscalibrate a machine and to test ballots on the miscalibrated iVotronic DRE to verify that it still properly records votes. With this test we would confirm whether (1) the review screen displays the same selection in the Florida-13 race as was highlighted in the selection screen, and (2) the vote is recorded as it was displayed on the review screen. Again, we would plan to use the write-in candidate option to verify the proper recording of the ballot. This test would demonstrate whether the system correctly records a vote for the race and hence whether it contributed to the undervote. We estimate that the calibration test could be completed in about 1 day by 2 people, once the necessary arrangements have been made.

Should the task force ask us to conduct the proposed testing, we want to make the task force aware of several other matters that would need to be addressed before we could begin testing. These activities would require some time and resources to complete before testing could commence.
First, we would need to gain access to iVotronic DREs that have been subject to a sequestration order in the state court system of Florida. If we do not have access to the needed machines, we would be unable to obtain reasonable assurance that the machines used on election day were using certified software, and without this assurance, the results from prior tests and any results of our ballot and calibration tests would be less meaningful because we would be unable to apply the results to all 1,499 iVotronic DREs used during the election.

Second, we would need to agree upon an appropriate facility for the tests. The Sarasota County Supervisor of Elections has indicated that we can use its warehouse space, but because of upcoming elections in November and January, the only time the election officials would be able to provide us this space and the necessary support is between November 26 and December 7, 2007. If testing cannot be completed during this time period, Sarasota County officials stated that they would not be able to assist us until February 2008.

Third, some tests may require commercially available specialized software, hardware, or other tools. We would need to make arrangements to either borrow or purchase such testing tools before commencing testing.

Fourth, in order to conduct any tests, we would need to develop test protocols and detailed test procedures and steps. We also anticipate that we would need to conduct a dry run, or dress rehearsal, of our test procedures to ensure that our test tools function properly and that our time estimates are reasonable.

Finally, we would need to make arrangements for video recording of our testing. It would be our preference to have a visual record of the tests to document the actual test conduct and to facilitate certain types of test analysis.

We recognize that human interaction with the ballot layout could be a potential cause of the undervote.
Although we have not explored this issue in our review, we note that there is an ongoing academic study that is exploring this issue using voting machines obtained from ES&S. We believe that such experiments could be useful and could provide insight into the ballot layout issue.

During our review, we noted that several suggestions have been offered as possible ways to establish that voters are intentionally undervoting and to provide some assurance that the voting systems did not cause the undervote.

First, a voter-verified paper trail could provide an independent confirmation that the touch screen voting systems did not malfunction in recording and counting the votes from the election. The paper trail would reflect the voter's selections and, if necessary, could be used in the counting or recounting of votes. This issue is recognized in the Florida State University SAIT source code review as well as the 2005 and draft 2007 Voluntary Voting Systems Guidelines prepared for the Election Assistance Commission. We have previously reported on the need to implement such a function properly.

Second, explicit feedback to voters that a race has been undervoted and a prompt for voters to affirm their intent to undervote might help prevent many voters from unintentionally undervoting a race. On the iVotronic DREs, such feedback and prompts are provided only when the voter attempts to cast a completely blank ballot, not when a voter undervotes in individual races.

Third, offering a "none of the above" option in a race would provide voters with the opportunity to indicate that they are intentionally undervoting. The State of Nevada provides this option in certain races in its elections.

Decisions about these or other suggestions about ballot layout or voting system functions should be informed by human factors studies that assess their effectiveness in accurately recording voters' preferences, making voting systems easier to use, and preventing unintentional undervotes.
The high undervote encountered in Sarasota County in the 2006 election for Florida's 13th Congressional District has raised questions about whether the voting systems accurately recorded and counted the votes cast by eligible voters. Other possible reasons for the undervote could be that voters intentionally undervoted or that voters did not properly cast their ballots on the voting systems, potentially because of issues relating to the interaction between voters and the ballot. The focus of our review has been to determine whether the voting systems—the iVotronic DREs, in particular—contributed to the undervote. We found that the prior reviews of Sarasota County's 2006 general election have provided valuable information about the voting systems. In some cases, we were able to rely on this information to eliminate areas of concern. This allowed us to identify the areas where increased assurances were needed to answer the questions being raised. Accordingly, the primary focus of the tests we are proposing is to obtain increased assurance that the results of the prior reviews and our proposed testing can be applied to all the iVotronic DREs used in the election. Our proposed tests involving the firmware comparison, ballot testing, and calibration testing could help reduce the possibility that the undervote was caused by the iVotronic DREs. However, even after completing the tests, we would not have absolute assurance that the iVotronic DREs did not play any role in the large undervote. Absolute assurance is impossible to achieve because we are unable to recreate the conditions of the election in which the undervote occurred. By successfully conducting the proposed tests, we could reduce the possibility that the iVotronic DREs were the cause of the undervote and shift attention to the possibilities that the undervote was the result of voters intentionally undervoting or voters who did not properly cast their votes on the voting system.
We provided draft copies of this statement to the Secretary of State of Florida, the Supervisor of Elections of Sarasota County, and ES&S for review and comment. The Florida Department of State provided technical comments, which we incorporated. The Sarasota County Supervisor of Elections appreciated the opportunity to review the draft, but provided us no comments. In its comments, ES&S stated that it believes that the collective results of testing already conducted on the Sarasota County voting systems have demonstrated that they performed properly and as they were designed to function and that all votes were accurately captured and counted as cast in Florida’s 13th Congressional District race. Further, ES&S asserts that tests and analyses should be conducted to examine the effect of the ballot display on the undervote, which it believes is the most probable cause of the undervote. We disagree that the collective results of testing already conducted on the Sarasota County voting systems adequately demonstrate that the voting systems could not have contributed to the undervote in the Florida-13 race. First, as we have cited, we do not have adequate assurance that all the iVotronic DREs used in Sarasota County used the firmware certified by the Florida Division of Elections. Without this assurance, it is difficult for us to apply the results from the other tests to all 1,499 machines that recorded votes during the election because we are uncertain that all machines would have behaved in a similar manner. Further, we believe that expected forms of voter behavior to select a candidate in the Florida-13 race were not thoroughly tested. While ES&S asserts that such processes would have no effect on the iVotronic DRE’s ability to capture and record a voter’s selection, we did not identify testing that verified this. 
Further, while ES&S states that the testing of a deliberately miscalibrated iVotronic DRE would result in a clearly visible indication of which candidate was selected, we could not identify any testing that demonstrated this. We acknowledge that the large undervote in Florida's 13th Congressional District race could have been caused by voters who intentionally undervoted or voters who did not properly cast their ballots, potentially because of issues related to the human interaction with the ballot. However, the focus of our review, as agreed with the task force, was to review whether the voting systems could have contributed to the large undervote. ES&S also provided technical comments, which we incorporated as appropriate.

Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other members of the task force may have at this time. For further information about this statement, please contact Keith Rhodes, Chief Technologist, at (202) 512-6412 or rhodesk@gao.gov, or Naba Barkakati at (202) 512-4499 or barkakatin@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other key contributors to this statement include James Ashley, James Fields, Jason Fong, Cynthia Grant, Geoffrey Hamilton, Richard Hung, John C. Martin, Jan Montgomery, Jennifer Popovic, Sidney Schwartz, and Daniel Wexler.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

In November 2006, about 18,000 undervotes were reported in Sarasota County in the race for Florida's 13th Congressional District (FL-13).
After the contesting of the election results in the House of Representatives, the task force unanimously voted to seek GAO's assistance in determining whether the voting systems contributed to the large undervote in Sarasota County. GAO agreed with the task force on an engagement plan, including the following review objectives: (1) What voting systems were used in Sarasota County and what processes governed their use? (2) What was the scope of the undervote in Sarasota County in the general election? (3) What tests were conducted on the voting systems in Sarasota County prior to the general election and what were the results of those tests? (4) Considering the voting systems tests conducted after the general election, are additional tests needed to determine whether the voting systems contributed to the undervote? To conduct its work, GAO met with officials from the State of Florida, Sarasota County, and Election Systems and Software (ES&S)--the voting systems manufacturer--and reviewed voting systems test documentation. GAO analyzed election data to characterize the undervote. On the basis of its assessments of prior testing and other activities, GAO identified potential additional tests for the Sarasota County voting systems. In the 2006 general election, Sarasota County used voting systems manufactured by ES&S, specifically iVotronic direct recording electronic (DRE) voting systems during early and election day voting and the Unity election management system, which handles the election administration functions, such as ballot design and election reporting. GAO's analysis of the 2006 general election data from Sarasota County did not identify any particular voting machines or machine characteristics that could have caused the large undervote in the FL-13 race. The undervotes in Sarasota County were generally distributed across all machines and precincts. 
GAO's analysis found that some of the prior tests and reviews conducted by the State of Florida and Sarasota County provide assurance that certain components of the voting systems in Sarasota County functioned correctly, but they are not enough to provide reasonable assurance that the iVotronic DREs did not contribute to the undervote. Specifically, GAO found that assurance is lacking in three areas, and proposes that tests be conducted to address those areas. First, because there is insufficient assurance that the firmware in all the iVotronic DREs used in the election matched the certified version held by the Florida Division of Elections, GAO proposes that a firmware verification test be conducted on a representative sample of 115 (of the 1,499) machines that were used in the general election. Second, because an insufficient number of ways to select a candidate in the FL-13 race were tested, GAO proposes that a test be conducted to verify all 112 ways that GAO identified to select a candidate. Third, because no prior tests were identified that address the effect of a miscalibrated iVotronic DRE on the undervote, GAO proposes that an iVotronic DRE be deliberately miscalibrated to verify the accurate recording of ballots under these conditions. GAO expects these three tests would take 2 weeks, once the necessary arrangements are made. Should the task force ask GAO to conduct the proposed tests, several matters would need to be addressed before testing could begin, including obtaining access to the iVotronic DREs that have been subject to a sequestration order, arranging for a test site, obtaining some commercially available test tools, developing test protocols and detailed test procedures, and arranging for the video recording of the tests. Sarasota County election officials have indicated that they can help GAO access the machines and provide a test site between November 26 and December 7, 2007. 
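GAO does not state the statistical rationale behind the proposed sample of 115 machines, but the number is consistent with common attribute-sampling practice. A hedged sketch of that reasoning, assuming a simple random sample and using the exact hypergeometric probability (our assumption, not GAO's documented design):

```python
# Attribute-sampling sketch (an assumed rationale, not GAO's documented
# design): the chance that a random sample of n machines out of N misses
# every one of d machines carrying mismatched firmware.
from math import comb

def p_miss_all(N: int, n: int, d: int) -> float:
    """P(sample of n from N contains none of the d nonconforming machines)."""
    return comb(N - d, n) / comb(N, n)

N, n = 1499, 115  # population and sample sizes from the Sarasota proposal
# If roughly 3% of machines (d = 45) carried the wrong firmware, a sample
# of 115 that found nothing would be quite unlikely.
print(p_miss_all(N, n, 45))
```

In other words, a clean result on a sample of this size gives high confidence that mismatched firmware, if present at all, affected only a very small fraction of the 1,499 machines.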
Although the proposed tests could help provide increased assurance, they would not provide absolute assurance that the iVotronic DREs did not cause the large undervote in Sarasota County. The successful conduct of the proposed tests could reduce the possibility that the voting systems caused the undervote and shift attention to the possibilities that the undervote was the result of intentional actions by voters or of voters who did not properly cast their votes on the voting system.
The semiconductor manufacturing equipment and materials industry produces a variety of equipment, chemicals, gases, films, and other materials critical to manufacturing integrated circuits. According to Semiconductor Equipment and Materials International (SEMI), the $86 billion global semiconductor manufacturing equipment and materials industry provides the equipment necessary for a $256 billion semiconductor manufacturing industry (see fig. 1). This industry in turn produces the computer chips needed by many other industries, including a $1.6 trillion electronics industry. Semiconductors are devices that enable computers and other products such as cell phones to process and store information. Producing semiconductors is a multistep sequence of photographic and chemical processes during which electronic circuits are gradually created on a wafer made of pure semiconducting material, most commonly silicon. For example, the equipment used to manufacture semiconductors performs tasks such as depositing a thin chemical film on wafers and selectively removing the film by etching it away, creating microscopic transistors. The technological complexity of semiconductors is indicated by the feature size (the width of the smallest etched lines) on the wafer. Smaller feature sizes measured in nanometers allow for more components to be integrated on a single semiconductor, thus creating more powerful semiconductors. Each reduction in feature size—for example, from 90 nanometers to 65 nanometers—is considered a move to a greater level of technological sophistication, or a move to the next “generation” of manufacturing technology. Consistent with multilateral export controls, the U.S. government classifies semiconductor manufacturing equipment and materials as dual-use items because they have both commercial and military uses.
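The "generations" discussed here follow the industry's rough scaling rule of about 0.7x in linear feature size per node, which roughly halves the area a circuit element occupies (0.7 squared is about 0.5). The sketch below applies that rule of thumb to reproduce the commonly named nodes; the snapping list is our illustration, not a figure from the report.

```python
# Rule-of-thumb sketch (illustrative, not data from the report): each process
# generation shrinks feature size by about 0.7x, roughly halving device area.
NAMED_NODES = (500, 350, 250, 180, 130, 90, 65, 45)  # nanometers

ladder = [500]
while ladder[-1] > 45:
    shrunk = ladder[-1] * 0.7
    # Snap the 0.7x shrink to the nearest commonly named node size.
    ladder.append(min(NAMED_NODES, key=lambda node: abs(node - shrunk)))

print(ladder)  # [500, 350, 250, 180, 130, 90, 65, 45]
```

On this ladder, the gap the report describes, 65 nanometers in China versus 45 nanometers in the United States, is exactly one step, i.e., one generation.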
Under the authority granted in the Export Administration Act of 1979, Commerce’s Bureau of Industry and Security (BIS) administers export controls for dual-use items through the requirements contained in the Export Administration Regulations (EAR). Under these regulations, exporters are to either obtain prior government authorization from BIS in the form of a license, general authorization, or license exception, or determine that a license is not needed before exporting dual-use items. The EAR establishes a framework for regulating the export of dual-use items by identifying the characteristics and capabilities of items that may require export licenses. These characteristics and capabilities are contained in the Commerce Control List, which provides detailed specifications for about 2,400 dual- use items, divided into 10 categories (see app. II for a list of the 10 categories). Each category is subdivided into five groups designated by letters A through E (see app. II for a list of the five groups). For example, semiconductors and semiconductor manufacturing equipment and materials fall under Category 3 (electronics), with manufacturing equipment placed in Category 3B (test, inspection, and production equipment) and materials placed in Category 3C (materials). Exports of semiconductor manufacturing equipment and materials to China are primarily controlled for national security and antiterrorism reasons. Appendix III describes the specific equipment and materials that require a license for export to China. For these items, the overall policy of the United States is to approve exports for civilian end uses but generally to deny exports that will make a direct and significant contribution to Chinese military capabilities. Semiconductor manufacturing equipment and materials, as well as other sensitive dual-use items, also are controlled under the multilateral Wassenaar Arrangement.
Forty countries are signatories to the arrangement, including Germany, Japan, The Netherlands, and the United States. Formed in 1996, the Wassenaar Arrangement succeeded the Coordinating Committee for Multilateral Export Controls. Most advanced semiconductor equipment and materials are included on the Wassenaar Arrangement’s Basic List, which controls items that are “major or key elements for indigenous production, use, or enhancement of military capabilities.” One type of equipment, metal organic chemical vapor deposition (MOCVD) reactors, which may be used to produce radiation-hardened electronics for use in commercial and military applications, is included on the Wassenaar Arrangement’s Sensitive List. No semiconductor equipment or materials are included on Wassenaar’s Very Sensitive List. One of the arrangement’s principal goals is to prevent “destabilizing accumulations” of advanced dual-use items and technologies through the reporting of export information by its members. The Wassenaar Arrangement lacks a “no undercut” rule, under which a member would agree not to permit the export of any listed item or items that had been officially denied an export license by another member. Rather than having a no undercut rule, Wassenaar members exchange information on denied transactions as the sole means of trying to achieve the arrangement’s goals. Thus, even if the United States denies a license for a specific piece of semiconductor manufacturing equipment to China, other Wassenaar members are not restricted from selling that same item themselves. Since 2002, China’s ability to produce commercial semiconductors has steadily advanced but remains approximately one generation behind the United States. As of July 2008, China’s most advanced semiconductor manufacturing company can produce integrated circuits with a feature size of 65 nanometers, compared with U.S. companies that are producing semiconductors with 45-nanometer feature sizes.
China’s ability to produce advanced integrated circuits continues to depend on whether it can obtain equipment, manufacturing technology, and materials from other countries. However, China has begun developing an indigenous capacity to build some types of advanced semiconductor manufacturing equipment, which may enable it to reduce its dependence on some foreign-sourced equipment. Lastly, China also can obtain semiconductor manufacturing equipment on the used market. Although this equipment cannot be used for state-of-the-art production, it nonetheless contributes to China’s production capacity. In 2002, we reported that, between 1986 and 2001, China had narrowed its technology gap from a span of five generations (or 10 years) behind U.S. commercial state-of-the-art production to approximately one generation (1 to 2 years). Since 2002, commercial state-of-the-art production has continued to advance. Companies in the United States now produce integrated circuits with a feature size of 45 nanometers, while the most advanced company in China is producing integrated circuits with a feature size of 65 nanometers—approximately one generation apart. Companies in China produce different types of integrated circuits, including microprocessors and various types of memory. Figure 2 shows the advances made by companies in the United States and China from 1994 through 2007. Although companies in China are capable of producing near-state-of-the-art commercial integrated circuits, they mainly produce integrated circuits that are several generations old and are used for internal consumption in China’s consumer electronics industry. These integrated circuits are used in products such as cell phones, TVs, DVD players, and personal computers. 
In 2006, SEMI estimated that 82 percent of China’s production capacity produces integrated circuits that are four to seven generations (ranging from 500 to 180 nanometers) behind state-of-the-art manufacturing, and approximately 7 percent of integrated circuits produced are one generation or less (90 nanometers and below) behind current state-of-the-art capabilities. Commercial Chinese semiconductor manufacturing companies are still largely dependent on foreign sources of equipment and materials to fuel technology advances. Currently, European, Japanese, and U.S. companies are the leading suppliers of semiconductor manufacturing equipment for facilities producing advanced integrated circuits. Additionally, to advance its technological capacity, China has partnered with foreign companies or created incentives for foreign companies to locate in China. China continues to rely on imports of semiconductor manufacturing equipment from Europe, Japan, and the United States for production of advanced integrated circuits. The demand is being driven by several advanced integrated circuit manufacturers, including Semiconductor Manufacturing International Corporation (SMIC), Hua Hong NEC, and Hynix-ST. In 2006, spending in China on semiconductor equipment from foreign sources was approximately $1.6 billion, according to SEMI. This accounted for nearly 97 percent of the value of all equipment purchases from foreign and domestic sources. China also relies on foreign material imports, including gases and chemicals, to manufacture semiconductors. Although China produces these items, according to industry experts, it does not produce them in sufficient quantity or quality to meet its domestic demand. As China increases its integrated circuit production capacity, its materials consumption also grows, increasing its reliance on imported materials. China also relies on partnerships with foreign companies to fuel its technology developments.
Through joint ventures or incentive programs to encourage international companies to locate in China, China has gained access to more advanced technology than it previously had or could produce on its own. In 2002, we reported that five of China’s eight newest major integrated circuit manufacturing facilities were established through joint ventures and the other three were wholly owned foreign companies. One of these facilities, a semiconductor manufacturing facility owned by Motorola, a U.S. company, was sold to SMIC in 2003. The foreign-owned and publicly traded SMIC is the largest and most advanced integrated circuit company in China. China has continued to acquire advanced technology through partnerships. For example, SMIC obtained advanced manufacturing know-how on the production of integrated circuits specifically for memory through a cooperative arrangement with a German company, Infineon, in 2006. Moreover, in December 2007, SMIC obtained a license from IBM, a U.S.-based company, to use its 45-nanometer manufacturing technology in the production of integrated circuits for mobile applications such as cell phones. Likewise, Hynix-ST, another new facility in China established in 2006 as a joint venture between Korea’s Hynix and Switzerland’s STMicroelectronics, also provided China with advanced commercial integrated circuit memory technology. China is developing the domestic capability to build some types of semiconductor manufacturing equipment, which might eventually reduce its dependence on foreign-sourced equipment. Currently, China has more than 50 companies that produce equipment for semiconductor manufacturing, 2 of which, Advanced Micro-Fabrication Equipment and North Microelectronics, produce equipment to manufacture advanced integrated circuits, according to SEMI.
Equipment made by these companies is found in some of China’s most advanced fabrication facilities such as SMIC, although the equipment is still being tested and is not being used to manufacture integrated circuits for commercial purposes. Chinese-made equipment constitutes a small but growing share of domestic equipment purchases. SEMI estimates that in 2006 semiconductor manufacturers in China purchased $56 million in domestically produced equipment, about 3 percent of all equipment purchases by Chinese manufacturers, but more than double the value of domestic equipment purchases made in 2003. Despite recent advances, China cannot domestically produce all of the equipment needed to manufacture advanced semiconductors. For example, China lacks a domestic source of lithography equipment, which is used to imprint circuits on semiconductor materials and is necessary to advance reductions in feature size. The United States also lacks a domestic source of state-of-the-art lithography equipment. The last remaining competitive U.S. manufacturer, Silicon Valley Group, was sold to a Dutch company in 2001. Japan and The Netherlands are currently the global leaders in the manufacture of lithography equipment. China is able to expand its capacity for manufacturing integrated circuits through used semiconductor manufacturing equipment purchases. Although purchases of used equipment do not enhance China’s ability to produce advanced integrated circuits, they do provide China with an ongoing source of equipment to expand its production capacity. Additionally, used equipment may enable the production of integrated circuits for China’s military since military systems generally are designed around older technology, not state-of-the-art semiconductors. For instance, we reported in 2002 that China’s most sophisticated production facilities, although about 2 years behind U.S.
state of the art, were nonetheless capable of producing integrated circuits that were more advanced than those used in some of the most advanced U.S. weapons. Both U.S. and Chinese production capabilities have advanced since 2002, but the same paradigm—military systems generally are designed around older technology—still exists. Thus, China’s ability to manufacture less sophisticated chips through purchases of used equipment potentially enhances its military capabilities. Before the introduction of the VEU program, export licenses provided the only mechanism by which U.S. companies could ship most advanced semiconductor manufacturing equipment and materials to China. Export licenses are assessed individually by an interagency team based on such factors as the item, its intended end use, and the end-user. A license also may contain conditions, including the requirement for postshipment verification, to ensure the item is used as intended. The VEU program, introduced in June 2007, marks a shift toward a more end-user-based system of export controls by allowing select, pre-screened Chinese entities to receive certain controlled items, including semiconductor manufacturing equipment and materials, without a license. The program established recordkeeping requirements to provide assurance that items exported under the VEU program are being used as intended, and recipients must agree to host discretionary on-site reviews by U.S. government personnel. Before Commerce introduced the VEU program in 2007, export licenses provided the only means for U.S. companies to export most advanced semiconductor equipment and materials to China. An export license authorizes the export, reexport, or transfer of a specific item or items to a specified recipient. Commerce administers the export licensing system for dual-use exports, including semiconductor equipment and materials, with input from DOD, State, and DOE. 
Each agency makes a recommendation to approve, deny, or return a license application without action. Disagreements over the disposition of a license are resolved through a dispute resolution process. The intelligence community can provide information to the interagency review team on prospective end-users, although it does not make licensing recommendations. Interagency reviewers consider a number of factors in evaluating export license applications for semiconductor equipment and materials to China, including the type and quantity of items to be shipped, the end-user and stated end use, and foreign availability. To establish the identity and reliability of prospective recipients, reviewing agencies may request that Commerce check the “bona fides” of the recipient of the technology prior to shipment, also known as a prelicense check. They also may condition approval of a license on the exporter or end-user meeting certain conditions. For instance, a license condition might specify how an item should be used or require that Commerce conduct a postshipment verification check on the recipient of the item. The VEU program, announced by Commerce in June 2007, marks a shift to a more end-user-based system of export controls for semiconductor equipment and materials by allowing the export of some items to China without a license. The VEU program operates in parallel to the existing export control framework and is designed for “trusted” Chinese companies with a long licensing history and a record of using U.S.-controlled items for civilian end uses. Among the first five companies certified as validated end-users in October 2007, three are authorized to receive semiconductor equipment or materials.
Prospective validated end-users, or others applying on their behalf, must submit an application to Commerce that includes, among other things, a list of items they wish to receive under the VEU authorization, the locations where these items will be received, and a commitment to accept on-site visits by U.S. government personnel. In evaluating applications, Commerce conducts a four-part internal review:

- Compliance. Verifies whether the applicant has met all the regulatory requirements specified in the EAR, confirms the applicant’s ownership and organizational structure, reviews the candidate’s licensing and compliance history, and assesses the candidate’s proposed compliance plan.
- Enforcement. Confirms whether information presented in the candidate’s application materials is consistent with its licensing and enforcement history and whether there is any adverse enforcement information on the applicant.
- Item and end use. Analyzes the items requested by the applicant and their appropriateness given the stated end use and the company’s business activities.
- Intelligence. Vets parties to the application through the intelligence community.

Once Commerce’s internal review has been completed, applications are reviewed by an End-User Review Committee (ERC), which is composed of representatives from the Departments of Defense, Energy, and State (generally the same agencies that review export license applications), and other agencies as appropriate. In reviewing an end-user for eligibility, the ERC considers a range of factors, such as the entity’s exclusive engagement in civil end-use activities, its record of compliance with U.S. export controls, its ability to meet the VEU program’s recordkeeping requirements, its relationship with U.S. and foreign companies, and its willingness to host on-site reviews by U.S. government personnel to ensure program compliance.
According to a Commerce official, applicants need to demonstrate that they either have or will have the requisite controls and data collection systems in place to ensure compliance with the terms of the VEU program. Such controls would provide Commerce with a high degree of confidence that on-site reviews will yield useful information. Validated end-users should also have systems in place to demonstrate that items imported under the VEU program are used for civilian purposes. For example, validated end-users may have systems in place that are capable of tracking customer orders so that Commerce can verify customer lists during on-site reviews. Commerce allows VEU applicants the flexibility to determine how they will demonstrate that they are capable of meeting these internal control and recordkeeping requirements. If necessary, the ERC also may request a preapproval visit to assess an entity’s suitability for validated end-user status. According to officials from Commerce, no preapproval visits were conducted for the first five Chinese entities approved for validated end-user status because members of the ERC were already familiar with the companies through their extensive licensing history. A unanimous vote is required by the ERC to approve validated end-user status or add items to an existing authorization. Revocation of an authorization or removal of items from an existing authorization is by majority rule. Disagreements among agencies on the ERC are managed through a dispute resolution process similar to the procedures used for export license applications. For a comparison of the similarities and differences between a license and the VEU authorization, see appendix IV. The individual licensing system and VEU program employ different approaches to ensure that U.S. exports of semiconductor equipment and materials to China are used as intended. 
Under the individual licensing system, Commerce scrutinizes each individual application for the appropriateness of the item and the end-user, often attaches conditions to the license stipulating how an item may be used, and conducts postshipment verification (PSV) checks. Under the VEU program, Commerce ensures that items are used as intended by vetting validated end-users, stipulating conditions to approved entities, and confirming compliance with these conditions through periodic records checks and discretionary on-site reviews. Under the individual licensing system, most licenses for semiconductor manufacturing equipment are issued with conditions that require the recipient to abide by certain requirements. For example, a license authorizing the export of semiconductor manufacturing equipment to a Chinese entity might restrict the equipment to civilian end-uses or prohibit the equipment from being used to produce integrated circuits with components smaller than a certain feature size. Because U.S. exporters are the licensees under this system, they are required to communicate the specific license conditions to Chinese recipients or other parties to whom the conditions may apply. To verify compliance with license conditions and prevent the diversion of equipment and materials to unauthorized end-users, the United States also conducts PSV checks on a case-by-case basis in China. According to Commerce officials, several criteria are used to determine which entities will be subject to PSVs, including information on the exporter and end-user, the sensitivity of the item, and the quantities in question. During PSV checks, Commerce special agents or other U.S. government personnel visit importers or end-users to confirm the use and location of the items listed on the license. 
Commerce’s PSV guidelines require that the agent physically inspect the items or the records that detail their disposition, verify that the items are located at the specified facility, and confirm that the equipment is being used for the purposes stated on the license. Agents also are instructed to document cases where there are indications of impropriety or where the company’s answers to questions are evasive. Before 2004, the United States was able to conduct only a limited number of PSV checks on semiconductor equipment and materials in China because the Chinese government restricted the number and scope of PSV checks that it would allow. In 2004, the United States and China signed an EUVU that established protocols for PSV checks and expanded the number of checks that the United States would be allowed to conduct. Nevertheless, PSV checks on semiconductor equipment and materials in China remain limited. From fiscal years 2002 through 2007, Commerce approved 1,466 licenses for the export of semiconductor equipment and materials to China. Nine hundred three, or about 62 percent, of these licenses contained a condition requiring Commerce to conduct a PSV check. Commerce restricted GAO from reporting the number of PSV checks conducted in China overall and for semiconductor equipment and materials in China, for the purposes of this report. Although this information has previously been available in public sources, Commerce asserted that publicly disclosing this data would give export violators or potential violators, both in the United States and abroad, sensitive information, including information revealing Commerce’s focus within particular countries and on the kinds of items it checks most often.
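The licensing figures quoted above are internally consistent, as a quick arithmetic check shows:

```python
# Consistency check of the licensing statistics: 903 of the 1,466 licenses
# approved in fiscal years 2002-2007 carried a PSV-check condition.
total_licenses = 1466
with_psv_condition = 903

share = with_psv_condition / total_licenses
print(f"{share:.0%}")  # prints "62%", matching "about 62 percent"
```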
In contrast, under the VEU program, Commerce intends to ensure that semiconductor equipment and materials are used as intended by requiring that validated end-users maintain a record of transactions, conducting periodic reviews of recorded exports under the VEU authorization, and undertaking discretionary on-site reviews to verify compliance with the terms of the program. Unlike individual export licenses, validated end-user status is awarded directly to entities in China, and these entities are responsible for meeting the program’s requirements. The EAR requires that all validated end-users legally certify that items obtained under the VEU program will be used only at approved facilities and exclusively for civilian end-uses. Furthermore, the advisory opinions issued by Commerce to each validated end-user stipulate additional requirements that are similar to license conditions. For example, advisory opinions may detail specific recordkeeping, reporting, and customer screening requirements, or restrict the type and technical parameters of semiconductors that can be manufactured using equipment or materials shipped under the authorization. To ensure compliance with the terms of the VEU program as outlined in the EAR and individual advisory opinions, the Commerce official responsible for the VEU program stated that the department is already conducting or plans to conduct three layers of reviews. First, Commerce is conducting mandatory, semiannual reviews of each validated end-user. The information for the review is obtained from a variety of sources including public data, mandatory reporting by the validated end-user, and assessments provided by the intelligence community and BIS’s enforcement unit. Second, Commerce is conducting an additional review of each validated end-user 6 months after it receives its first shipment under the VEU program and every 6 months thereafter.
This review includes the same elements as the first review, but also includes an examination of data obtained from the Census Bureau’s Automated Export System and U.S. exporters on transactions under the VEU authorization. Third, based on the results of these two reviews, the ERC may decide to conduct an on-site review with the validated end-user. According to Commerce, the procedures for on-site reviews will be determined on a case-by-case basis, as dictated by the specific circumstances of the validated end-user. The VEU program has yet to produce the advantages anticipated by Commerce, and challenges with program implementation may limit Commerce’s ability to ensure that items shipped under the program are being used as intended. Commerce has yet to realize trade gains and enhanced national security because few U.S. exporters have taken advantage of the VEU program. Moreover, Commerce has not reached a VEU-specific agreement with the Chinese government for conducting on-site reviews of validated end-users, a key mechanism for ensuring program compliance. Instead, as a stopgap measure, Commerce is attempting to conduct VEU on-site reviews under a 2004 agreement. Commerce also lacks procedures for selecting and conducting on-site reviews, even though it introduced the VEU program in June 2007. Commerce anticipated that one of the advantages of the VEU program would be facilitating trade in controlled items to China by removing licensing requirements for the export of certain items to “trusted” Chinese customers with a history of using exports responsibly. According to Commerce, the program would foster trade by reducing the administrative burden associated with seeking an export license for U.S. exporters and enabling validated end-users to obtain items more easily than their domestic competitors.
Two companies that received validated end-user authorization stated that the program facilitates long-term planning by eliminating some of the uncertainties associated with obtaining an export license from the U.S. government and other administrative requirements. Additionally, the program allows validated end-users to obtain certain equipment from U.S. companies without having to rely on the exporter to obtain a license, according to a validated end-user. Commerce also anticipated that the VEU program would reduce the volume of licenses required for transactions involving known end-users. In turn, this would enable Commerce to dedicate more resources to transactions and end-users that are less well known, and thus enhance security. However, according to the Director of Commerce’s Office of Technology Evaluation, the department has a finite amount of resources and an increasing number of export licenses to process from year to year. Thus, he noted that it is unclear whether a potential reduction in licensing volume resulting from increased shipments under the VEU program would allow Commerce to increase scrutiny over lesser-known end-users or merely enable Commerce to maintain current levels of oversight. In addition, although Commerce anticipated that one of the program’s benefits would be increased scrutiny of lesser-known end-users, the Chairperson of the ERC stated that it was unclear whether increased shipments under the VEU program would coincide with additional PSV checks for end-users that receive items under individual export licenses, given the multiple considerations involved in scheduling and carrying out PSV checks. The advantages of the VEU program anticipated by Commerce have not yet been realized because few U.S. exporters have shipped items to China under the authorization.
Commerce’s ability to realize the anticipated benefits of the VEU program hinges on whether exporters choose to use the VEU authorization rather than an individual license. Recognizing the high volume and dollar value of U.S. exports to certain companies in China, Commerce designed the VEU program with the goal of reducing the number of licenses to these types of companies. For instance, according to Commerce, the first five companies designated as validated end-users accounted for 18 percent of the value of licensed trade with China in 2006. Commerce anticipates that approval of a second set of five companies could increase that figure to 40 percent. However, as of June 2008, only one of the three validated end-users authorized to receive semiconductor equipment and materials had received any items under the program. Furthermore, according to Commerce, since the first validated end-users were authorized in October 2007, approximately 6 percent of the total exports of semiconductor manufacturing equipment to China have taken place under the VEU program, whereas 94 percent were conducted under an export license. In addition, according to the Chairperson of the ERC, since the VEU program was authorized, three licenses were issued for items that could have been shipped under the VEU program. Officials of companies that received validated end-user status offered several reasons for not yet fully using the authorization. One company official stated that the company was upgrading its administrative systems and planned to switch from its Special Comprehensive License to the VEU program in the fall of 2008. Another validated end-user cited a global economic slowdown in its industry as a reason for not taking advantage of the VEU program.
Finally, another company official with validated end-user status noted that some of its suppliers have elected to use existing individual or special comprehensive licenses to avoid the administrative burden and time requirements associated with obtaining an additional End-User Statement from the Chinese government, as recently requested by Commerce. The EAR does not require that validated end-users obtain End-User Statements for shipments received under the authorization. However, in April 2008, Commerce requested that the first five validated end-users seek End-User Statements from the Chinese government to facilitate on-site reviews. Commerce may not be able to ensure that semiconductor equipment and materials exported to China are used as intended because it has not negotiated a VEU-specific agreement with the Chinese government for conducting on-site reviews under the VEU program and lacks specific procedures for carrying out these reviews. On-site reviews are not a mandatory program requirement; rather, they are discretionary based on an assessment of each validated end-user by the ERC. Commerce has stated, though, that on-site reviews are a key mechanism for ensuring that validated end-users comply with the terms of the authorization and that the ability to conduct meaningful on-site reviews will be a critical factor in Commerce’s long-term support of the program. However, Commerce may be limited in its ability to conduct on-site reviews of validated end-users because it has not negotiated a VEU-specific agreement with the Chinese government for conducting these reviews. In October 2007, the Chinese Ministry of Commerce announced that Chinese entities were prohibited from hosting foreign governments, including the U.S. government, for interviews or investigations related to export controls without its permission.
The Chinese government has also asked that Commerce refrain from approving any additional validated end-users until the two sides can agree on terms for conducting on-site reviews of validated end-users. A senior official from the Chinese Ministry of Commerce (MOFCOM) stated to us during a March 2008 meeting that the Ministry wants on-site reviews to be conducted either according to the terms of the 2004 EUVU or under a newly negotiated U.S.-China agreement specific to the VEU program. In the absence of a new agreement specific to the VEU program, Commerce has requested permission to conduct one on-site review pursuant to the terms of the 2004 EUVU as a stopgap measure. However, to conduct on-site reviews under the 2004 agreement, Commerce relies on the voluntary compliance of validated end-users to obtain End-User Statements from MOFCOM, as this requirement was not included in the regulations establishing the VEU program. According to Commerce, Chinese officials were receptive to its request for an on-site review, but MOFCOM and Commerce have agreed to postpone the check under the existing EUVU mechanism, as negotiations on the VEU-specific protocol are still in progress. Commerce’s ability to ensure that items shipped under the VEU program are being used as intended is further limited by a lack of procedures for selecting and conducting the on-site reviews. Commerce officials stated that criteria that could be considered for on-site review selection include the volume of items shipped, the geographic location of the validated end-user, the civil or military utility of the authorized items, foreign or U.S. company ownership, the facility’s licensing history, and intelligence reporting. In April 2008, 10 months after Commerce approved the VEU program, draft procedures for selecting end-users for on-site reviews were disseminated to ERC members. However, as of September 2008, interagency agreement on these procedures had not been reached.
Moreover, Commerce has not developed procedures for conducting on-site reviews. During our field work in March 2008 in China, Commerce’s export control officer in Beijing noted that it was unclear how on-site reviews would be conducted because she was unaware of procedures governing them. Thus, even if the Chinese Ministry of Commerce grants permission to conduct an on-site review, it is unclear how the review would be conducted in the absence of any final procedures. Commerce asserted that it plans to develop on-site review procedures on a case-by-case basis to ensure that each on-site review is tailored to a particular validated end-user. We assessed progress that the Departments of Commerce and Defense have made to address the recommendations from our 2002 report. In 2002, we recommended that Commerce and DOD conduct a foreign availability assessment to determine if semiconductor equipment and materials of comparable quality are available in quantities that would render U.S. export controls on these items ineffective. We also recommended that Commerce and DOD assess the cumulative effects that exports of semiconductor equipment and materials to China have had on the U.S. economy and national security. Although Commerce and DOD have not formally assessed the foreign availability of semiconductor equipment and materials, Commerce has taken some steps to meet the intent of our recommendation by using information on foreign availability to inform export controls and make licensing decisions for these items. Neither Commerce nor DOD has conducted assessments on the cumulative effect of U.S. semiconductor-related exports to China. 
Commerce and DOD have not conducted a formal foreign availability assessment for semiconductor manufacturing equipment and materials, but Commerce does use information on foreign availability obtained from other sources to inform export controls and licensing decisions for these items, and has thus met the intent of our recommendation. Commerce stated that the key reason for not conducting a foreign availability assessment of semiconductor manufacturing equipment and materials is that semiconductor equipment manufacturers have not requested one. Commerce also noted that there is substantial public information on the foreign availability of semiconductor manufacturing equipment and materials in China. In response to our prior recommendation, Commerce noted that, while the EAR allows the U.S. government to initiate a foreign availability assessment, this provision is intended primarily to be used by industry to challenge overly restrictive or ineffective export controls. In recent years, semiconductor equipment manufacturers have not asked Commerce to conduct a foreign availability assessment because the assessments require a significant amount of effort, and previous efforts have not resulted in the decontrol of any equipment. For instance, SEMI indicated that it submitted a number of proposals to decontrol items for the United States to discuss at Wassenaar, but these proposals were unsuccessful. Furthermore, SEMI indicated that the scope of the issue regarding controls over semiconductor manufacturing equipment is too large for the industry to undertake without a serious commitment from the U.S. government. Commerce has taken steps to address our 2002 recommendation, however, by evaluating foreign availability when assessing export controls and making licensing decisions related to semiconductor manufacturing equipment and materials.
Information provided by a number of sources—including technical advisory groups, the public, and reviews of license applications—informs both export controls and licensing decisions. For example, Commerce receives information about foreign availability from its technical advisory committee and uses this information to develop proposals during multilateral discussions and for Commerce Control List reviews. The proposals could result in the addition or elimination of a control. For example, according to Commerce, a number of adjustments or liberalizations to controls for semiconductor manufacturing equipment and materials have occurred as a result of this input. Commerce also seeks information regarding foreign availability from the public. For example, during the development of the China military end-use rule, Commerce originally planned to control 47 items. Commerce published a notice in the Federal Register seeking information, including whether the items it was seeking to control were available in foreign markets. In response to the feedback it received and the analysis it conducted, Commerce reduced the list of items controlled from 47 to 31. Included among the items were several types of lower-level semiconductor manufacturing equipment that Commerce determined were available from sources outside Wassenaar. Commerce also noted that, as China has emerged as a significant semiconductor manufacturer over the past decade, U.S. export control officials have become knowledgeable about the different “players” in China as well as the various sources of supply for controlled semiconductor manufacturing equipment. According to Commerce, officials incorporate this information into the licensing review process. Commerce also stated that a formal foreign availability study would reveal little additional information that Commerce cannot already access through existing sources. We believe that Commerce’s efforts are sufficient to address our 2002 recommendation.
Although Commerce has not conducted a formal foreign availability study to determine whether there are comparable foreign sources of semiconductor equipment and materials, it has used other sources—including technical advisory committees, end-use visits, past licensing history, and the public—to obtain similar information. Provided that Commerce continues to be able to access this information, we do not believe that an additional foreign availability study is necessary at this time. Neither Commerce nor DOD has addressed our recommendation to conduct assessments on the economic and security effects of U.S. semiconductor manufacturing equipment exports to China. In September 2006, Commerce established the Office of Technology Evaluation (OTE) to help gauge the effectiveness of U.S. export controls by conducting studies on the cumulative effects of transfers of certain key technologies, among other studies. In its Fiscal Year 2006 Annual Report, Commerce announced that it planned to conduct two studies related to the semiconductor and semiconductor manufacturing equipment industries. First, Commerce stated that it planned to conduct an industrial base assessment of U.S. integrated circuit design and manufacturing capability. According to the Director of OTE, Commerce has initiated this study, and it is nearing completion. Second, Commerce announced plans to conduct an evaluation of the health and competitiveness of the U.S. industry engaged in developing critical semiconductor manufacturing equipment technology. However, OTE decided not to conduct the evaluation; according to the former Director of the Office of National Security and Technology Transfer Controls, BIS discussed the possible study with representatives from the semiconductor manufacturing equipment industry, and they collectively decided that it was not needed.
We also noted in 2002 that Directive 2040.2 required DOD to “assess annually the total effect of transfers of technology, goods, services, and munitions on U.S. security regardless of the transfer mechanisms involved.” However, the directive had not been updated since July 5, 1985, and no such studies had been completed since the issuance of the original directive. In July 2008, DOD released a revised “instruction” that addresses the need for cumulative effects studies but eliminates both the specific requirement to conduct a study and the annual requirement. The instruction instead calls for the Director of the Defense Intelligence Agency to provide intelligence concerning the total effect of transfers of dual-use and defense-related technology, articles, and services on U.S. security. However, no such studies have been completed to date, and the impact of exports of semiconductor manufacturing equipment on U.S. national security thus remains unclear. U.S. export control policy aims to balance two competing interests—promoting trade to civilian end-users and denying trade in sensitive technologies to end-users engaged in activities detrimental to national security interests. The migration of commercial semiconductor production to China and continued advances in China’s domestic manufacturing capabilities illustrate the challenges inherent in meeting these dual goals. Integrated circuits, for example, are not only inputs to consumer products but are often used in weapons systems. Although the VEU program is aimed at facilitating trade, Commerce built in various “safeguards” to ensure that items are used as intended and not to enhance military capabilities.
Although the companies authorized under the VEU program have a long history of using exports responsibly, Commerce included a requirement for end-users to commit to accepting on-site reviews to ensure that items are used as intended, and it indicated that the ability to conduct these reviews is a critical factor in its long-term support of the program. Commerce, however, established the VEU program in June 2007 and authorized the first five companies in October 2007 without negotiating a VEU-specific agreement or amending the 2004 EUVU with China to conduct the reviews. Additionally, Commerce instituted the program without some basic mechanisms to ensure compliance with program requirements, including criteria for selecting validated end-users for on-site reviews and procedures for conducting them. As a result, Commerce now relies on the EUVU procedures, which require an End-User Statement, burdening validated end-users and hindering trade facilitation. Additionally, if validated end-users do not voluntarily seek an End-User Statement, Commerce may be unable to conduct on-site reviews and therefore will not be able to provide assurance that exported items are being used as intended. To better promote the Validated End-User program’s objectives of trade facilitation and enhanced oversight, the Secretary of Commerce should suspend the Validated End-User program to China until a VEU-specific agreement and procedures are established for on-site reviews. Specifically, Commerce should (1) negotiate a VEU-specific agreement with the Chinese government to conduct on-site reviews or amend the 2004 EUVU to include the Validated End-User program, and (2) develop procedures for conducting on-site reviews that are applicable to all validated end-users. We provided a draft of this report to the Departments of Commerce, Defense, Energy, and State for their review and comment. Commerce provided written comments, which are reprinted in appendix V. Defense and State did not provide comments.
Commerce and Energy’s National Nuclear Security Administration also provided technical comments, which we incorporated as appropriate throughout this report. Commerce disagreed with our recommendations, stating that the report’s premise—that the VEU program has no adequate mechanism to oversee exports of semiconductor equipment to China—is incorrect. Commerce stated that on-site reviews could be conducted under the 2004 EUVU or a VEU-specific addendum to the EUVU, which it is currently negotiating with the Chinese government. Commerce also asserted that procedures for selecting on-site reviews exist, that general procedures for end-use checks are in place, and that specific guidance for on-site reviews must be developed on a case-by-case basis. We have modified the report to acknowledge that Commerce intends to use a stopgap mechanism, the 2004 EUVU, which may enable it to conduct on-site reviews for items exported under the VEU program to China. However, this agreement requires companies to obtain an End-User Statement from the Chinese government. This statement is not required under the VEU program and thus imposes an additional burden on validated end-users, running counter to the trade-facilitating objectives of the program. To achieve the intended benefits of the VEU program, Commerce needs to negotiate a VEU-specific agreement or amend the 2004 EUVU to accommodate the distinct features of the VEU program. We disagree with Commerce’s assertion that it has sufficient procedures for selecting on-site reviews and conducting end-use checks. Commerce has consistently stated that on-site reviews for validated end-users and end-use checks for individual licenses are distinct activities serving different purposes. End-use checks focus on ensuring that an item is being used for the purposes stated in the license, whereas on-site reviews are more comprehensive.
Additionally, the procedures for selecting validated end-users for on-site reviews remained in draft form as of September 2008 and had not been cleared through the interagency process, contrary to what Commerce implied in its comments. Commerce would not provide us with a copy of these draft procedures. Finally, we agree that Commerce needs additional case-by-case guidance for on-site reviews to ensure that each review is tailored to the particular validated end-user. However, the department also needs general procedures to ensure that on-site reviews are conducted in a consistent manner. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8979 or christoffj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. This report discusses the (1) evolution of China’s semiconductor manufacturing capabilities since 2002, (2) changes to U.S. export control policies over the sale of semiconductor manufacturing equipment and materials to China since 2002, and (3) the advantages and limitations of these changes to U.S. export controls. In addition, this report describes progress the Departments of Commerce and Defense have made to address our prior recommendations. To describe how China’s semiconductor manufacturing capabilities have evolved since 2002, we reviewed available literature; interviewed government officials, industry representatives, and academics; and reviewed information from the U.S.
and Chinese semiconductor and semiconductor equipment and materials industries. We also traveled to China and met with companies involved in manufacturing semiconductors. We met with officials from the U.S. Departments of Commerce (Commerce), Defense (DOD), Energy (DOE), and State (State), as well as members of the intelligence community. We also visited three DOE National Laboratories: Lawrence Livermore in Livermore, California; Kansas City in Kansas City, Missouri; and Sandia in Albuquerque, New Mexico. We interviewed representatives from two U.S. semiconductor manufacturing companies—IBM and Intel. We met with officials from Semiconductor Equipment and Materials International (SEMI) and the Semiconductor Industry Association—the trade associations representing each industry—in San Jose, California; Washington, D.C.; and Shanghai, China. We also interviewed academics with expertise in China’s semiconductor industry and semiconductor manufacturing at the University of Maryland’s Center for Advanced Life Cycle Engineering and the University of California, Berkeley’s Microfabrication Laboratory. We obtained trade reports on China’s semiconductor and semiconductor manufacturing equipment industries from SEMI and met with the reports’ authors in Shanghai, China, to discuss their methods and findings. We determined that the data collected and analyses conducted were sufficiently reliable for our use in this report. In addition, we visited three companies—Applied Materials China, Hua Hong NEC, and Semiconductor Manufacturing International Corporation—that are involved in manufacturing semiconductors in China and have received semiconductor manufacturing equipment and materials from U.S. exporters under export licenses and the Validated End-User authorization. We also attended SEMICON China—a semiconductor industry trade show with more than 900 U.S., Chinese, and other foreign companies represented.
During SEMICON China, we met with an indigenous Chinese semiconductor equipment manufacturer, Advanced Micro-Fabrication Equipment, Inc., and obtained information from two other indigenous equipment manufacturers—Beijing Seven Star HuaChuang Electronics Company, Limited, and North Microelectronics Company, Limited. To describe changes to U.S. policies and practices for the export of semiconductor equipment and materials to China since 2002, we reviewed the relevant statutes, regulations, and Presidential and DOD directives and instructions pertaining to export controls on China, and we interviewed officials from the Departments of Commerce, DOD, DOE, and State. We also interviewed representatives of semiconductor and semiconductor equipment companies—including Applied Materials, IBM, and Intel in the United States, and Applied Materials China, Hua Hong NEC, and Semiconductor Manufacturing International Corporation in China—that received controlled items under U.S. export licenses and the Validated End-User authorization. We analyzed export licensing data provided by Commerce to describe the number of licenses approved for semiconductor equipment and materials to China, as well as the number of licenses containing the requirement to conduct postshipment verification (PSV) checks, from fiscal years 2002 through 2007. We determined that these data were sufficiently reliable for the purposes for which they are presented in this report. We also reviewed reports provided by Commerce on the number and outcomes of end-use checks, including prelicense and PSV checks in China. Although Commerce provided GAO with data on end-use checks, it restricted us from publicly reporting the number and outcomes of PSV checks conducted in China on shipments of semiconductor equipment and materials.
According to Commerce, publicly disclosing these data would give export violators or potential violators, both in the United States and abroad, sensitive information, including information revealing the focus of Commerce’s Bureau of Industry and Security (BIS) within particular countries and on the kinds of items BIS checks most often. To assess the advantages and limitations associated with changes to U.S. export control policies and practices, we reviewed the regulations, guidelines, and procedures governing export licenses and the VEU program. We also interviewed U.S. government officials in the United States and China, including Commerce’s export control officers in Beijing and Hong Kong responsible for conducting end-use checks. We interviewed representatives from Applied Materials China, Hua Hong NEC, and Semiconductor Manufacturing International Corporation, three of the five companies that received the validated end-user authorization and the only entities that are permitted to receive semiconductor equipment and materials under the authorization. Additionally, we met with and interviewed officials from China’s Ministry of Commerce in Beijing, China. Finally, to determine whether DOD and Commerce addressed our 2002 recommendations to conduct assessments related to foreign availability and the cumulative effects of semiconductor manufacturing equipment exports on U.S. national security, we interviewed officials from both agencies and reviewed regulations, directives, and an instruction, as well as documentation related to conducting these activities. We also discussed the topic of foreign availability with industry representatives, including SEMI, Applied Materials, and Intel, to ascertain whether foreign availability continues to be a concern, as it was in 2002. We conducted this performance audit from October 2007 to September 2008, in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

[Appendix table: semiconductor manufacturing equipment with military applications. The table lists equipment types, including ion implantation equipment (used for radiation-hardened circuitry and state-of-the-art ...) and imprint lithography systems. Each type of equipment is noted as needed for all state-of-the-art commercial and military electronics and as enabling the production of controlled analog-to-digital converters (ADCs), field programmable logic devices (FPLDs), and application specific circuits (ASICs). These end items are controlled under the Export Administration Regulations in ECCN 3A001 and ECCN 3A101, or under the International Traffic in Arms Regulations Category XI. Sources of supply: Europe, Japan, United States. See Commerce Control List, Category Three (Electronics), 15 C.F.R. Supplement No. 1 to Part 774. MOCVD: metal organic chemical vapor deposition reactors. MBE: molecular beam epitaxy equipment.]

Comparison of the Individual Validated License (IVL), the Special Comprehensive License (SCL), and the Validated End-User (VEU) authorization:

Definition. IVL: An export license authorizing a transaction or series of transactions between a single exporter and recipient. SCL: An export license authorizing multiple exports and re-exports between a single exporter and recipient, without review and approval of each transaction. VEU: An authorization that permits the export, re-export, and transfer to validated end-users of any eligible items that will be used in a specific, eligible destination.

Period of validity. IVL: Generally 2 years. SCL: 4 years; may be extended for an additional 4 years. VEU: Authorizations are valid in perpetuity unless revoked.

Review process. IVL: Representatives of the Departments of Commerce, Defense, State, and Energy review each license unless a reviewing agency has delegated its reviewing authority back to Commerce. SCL: Same as IVL. VEU: A committee of representatives from the Departments of State, Defense, Energy, Commerce, and other agencies, as appropriate, approves the VEU authorization.

Dispute resolution process. IVL: Yes; any agency that disagrees with a licensing decision may escalate a case. SCL: Yes; any agency that disagrees with a licensing decision may escalate a case. VEU: Yes; any agency that disagrees with a VEU authorization decision may escalate a case.

Factors considered. IVL: For items controlled for national security reasons, type and quantity of the item; intended end-use of the item (military or civilian); foreign availability; and destination country. SCL: Proposed end-use and end-users; past licensing history; evidence of continuous large volume of exports; and compliance with U.S. export controls. VEU: Involvement only in civilian activities; previous compliance with U.S. export controls; agreement to a “preapproval” visit and on-site reviews; and ability to comply with program requirements.

Conditions. IVL: Commerce may place conditions on the use of an IVL. SCL: Commerce may place conditions on the use of an SCL. VEU: Commerce may specify conditions on the use of the VEU authorization.

Internal control program. IVL: A formal internal control program is not required. SCL: Internal control programs are required for the exporter and consignee. VEU: Although a formal internal control plan is not required, the applicant must describe the system that is in place to ensure compliance with VEU requirements.

Record keeping. IVL: IVL holders generally are required to retain all records supporting their license applications for 5 years. SCL: SCLs include record-keeping requirements for SCL holders and consignees. VEU: Exporters and validated end-users are required to retain records but do not have reporting requirements unless otherwise specified by Commerce.

Reporting. IVL: IVL holders have no reporting requirements unless imposed by a particular license; however, the records of their exports are subject to inspection at Commerce’s request. SCL: SCL holders must report semi-annually exports of certain items controlled multilaterally under the Wassenaar Arrangement; in addition, the records of their exports are subject to inspection at Commerce’s request. VEU: Re-exporters are required to submit semi-annual reports to Commerce, and exports under the VEU program of certain items controlled multilaterally under the Wassenaar Arrangement must be reported semi-annually.

End-use visits. IVL: Yes; end-use visits may be conducted under a 2004 agreement between the United States and China. SCL: Yes; end-use visits may be conducted under a 2004 agreement between the United States and China. VEU: Validated end-users agree to host on-site reviews. Commerce plans to conduct these reviews under a 2004 agreement between the United States and China until an addendum to this agreement or a new, VEU-specific agreement is reached.

Note: Not all factors considered in granting a license or validated end-user status are included here. Among the other factors considered in granting a license or VEU status are those included in 15 C.F.R. § 742.4(b)() (licenses for national security items), 15 C.F.R. § 75.2(d) (Special Comprehensive Licenses), and 15 C.F.R. § 74.15(a)(2) (Validated End-Users).
The following are GAO’s comments on the Department of Commerce’s letter dated September 5, 2008. 1. We have modified our draft report to indicate that Commerce intends to use the 2004 End Use Visit Understanding (EUVU) as a stopgap measure to conduct on-site reviews under the VEU program. However, as we note in the following comment, this stopgap measure imposes an additional burden on VEU-authorized companies. Moreover, the Chinese government has not always agreed with this approach. In 2007, China’s Ministry of Commerce issued a decree prohibiting Chinese entities from accepting on-site reviews conducted by foreign government personnel without its permission. The Chinese government also requested that the United States refrain from approving any new validated end-users until the two countries agreed on the terms for conducting these reviews. 2. We understand that the intent of the VEU program is to enhance and facilitate trade between the United States and China. Our report notes that the VEU program would foster trade by reducing the administrative burden associated with seeking an export license for U.S. exporters and enabling VEU-authorized entities to obtain items more easily than their domestic competitors. Commerce asserted that it can use the EUVU procedures for inspecting shipments made under the VEU program, but it can only do so by requesting validated end- users to voluntarily obtain End-User Statements from the Chinese government. Such statements are required for all exports of controlled items to China under individual export licenses that exceed $50,000. However, these statements were not required under the VEU program and impose an additional burden on VEU-authorized companies. Commerce notes that the procedures for obtaining these statements are cumbersome and conflict with the trade facilitating objective of the VEU program. 
Until VEU negotiations with the Chinese government are completed, the trade-enhancing benefits of the program may not be realized. 3. We disagree with Commerce's assertion that general procedures for selecting and conducting on-site reviews do exist. First, as noted in this report, the procedures for selecting which validated end-users will receive on-site reviews are still in draft form as of September 2008 and have not been cleared by the interagency process. Commerce would not provide us with a copy of these draft procedures. Second, Commerce's general procedures for conducting end-use checks are not specific to the VEU program. Instead, they were designed for pre-license and postshipment verification checks of items shipped under individual export licenses. End-use checks focus on ensuring that an item is being used for the purposes stated in the license, whereas on-site reviews are more comprehensive. In the course of our work, Commerce repeatedly asserted that end-use checks for individual licenses and on-site reviews under the VEU program are distinct activities that serve different purposes. Finally, we agree that Commerce needs additional case-by-case guidance for on-site reviews to ensure that the review is tailored to a particular validated end-user. However, the department also needs general procedures to ensure that on-site reviews are conducted in a consistent manner. 4. Commerce's comment is perplexing since the department appears to be contending that semiconductors do not provide the United States with a strategic military advantage. As evidence, Commerce notes that semiconductors are included on the Wassenaar Arrangement Basic List, rather than its Sensitive or Very Sensitive List. However, Commerce understates the military significance of items on Wassenaar's Basic List.
According to Wassenaar’s Basic List criteria, items to be controlled are those which are “major or key elements for the indigenous production, use, or enhancement of military capabilities.” Furthermore, semiconductor manufacturing equipment is not only controlled on the Basic List. One of the first validated end-users was authorized to receive metal organic chemical vapor deposition reactors (MOCVD), an item included on Wassenaar’s Sensitive List, under the VEU program in China. This equipment may be used to produce radiation-hardened electronics, for use in commercial and military applications. 5. Commerce noted that we needed to make clear that the report describes the capabilities of companies in China rather than those of the government. We agree and have made changes to the report to reflect this distinction. 6. We have revised the report to clarify that, before the introduction of the VEU program in 2007, export licenses provided the only mechanism by which U.S. companies could ship most advanced semiconductor manufacturing equipment and materials to China. We also note that since the introduction of the VEU program, the majority of semiconductor manufacturing equipment exported to China continues to be made under individual export licenses. We added export data to the report showing that, according to Commerce, during the first 9 months of the VEU program, 94 percent of the total exports of semiconductor manufacturing equipment to China were approved under individual or special comprehensive licenses while 6 percent were authorized under the VEU program. 7. We used the term “trusted” entities because Commerce officials used the same language in discussions with us to describe companies that would be approved as validated end-users. Moreover, Commerce has used the ‘trusted” term in public statements describing the program, including as recently as April, 2008, in testimony before Congress. 8. 
We have revised our reference to Chinese companies and now refer to these companies as companies or entities in China. In addition to the contact named above, Anthony Moran, Assistant Director; Nabajyoti Barkakati; Lynn Cothern; Julie Hirshen; Drew Lindsey; Grace Lui; and Mark Speight made key contributions to this report. David Dornisch, Etana Finkler, and Minette Richardson also provided assistance.

Semiconductors are key components in weapons systems and consumer electronics. Since semiconductors have both civilian and military applications, U.S. export control policy treats the equipment and materials used to manufacture semiconductors as "dual-use" items and controls the export of these items to sensitive destinations such as China through licensing requirements. You requested that we update our 2002 report on China's semiconductor manufacturing capabilities to address (1) the evolution of China's capabilities since 2002, (2) changes to U.S. export control policies over the sale of semiconductor manufacturing equipment and materials to China since 2002, and (3) the advantages and limitations of these changes. The gap between U.S. and Chinese commercial semiconductor manufacturing capabilities, as measured by the feature size of the semiconductors produced, rapidly narrowed between 1994 and 2002. Since 2002, China's semiconductor manufacturing capabilities have continued to advance but remain one generation behind state-of-the-art semiconductors produced in the United States. China's most advanced semiconductor manufacturing companies continue to rely on equipment and materials from the United States, Europe, and Japan to improve their manufacturing capabilities. However, China has developed an indigenous capacity to build some types of advanced semiconductor manufacturing equipment, which may soon provide companies in China with a domestic source of equipment capable of producing semiconductors that are close to state of the art. Since 2002, U.S.
export control policies over semiconductor equipment and materials to China have become more "end-user" focused, with the introduction of the Validated End-User (VEU) program, a parallel licensing framework that allows select pre-screened Chinese end-users to receive controlled items, including some semiconductor equipment and materials, without a license. The Department of Commerce anticipated that the VEU program would facilitate trade to China and enhance U.S. security; however, challenges with program implementation may limit Commerce's ability to ensure items are being used as intended. Specifically, Commerce has not reached a VEU-specific agreement with the Chinese government for conducting on-site reviews of validated end-users, a mechanism cited by Commerce as critical for ensuring program compliance. Instead, as a stopgap measure, Commerce is attempting to conduct VEU on-site reviews under a 2004 agreement. In addition, Commerce lacks procedures for conducting on-site reviews, even though the validated end-user program was introduced in June 2007.
VA provides medical services to various veteran populations—including an aging veteran population and a growing number of younger veterans returning from the military operations in Afghanistan and Iraq. VA operates approximately 170 VA medical centers (VAMCs), 130 nursing homes, and 1,000 outpatient sites of care. In general, veterans must enroll in VA health care to receive VA's medical benefits package—a set of services that includes a full range of hospital and outpatient services, prescription drugs, and long-term care services provided in veterans' own homes and in other locations in the community. The majority of veterans enrolled in the VA health care system typically receive care in VAMCs and community-based outpatient clinics, but VA may also authorize care through community providers to meet the needs of the veterans it serves. For example, VA may provide care through its Care in the Community (CIC) programs, such as when a VA facility is unable to provide certain specialty care services, like cardiology or orthopedics. CIC services must generally be authorized by a VAMC provider prior to a veteran receiving care. In addition to its longstanding CIC programs, VA may also authorize veterans to receive care from community providers through the Veterans Choice Program, a new CIC program that was established through the Veterans Access, Choice, and Accountability Act of 2014 (Choice Act), enacted on August 7, 2014. Implemented in fiscal year 2015, the program generally provides veterans with access to care by non-VA providers when a VA facility cannot provide an appointment within 30 days or when veterans reside more than 40 miles from the nearest VA facility. The Veterans Choice Program is primarily administered using contractors, who, among other things, are responsible for establishing nationwide provider networks, scheduling appointments for veterans, and paying providers for their services.
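The program's two basic eligibility triggers described above (a wait of more than 30 days for a VA appointment, or residence more than 40 miles from the nearest VA facility) amount to a simple disjunction. A minimal sketch follows; the function and parameter names are hypothetical, and the statute and VA regulations contain additional criteria and exceptions not modeled here.

```python
from datetime import date, timedelta

# Basic Veterans Choice Program triggers (simplified illustration only).
WAIT_LIMIT = timedelta(days=30)   # VA cannot provide an appointment within 30 days
DISTANCE_LIMIT_MILES = 40.0       # veteran resides more than 40 miles from nearest VA facility

def choice_program_eligible(request_date: date,
                            earliest_va_appointment: date,
                            miles_to_nearest_facility: float) -> bool:
    """Return True if either basic eligibility trigger is met."""
    waits_too_long = (earliest_va_appointment - request_date) > WAIT_LIMIT
    lives_too_far = miles_to_nearest_facility > DISTANCE_LIMIT_MILES
    return waits_too_long or lives_too_far
```

For example, a veteran offered an appointment 45 days out would qualify on the wait-time trigger alone, regardless of distance from a VA facility.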
The Choice Act also created a separate account, known as the Veterans Choice Fund, which can only be used to pay for VA obligations incurred for the Veterans Choice Program. The use of Choice funds for any other program requires legislative action. The Choice Act appropriated $10 billion to be deposited in the Veterans Choice Fund. Amounts deposited in the Veterans Choice Fund are available until expended and are available for activities authorized under the Veterans Choice Program. However, the Veterans Choice Program activities are only authorized through August 7, 2017 or until the funds in the Veterans Choice Fund are exhausted, whichever occurs first. As part of the President’s request for funding to provide medical services to veterans, VA develops an annual estimate detailing the amount of services the agency expects to provide as well as the estimated cost of providing those services. VA uses the Enrollee Health Care Projection Model (EHCPM) to develop most elements of the department’s budget estimate to meet the expected demand for VA medical services. Like many other agencies, VA begins to develop these estimates approximately 18 months before the start of the fiscal year for which the funds are provided. Unlike many agencies, VA’s Veterans Health Administration receives advance appropriations for health care in addition to annual appropriations. VA’s EHCPM makes these projections 3 or 4 years into the future for budget purposes based on data from the most recent fiscal year. In 2012, for example, VA used actual fiscal year 2011 data to develop the budget estimate for fiscal year 2014 and for the advance appropriations estimate for fiscal year 2015. Similarly, in 2013, VA used actual fiscal year 2012 data to update the budget estimate for fiscal year 2015 and develop the advance appropriations estimate for fiscal year 2016. 
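The projection cycle in the examples above follows a fixed offset pattern: estimates developed in year Y rest on actual data from fiscal year Y-1 and support the budget estimate for fiscal year Y+2 and the advance appropriations estimate for Y+3. A small sketch of that mapping (hypothetical function; VA does not publish its process in this form):

```python
def ehcpm_cycle(development_year: int) -> dict:
    """Map a budget-development year to its data year and target fiscal years,
    following the offsets in the examples (e.g., 2012 work used fiscal year
    2011 data for the fiscal year 2014 budget and 2015 advance estimate)."""
    return {
        "data_year": development_year - 1,
        "budget_estimate_year": development_year + 2,
        "advance_appropriations_year": development_year + 3,
    }
```

The roughly 18-month lead time between budget development and the start of the target fiscal year is what leaves room for the uncertainties discussed next.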
Given this process, VA’s budget estimates are prepared in the context of uncertainties about the future—not only about program needs, but also about future economic conditions, presidential policies, and congressional actions that may affect the funding needs in the year for which the estimate is made—which is similar to the budgeting practices of other federal agencies. Further, VA’s budget estimates are typically revised during the budget formulation process to incorporate legislative and department priorities as well as to respond to successively higher levels of review in VA and OMB. Each year, Congress provides funding for VA health care primarily through the following appropriation accounts: Medical Support and Compliance, which funds, among other things, the administration of the medical, hospital, nursing home, domiciliary, construction, supply, and research activities authorized under VA’s health care system. Medical Facilities, which funds, among other things, the operation and maintenance of the Veterans Health Administration’s capital infrastructure, such as the costs associated with nonrecurring maintenance, utilities, facility repair, laundry services, and groundskeeping. Medical Services, which funds, among other things, health care services provided to eligible veterans and beneficiaries in VA’s medical centers, outpatient clinic facilities, contract hospitals, state homes, and CIC services. With the exception of the Veterans Choice Program, which is funded through the Veterans Choice Fund, medical services furnished by community providers have been, and will continue to be, funded through this appropriation account through fiscal year 2016. 
Starting in fiscal year 2017 and thereafter, with the exception of the Veterans Choice Program, it is anticipated that Congress will fund medical services that VA authorizes veterans to receive from community providers through a new appropriations account—Medical Community Care—which the VA Budget and Choice Improvement Act requires VA to include in its annual budget submission. Higher-than-expected obligations identified by VA in April 2015 for VA's CIC programs accounted for $2.34 billion (or 85 percent) of VA's projected funding gap of $2.75 billion in fiscal year 2015. These higher-than-expected obligations for VA's CIC programs were driven by an increase in utilization of VA medical services across VA, reflecting, in part, VA's efforts to improve access to care after public disclosure of long wait times at VAMCs. VA officials expected that the Veterans Choice Program would absorb much of the increased demand from veterans for health care services delivered by non-VA providers. However, veterans' utilization of Veterans Choice Program services was much lower than expected in fiscal year 2015. VA had estimated that obligations for the Veterans Choice Program in fiscal year 2015 would be $3.2 billion, but actual obligations totaled only $413 million. According to VA officials, the lower-than-expected utilization of the Veterans Choice Program in fiscal year 2015 was due, in part, to administrative weaknesses in the program, such as provider networks that had not been fully established and VAMC staff who lacked guidance on when to refer veterans to the program, both of which slowed enrollment in the program. Instead of relying on the Veterans Choice Program, VA provided a greater amount of services through its CIC programs, resulting in total obligations of $10.2 billion in fiscal year 2015, which VA officials stated were much higher than expected.
The unexpected increase in CIC obligations in fiscal year 2015 exposed weaknesses in VA's ability to estimate costs for CIC services and track associated obligations. While VA officials first became concerned in January 2015 that CIC obligations might be significantly higher than projected, they did not determine that VA faced a projected funding gap until April 2015—6 months into the fiscal year. VA officials made this determination after they compared authorizations in the Fee Basis Claims System (FBCS)—VA's system for recording CIC authorizations and estimating costs for this care—with obligations in the Financial Management System (FMS)—the centralized financial management system VA uses to track all of its obligations, including those for medical services. In its 2015 Agency Financial Report (AFR), VA's independent public auditor identified the following issues as contributing to a material weakness in estimating costs for CIC services and tracking CIC obligations:

- VAMCs individually estimate costs for each CIC authorization and record these estimates in FBCS. This approach leads to inconsistencies because each VAMC may use different methodologies to estimate the costs they record. Having more accurate cost estimates for CIC authorizations is important to help ensure that VA is aware of the amount of money it must obligate for CIC services.
- VAMCs do not consistently adjust the estimated costs associated with authorizations for CIC services in FBCS in a timely manner to ensure greater accuracy, and they do not perform a "look-back" analysis of historical obligations to validate the reasonableness of estimated costs. Furthermore, VA does not perform centralized, consolidated, and consistent monitoring of CIC authorizations.
- FBCS is not fully integrated with FMS, VA's system for recording and tracking the department's obligations. As a result, the obligations for CIC services recorded in the former system may not match the obligations recorded in the latter.
Notably, the estimated costs of CIC authorizations recorded in FBCS are not automatically transmitted to VA's Integrated Funds Distribution, Control Point Activity, Accounting, and Procurement (IFCAP) system, a procurement and accounting system used to send budgetary information, such as information on obligations, to FMS. According to VA officials, because FBCS and IFCAP are not integrated, at the beginning of each month, VAMC staff typically record in IFCAP estimated obligations for outpatient CIC services, and they typically use historical obligations to make these estimates. Depending on the VAMC, these estimated obligations may be entered as a single lump sum covering all outpatient care or as separate estimated obligations for each category of outpatient care, such as radiology. Regardless of how they are recorded, the estimated obligations recorded in IFCAP are often inconsistent with the estimated costs of CIC authorizations recorded in FBCS. In fiscal year 2015, the estimated obligations that VAMCs recorded in IFCAP were significantly lower than the estimated costs of outpatient CIC authorizations recorded in FBCS. VA officials told us that they did not determine a projected funding gap until April 2015 because they did not complete their analysis comparing estimated obligations with estimated costs until then. A key factor contributing to the weaknesses identified in VA's AFR was the absence of standard policies across VA for estimating and monitoring the amount of obligations associated with authorized CIC services. Specifically, in fiscal year 2015, the Chief Business Office within the Veterans Health Administration had not developed and implemented standardized and comprehensive policies for VAMCs, Veterans Integrated Service Networks (VISNs), and the office itself to follow when estimating costs for CIC authorizations and for monitoring these obligations.
The AFR and VA officials we interviewed explained that because oversight of the CIC programs was consolidated under the Chief Business Office in fiscal year 2015 pursuant to the Choice Act, this office did not have adequate time to implement efficient and effective procedures for monitoring CIC obligations. To address the fiscal year 2015 projected funding gap, on July 31, 2015, VA obtained temporary authority to use up to $3.3 billion in Veterans Choice Program appropriations for amounts obligated for medical services from non-VA providers—regardless of whether the obligations were authorized under the Veterans Choice Program or CIC—for the period from May 1, 2015 until October 1, 2015. Table 1 shows the sequence of events that led to VA’s request for and approval of additional budget authority for fiscal year 2015. Unexpected obligations for new hepatitis C drugs accounted for $0.41 billion of VA’s projected funding gap of $2.75 billion in fiscal year 2015. Although VA estimated that obligations in this category would be $0.7 billion that year, actual obligations totaled about $1.2 billion. VA officials told us that VA did not anticipate in its budget the obligations for new hepatitis C drugs—which help cure the disease—because the drugs were not approved by the Food and Drug Administration until fiscal year 2014, after VA had already developed its budget estimate for fiscal year 2015. According to VA, the new drugs cost between $25,000 and $124,000 per treatment regimen, and demand for the treatment was high. Officials told us that about 30,000 veterans received these drugs in fiscal year 2015. In October 2014, VA reprogrammed $0.7 billion within its medical services appropriation account to cover projected obligations for the new hepatitis C drugs, after VA became aware of the drugs’ approval. 
However, in January 2015, VA officials recognized that obligations for the new hepatitis C drugs would be significantly higher than expected by year’s end, due to higher-than-expected demand for the drugs. VA officials told us that they assessed next steps and then limited access to the drugs to those veterans with the most severe cases of hepatitis C. In June 2015, VA requested statutory authority to use amounts from the Veterans Choice Fund to address the projected funding gap. To help prevent future funding gaps, VA has made efforts to improve its cost estimates for CIC services and the department’s tracking of associated obligations. VA has also taken steps to more accurately estimate future utilization of VA health care services, though uncertainties about utilization of VA health care services and emerging treatments remain. Faced with a projected funding gap in fiscal year 2015, VA made efforts to improve its cost estimates for CIC services as well as the department’s tracking of associated obligations. First, in August 2015, VA issued a policy to VAMCs for recording estimated costs for inpatient and outpatient CIC authorizations in FBCS. This policy, among other things, stipulates that VAMCs are to base estimated costs on historical cost data provided by VA. These data, which represent average historical costs for a range of procedures, are intended to help improve the accuracy of VAMCs’ cost estimates. To help implement this policy, in December 2015 VA updated its FBCS software so that the system automatically generates estimated costs for CIC authorizations based on historical CIC claims data. As a result, in many cases, VAMC staff will no longer need to individually estimate costs using various methods and manually record these estimates in FBCS. Officials we interviewed at six selected VISNs shortly after the implementation of the software update told us that the update sometimes produces inaccurate cost estimates or no cost estimates at all. 
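The automated estimates described above are, in essence, averages over historical paid claims keyed by procedure code, so a code with little or no claims history behind it yields no reliable estimate. A rough sketch of that mechanism follows; the names and data are hypothetical, and VA's actual FBCS logic is not public.

```python
from collections import defaultdict
from statistics import mean
from typing import Optional

def build_cost_lookup(paid_claims, min_claims=3):
    """Average historical paid-claim amounts by procedure code.

    Codes backed by fewer than min_claims paid claims get no entry,
    mirroring the gaps that can appear when a newly adopted
    classification system has little claims history behind it."""
    by_code = defaultdict(list)
    for code, amount in paid_claims:
        by_code[code].append(amount)
    return {code: mean(amounts)
            for code, amounts in by_code.items()
            if len(amounts) >= min_claims}

def estimate_authorization_cost(code, lookup) -> Optional[float]:
    """Return the historical-average estimate, or None when data are too sparse."""
    return lookup.get(code)
```

Under a scheme like this, an authorization coded under a newly adopted system would initially return None (no estimate at all) until enough claims under the new codes are paid.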
VA officials told us that the problems affecting the software update were largely due to VA's adoption of a revised medical classification system in October 2015. The change in the classification system meant that there were relatively few paid claims with the new codes to inform FBCS's automated cost estimates for CIC services. VA officials told us they anticipate this problem diminishing throughout fiscal year 2016 as more CIC claims using the new codes are paid and as the amount of data used to inform the cost estimates increases. Second, in November 2015, VA issued a policy requiring VAMCs to systematically review and correct potentially inaccurate estimated costs for CIC authorizations recorded in FBCS, a step that was not previously required. VA officials told us this policy was created to detect and correct obvious errors in the cost estimates, such as data entry errors that fall outside the range of reasonable cost estimates. Additionally, this policy requires VISNs to certify monthly to VA's Chief Business Office that the appropriate review and corrective actions have been completed. We found that all six VISNs certified that they had implemented this policy. Third, in November 2015, VA issued a policy requiring VAMCs to identify any discrepancies between the estimated costs for CIC authorizations recorded in FBCS and the amount of estimated obligations recorded in FMS. VA's policy also requires VAMCs to correct discrepancies they identify—such as increasing unreasonably low estimated obligations to make the estimates more accurate—and document the corrections they make. This policy also requires VISNs to certify monthly to VA's Chief Business Office that the appropriate review and corrective actions have been taken and appropriately documented. As we previously stated, in part because FBCS is not fully integrated with FMS, VA officials concluded this policy was necessary to detect and address discrepancies between the two systems.
According to VA officials, if estimated costs for CIC authorizations recorded in FBCS are higher than estimated obligations recorded in FMS, VA may be at risk of being unable to pay for authorized care. Alternatively, if estimated costs for CIC authorizations recorded in FBCS are lower than estimated obligations recorded in FMS, VA may be dedicating more resources than needed for this care. While we found that all six selected VISNs and the VAMCs they manage certified that they had implemented this new policy, the methods used to identify and correct discrepancies between estimated costs for CIC authorizations in FBCS and the amount of estimated obligations in FMS varied. Moreover, in some cases, we found that discrepancies VAMCs identified and associated corrections were not documented or that documentation lacked specificity, making it difficult to determine whether appropriate corrections were made. To achieve greater consistency in how VAMCs implement this new policy, VA officials reviewed VAMCs' reports and, in February 2016, provided VISNs and VAMCs with additional guidance and best practices for identifying discrepancies and documenting corrections. For example, VA instructed VAMCs to be as specific as possible in documenting corrections they make to the estimated obligations. VA officials also told us that they are developing additional guidance that would define an acceptable level of variation between estimated costs for CIC authorizations and the amount of estimated obligations in FMS. This guidance, once implemented, would require that VAMCs ensure that estimated costs and estimated obligations are no more than $50,000 or 10 percent apart, whichever is less. Finally, to help ensure that VAMCs' obligations for CIC do not exceed available budgetary resources for fiscal year 2016, VA allocated funds specifically for CIC to each VAMC.
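The acceptable-variance test described above (estimated costs and estimated obligations no more than $50,000 or 10 percent apart, whichever is less) can be sketched as a simple reconciliation check. The names are illustrative, the guidance had not been finalized at the time of our review, and we assume here that the 10 percent is measured against the FBCS estimate.

```python
ABSOLUTE_CAP = 50_000.0  # dollars
RELATIVE_CAP = 0.10      # 10 percent, assumed relative to the FBCS estimate

def within_tolerance(fbcs_cost: float, fms_obligation: float) -> bool:
    """True if the FBCS/FMS gap is within the lesser of $50,000 or 10 percent."""
    gap = abs(fbcs_cost - fms_obligation)
    limit = min(ABSOLUTE_CAP, RELATIVE_CAP * fbcs_cost)
    return gap <= limit

def flag_discrepancies(records):
    """Return (authorization_id, gap) pairs that need review and correction."""
    return [(auth_id, abs(cost - obligation))
            for auth_id, cost, obligation in records
            if not within_tolerance(cost, obligation)]
```

For a $1 million authorization the binding limit is the $50,000 cap; for a $100,000 authorization it is the 10 percent (here, $10,000) threshold.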
VA officials, including some VISN officials we interviewed, told us that they identify VAMCs that may be at risk of exhausting their funds before the end of the fiscal year by reviewing monthly reports comparing each VAMC's obligations for CIC to the amount of funds allocated for that purpose to the VAMC. Officials from the Office of Finance within the Veterans Health Administration told us that once a VAMC had obligated all of its CIC funds, it would have to request realignment of funds from other VA programs, assuming additional funds could be made available. VA would, in turn, evaluate the validity of a VAMC's request. VA is employing a similar process to track VAMCs' use of funds for hepatitis C drugs. Officials told us that these steps are intended to reduce the risk of VAMCs obligating more funds than VA's budgetary resources allow. Despite these efforts, VA still faces challenges accurately estimating CIC costs and tracking associated obligations, in large part because of the uncertainty inherent in predicting the CIC services veterans will actually receive. According to VA Chief Business Office and VISN officials, a single authorization may allow for multiple episodes of care, such as up to 10 visits to a physical therapist. Alternatively, a veteran may choose not to seek the care that was authorized. Furthermore, system deficiencies complicate both the development of accurate CIC cost estimates and the tracking of related obligations. Chief Business Office and VISN officials told us that due to systems limitations, costs for inpatient CIC authorizations are estimated in FBCS based on a veteran's diagnosis at the time the care is authorized and cannot be adjusted if a veteran's diagnosis—and associated treatment plan—changes.
For example, a veteran may be authorized to obtain inpatient care to treat fatigue and nausea, but may be subsequently diagnosed as having a heart attack and receive costly surgery that was not included in the cost estimate. Chief Business Office officials told us that while the cost estimate cannot be adjusted in FBCS, VAMC officials should adjust the estimated obligation that corresponds to the authorization in IFCAP to reflect the cost difference; they should also document why they made the adjustment. To better align cost estimates for CIC authorizations with associated obligations, in the long term, VA officials told us that VA is exploring options for replacing IFCAP and FMS, which officials describe as antiquated systems based on outdated technology. The department has developed a rough timeline and estimate of budgetary needs to make these changes. Officials told us that the timeline and cost estimate would be refined once concrete plans for replacing IFCAP and FMS are developed. Officials told us that replacing IFCAP and FMS is challenging due to the scope of the project and the requirement that the replacement system interface with various VA legacy systems, such as the Veterans Health Information Systems and Technology Architecture, VA’s system containing veterans’ electronic health records. Moreover, as we have previously reported, VA has made previous attempts to update IFCAP and FMS that were unsuccessful. In October 2009, we reported that these failures could be attributed to the lack of a reliable implementation schedule and cost estimates, among other factors. To more accurately project future health care utilization of VA services given the implementation of the Veterans Choice Program, in November 2015 VA took steps to update its EHCPM projection to better inform future budget estimates. 
Officials told us that the updated EHCPM projection in November 2015 included available data from fiscal year 2015 to inform the department’s budget estimate for fiscal years 2017 and 2018. Without the updated projection, VA would have relied on the EHCPM projection from April 2015 using actual data from fiscal year 2014. The updated EHCPM projection using fiscal year 2015 data showed increased utilization of CIC services in that year. According to VA officials, this increase was an unexpected result of implementing the Veterans Choice Program. Specifically, because of administrative weaknesses affecting the Veterans Choice Program, veterans seeking services through this program were generally provided care through other VA CIC programs instead. Additionally, according to VA, analysis of fiscal year 2015 data showed that the implementation of the Veterans Choice Program resulted in veterans relying on VA services rather than on services provided by other health care benefit programs for a greater share of their health care needs. VA officials told us that they plan to continue relying on the EHCPM projection from April of each year using data from the most recently completed fiscal year and updating the EHCPM later in the year using more current data. As we have previously reported, while the EHCPM projection informs most of VA’s budget estimate, the amount of the estimate is determined by several factors, including VA policy decisions and the President’s priorities, and will not necessarily match the EHCPM projection in any given year. Historically, the final budget estimate for VA has consistently been lower than the amount projected by the EHCPM. For example, in December 2015, to develop the budget estimates for fiscal year 2017 and advance appropriations for fiscal year 2018, VA officials made a policy decision to use a previous EHCPM projection that does not take into account the increased utilization of CIC services by veterans in fiscal year 2015. 
VA officials told us that if demand for VA services exceeds the amount requested for VA’s Medical Services Account in the President’s budget request for fiscal year 2017, the difference can be made up by greater utilization of the Veterans Choice Program. VA officials also told us that VA will likely request an increase in funding for health care services in the President’s budget request for fiscal year 2018, which is expected to be submitted to Congress in February 2017. To help increase utilization of the Veterans Choice Program, VA issued policy memoranda to VAMCs in May and October 2015, requiring them to refer veterans to the Veterans Choice Program if timely care cannot be delivered by a VAMC, rather than authorizing care through VA’s other CIC programs. In addition, on July 31, 2015, the VA Budget and Choice Improvement Act eliminated the requirement that veterans must have been enrolled in the VA health care system by August 2014 in order to receive care through the program. While data from January 2016 indicate that utilization of care under the Veterans Choice Program has begun to increase, VA officials, including at the VISNs we interviewed, expressed concerns about whether existing contracts were sufficient to address veterans’ needs in a timely manner. For example, officials we interviewed from five of the six selected VISNs cited inadequate provider networks, delays in scheduling appointments, and delays in providers receiving payment for services delivered as factors limiting program utilization. To address these concerns, VA is granting VAMCs the authority to establish agreements directly with providers to deliver services through the Veterans Choice Program and to schedule appointments for veterans if VA’s contractors are unable to schedule them in a timely manner.
These efforts have the potential to increase Veterans Choice Program utilization beyond the levels VA estimated for fiscal year 2016, which, according to VA officials, may limit the funds available to the program in fiscal year 2017. Conversely, some of these officials told us that if VA does not succeed in increasing Veterans Choice Program utilization in fiscal years 2016 and 2017, veterans may have to seek care through other CIC programs, which may not have the funds available to meet the demand for services. In either case, according to VA officials, veterans may face delays in accessing VA health care services. In addition to the challenges associated with the Veterans Choice Program, VA, like other health care payers, faces uncertainty in estimating the utilization—and associated costs—of emerging health care treatments, such as costly drugs to treat chronic diseases affecting veterans. VA, like other federal agencies, prepares its budget estimate 18 months in advance of the start of the fiscal year for which funds are provided. At the time VA develops its budget estimate, it may not have enough information to estimate the likely utilization and costs of these health care services and treatments with reasonable accuracy. Moreover, even with improvements to its projection, VA, like other federal agencies, must make tradeoffs in formulating its budget estimate that require it to balance the expected demand for health care services against other competing priorities. Close scrutiny and careful monitoring in all these areas should assist VA in managing its available resources and better protect against a recurrence of budgetary circumstances similar to those that existed in fiscal year 2015. VA provided written comments on a draft of this report, which we have reprinted in appendix I.
While we are not making any recommendations in this report, in its comments, VA agreed with our findings and reiterated the uncertainty the department faces in estimating the cost of emerging health care treatments. VA also provided technical comments on the draft report, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of Veterans Affairs. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

In addition to the contact named above, Rashmi Agarwal, Assistant Director; Luke Baron; Krister Friday; Jacquelyn Hamilton; and Michael Zose made key contributions to this report.

VA’s Health Care Budget: Preliminary Observations on Efforts to Improve Tracking of Obligations and Projected Utilization. GAO-16-374T. Washington, D.C.: February 10, 2016.

Veterans’ Health Care Budget: Improvements Made, but Additional Actions Needed to Address Problems Related to Estimates Supporting President’s Request. GAO-13-715. Washington, D.C.: August 8, 2013.

Veterans’ Health Care: Improvements Needed to Ensure That Budget Estimates Are Reliable and That Spending for Facility Maintenance Is Consistent with Priorities. GAO-13-220. Washington, D.C.: February 22, 2013.

Veterans’ Health Care Budget: Better Labeling of Services and More Detailed Information Could Improve the Congressional Budget Justification. GAO-12-908. Washington, D.C.: September 18, 2012.

Veterans’ Health Care Budget: Transparency and Reliability of Some Estimates Supporting President’s Request Could Be Improved. GAO-12-689. Washington, D.C.: June 11, 2012.
VA Health Care: Estimates of Available Budget Resources Compared with Actual Amounts. GAO-12-383R. Washington, D.C.: March 30, 2012.

VA Health Care: Methodology for Estimating and Process for Tracking Savings Need Improvement. GAO-12-305. Washington, D.C.: February 27, 2012.

Veterans Affairs: Issues Related to Real Property Realignment and Future Health Care Costs. GAO-11-877T. Washington, D.C.: July 27, 2011.

Veterans’ Health Care Budget Estimate: Changes Were Made in Developing the President’s Budget Request for Fiscal Years 2012 and 2013. GAO-11-622. Washington, D.C.: June 14, 2011.

Veterans’ Health Care: VA Uses a Projection Model to Develop Most of Its Health Care Budget Estimate to Inform the President’s Budget Request. GAO-11-205. Washington, D.C.: January 31, 2011.

VA projected a funding gap of about $3 billion in its fiscal year 2015 medical services appropriation account, which funds VA health care services except for those authorized under the Veterans Choice Program. To close this gap, VA obtained temporary authority to use up to $3.3 billion from the $10 billion appropriated to the Veterans Choice Fund in August 2014. GAO was asked to examine VA's fiscal year 2015 projected funding gap and any changes VA has made to prevent potential funding gaps in future years. This report examines (1) the activities or programs that accounted for VA's fiscal year 2015 projected funding gap in its medical services appropriation account and (2) changes VA has made to prevent potential funding gaps in future years. GAO reviewed VA obligations data and related documents to determine what activities accounted for the projected funding gap in its fiscal year 2015 medical services appropriation account, as well as the factors that contributed to the projected funding gap. GAO interviewed VA officials to identify the steps taken to address the projected funding gap.
GAO also examined changes VA made to prevent future funding gaps and reviewed the implementation of these changes at the VAMCs within six VISNs, selected based on geographic diversity.

GAO found that two areas accounted for the Department of Veterans Affairs' (VA) fiscal year 2015 projected funding gap of $2.75 billion. Higher-than-expected obligations for VA's longstanding care in the community (CIC) programs—which allow veterans to obtain care from non-VA providers—accounted for $2.34 billion, or 85 percent, of VA's projected funding gap. VA officials expected that the Veterans Choice Program—a relatively new CIC program implemented in fiscal year 2015 that allows veterans to access care from non-VA providers under certain conditions—would absorb veterans' increased demand for more timely care after public disclosure of long wait times. However, administrative weaknesses slowed enrollment into this program, and use of the Veterans Choice Fund was far less than expected. Moreover, as utilization of CIC programs overall increased, VA's weaknesses in estimating costs and tracking obligations for CIC services resulted in VA facing a projected funding gap. Unanticipated obligations for hepatitis C drugs accounted for the remaining $408 million of VA's projected funding gap. VA did not anticipate in its budget the obligations for these costly, new drugs because the drugs did not gain approval from the Food and Drug Administration until fiscal year 2014—after VA had already developed its budget estimate for fiscal year 2015. To help prevent future funding gaps, VA has made efforts to better estimate costs and track obligations for CIC services and to better project future utilization of VA's health care services.
Specifically, VA implemented new policies directing VA medical centers (VAMC) and Veterans Integrated Service Networks (VISN) to better estimate costs for CIC authorizations—by using historical data and correcting for obvious errors—and to better track CIC obligations by comparing estimated costs with estimated obligations, correcting discrepancies, and certifying each month that these steps were completed. These policies are necessary, in part, because deficiencies in VA's financial systems make tracking obligations challenging. The VISNs and associated VAMCs GAO reviewed have implemented these policies. VA also allocated funds to each VAMC for CIC and hepatitis C drugs and began comparing VAMCs' obligations in these areas to the amount of funds allocated to help ensure that obligations do not exceed budgetary resources. VA updated the projection it uses to inform budget estimates 3 to 4 years in the future, adding fiscal year 2015 data reflecting increased CIC utilization. While VA has made these efforts to better manage its budget, uncertainties remain regarding utilization of VA's health care services. For example, utilization of the Veterans Choice Program in fiscal years 2016 and 2017 is uncertain because of continued enrollment delays affecting the program. Moreover, even with improvements to its projection, VA, like other federal agencies, must make tradeoffs in formulating its budget estimate that require it to balance the expected demand for health care services against other competing priorities. GAO is not making any recommendations. After reviewing a draft of this report, VA agreed with what GAO found.
As part of its mission to enforce the law and defend the interests of the United States, DOJ undertakes a number of law enforcement activities through its component agencies. The following six reports—which we issued in 2015 and 2016—contain key findings and recommendations in this area, and highlight potential areas for continued oversight. Collectively, the reports resulted in 28 recommendations to DOJ; the Drug Enforcement Administration (DEA); the Federal Bureau of Investigation (FBI); the National Institute of Justice (NIJ); the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF); and other DOJ components. As of March 2017, DOJ and its component agencies have implemented 5 of the 28 recommendations. DOJ and its components have also begun taking actions to address 11 of the remaining recommendations, which remain open. DOJ or its components have not taken actions for 8 of our recommendations and disagreed with the remaining 4 recommendations. DOJ and ATF have not always complied with federal law and ATF firearm-related policies. In June 2016, we reported that ATF did not always comply with the appropriations act restriction prohibiting consolidation or centralization of federal firearm licensee records, and did not consistently adhere to ATF policies. To carry out its enforcement responsibilities, ATF maintains 25 firearm-related databases, 16 of which contain firearms purchaser information from a federal firearm licensee. While ATF has the statutory authority and responsibility to obtain firearms transactions records from federal firearms licensees under certain circumstances, ATF is restricted from using appropriated funds to consolidate or centralize federal firearm licensee records. 
We examined four federal firearm licensee databases—selected based on factors such as the inclusion of retail purchaser information and original data—and found that two of the four did not always comply with the appropriations act restrictions and two of the four did not adhere to ATF policies. ATF addressed the violations of the appropriations act restrictions during the course of our review. To address identified policy deficiencies, we made three recommendations that ATF provide guidance to federal firearm licensees, align system capabilities with ATF policies, and align the timing of record deletion with ATF policy for records related to multiple firearm sales. ATF concurred with the recommendations. As of March 2017, ATF has implemented one recommendation and reported progress towards implementing the other two recommendations by improving practices and modifying data systems to better align with ATF policy. DOJ should study options for reducing overlap and fragmentation on missing persons databases. In June 2016, we reported that DOJ could facilitate more efficient sharing of information on missing persons and unidentified remains. The FBI’s National Crime Information Center database includes criminal justice agency information and is restricted to authorized users. DOJ’s NIJ oversees the National Missing and Unidentified Persons System, a database open to the public for access to published case information. We found that data contained in these systems were overlapping and fragmented, creating the risk of duplication. Because there is no mechanism to share information between the systems, users relying on only one system may miss information that could be instrumental in solving these types of cases. Although federal law precludes full integration, there may still be opportunities to share information between the systems, which could reduce overlap and fragmentation of data on missing and unidentified persons.
To allow for more efficient use of missing and unidentified persons information, we recommended that the FBI and NIJ evaluate options to share information between the two systems. DOJ disagreed with our recommendation, stating that it lacks the legal authority to do so. In March 2017, DOJ reiterated its position that any such sharing was prohibited by law. Specifically, DOJ stated that the FBI’s system can share information only with authorized users, that dissemination is limited to individuals performing law enforcement functions, and that additional efforts to examine other options would waste taxpayer funds. We continue to believe that our recommendation is valid and that DOJ should further study options for sharing information within the confines of its legal framework. For example, our work identified a variety of solutions to address the fragmentation and overlap between the two systems, such as developing a notification alert for the FBI’s system when related case data are also present in the other system. DOJ and the FBI have not addressed privacy and accuracy concerns related to the FBI’s use of face recognition technology. Whenever agencies develop or change technologies that collect personal information, federal law requires them to publish certain privacy-related documents. In May 2016, we reported that the FBI did not publish updated privacy impact assessments (PIA) and a System of Records Notice (SORN) for a face recognition service that allows law enforcement agencies to search a database of over 30 million photos to support criminal investigations. Users of this service include the FBI and selected state and local law enforcement agencies, which can submit search requests to help identify an unknown person using, for example, a photo from a surveillance camera. DOJ issued an initial PIA in 2008, before the FBI and state and local law enforcement agencies began using this service on a pilot basis.
However, the FBI did not update the PIA until September 2015, during the course of our review and after the system underwent significant changes. Further, although the FBI and state and local law enforcement agencies had been using the system since 2011, DOJ did not publish a SORN until May 2016, after completion of our review. Similarly, DOJ did not publish a PIA for the FBI’s internal use of additional face recognition technologies until May 2015, during the course of our review and almost 4 years after the FBI began its new use of face recognition searches. In addition, we found that the FBI had not audited the actual use of face recognition technology and, as a result, could not demonstrate compliance with applicable privacy protection requirements. We also reported that the FBI had conducted limited testing to evaluate the detection rate of the face recognition searches, but had not (1) assessed how often errors occurred or (2) taken steps to determine whether systems used by external partners are sufficiently accurate for the FBI’s use. By taking steps to evaluate the detection rates of the various systems, the FBI could better ensure that the data received are sufficiently accurate and do not include photos of innocent people as investigative leads. We made three recommendations to DOJ and the FBI to determine why privacy-related documents were not published as required and to audit the use of the face recognition technology to better ensure face image searches are conducted in accordance with policy requirements. We made three additional recommendations to the FBI to verify that the systems are sufficiently accurate and are meeting users’ needs. DOJ and the FBI partially agreed with two recommendations and disagreed with one recommendation concerning privacy. The FBI agreed with one recommendation and disagreed with two recommendations concerning accuracy.
In response, we clarified one recommendation regarding accuracy testing and updated another regarding the SORN development process, based on information DOJ provided after reviewing our draft report. As of March 2017, DOJ has begun taking actions to address three of our six recommendations, such as initiating audits to oversee the FBI’s use of its face recognition technology. DEA should better administer the controlled substance quota setting process. In February 2015, we found that DEA had not effectively administered the quota setting process that limits the amount of certain controlled substances available for use in the United States. Each year, manufacturers apply to DEA for quotas needed to make drugs. We found that DEA did not respond within the time frames required by its regulations for any year from 2001 through 2014, which, according to some manufacturers, caused or exacerbated shortages of drugs. We recommended that DEA take seven actions to improve its management of the quota setting process and address drug shortages. DEA concurred, and as of March 2017 has implemented four of the seven recommendations, one related to establishing an agreement to facilitate information sharing with the Food and Drug Administration regarding drug shortages and the three others related to strengthening internal controls in the quota setting process. DEA has also taken some actions towards addressing the remaining three recommendations—including working with the Food and Drug Administration to establish a work plan to specifically outline the information the agencies will share and the time frames for doing so—but needs to take additional actions to fully implement them. DEA needs to provide additional guidance to entities that handle controlled substances. 
In June 2015, based on four nationally representative surveys of DEA registrants—distributors of controlled substances, individual pharmacies, chain pharmacy corporate offices, and practitioners—we reported that many registrants were not aware of various DEA resources, such as manuals for pharmacists and practitioners. In addition, some distributors, individual pharmacies, and chain pharmacy corporate offices wanted improved guidance from, and additional communication with, DEA about their roles and responsibilities under the Controlled Substances Act. We recommended that DEA take three actions to increase registrants’ awareness of DEA resources and improve the information DEA provides to registrants. DEA concurred and, as of March 2017, has taken some actions towards addressing our three recommendations, such as conducting and participating in conferences and other industry outreach events. However, DEA needs to take additional actions to fully implement the recommendations, including establishing a means of regular communication with registrants, such as through listservs, which would reach a larger proportion of registrants than conferences and other events. DOJ should improve handling of FBI whistleblower retaliation complaints. In January 2015, we reported that unlike employees in other executive branch agencies, FBI employees did not have a process to seek corrective action if they experienced retaliation in certain circumstances. Specifically, FBI employees could not seek corrective action if they experienced retaliation based on a disclosure of wrongdoing to their supervisors or others in their chain of command who were not designated DOJ or FBI officials. We suggested that Congress consider whether FBI employees should have a means to obtain corrective action for retaliation for disclosures of wrongdoing made to supervisors and others in the employee’s chains of command. 
In response to our report, in December 2016, Congress passed and the President signed the FBI Whistleblower Protection Enhancement Act of 2016, which, among other things, provides a means for FBI employees to obtain corrective action in these cases and brings FBI whistleblower protection in line with the protection in place for employees of other executive branch agencies for reporting wrongdoing to their chain of command. This change will help ensure that whistleblowers have access to recourse, that retaliatory action does not go unpunished, and that other potential whistleblowers are encouraged to come forward. We also reported that (1) DOJ and FBI guidance for making a protected disclosure was not always clear; (2) DOJ did not provide whistleblower retaliation complainants with estimates of when to expect DOJ decisions throughout the complaint process; (3) DOJ offices responsible for investigating complaints have not consistently complied with certain regulatory requirements, such as obtaining complainants’ approvals for extensions of time; and (4) although DOJ officials have ongoing and planned efforts to reduce the duration of retaliation complaints, they have limited plans to assess the impacts of these actions. To address these deficiencies, we made eight recommendations that DOJ clarify guidance, provide complainants with estimated complaint decision time frames, develop an oversight mechanism to monitor regulatory compliance, and assess the impact of efforts to reduce the duration of FBI whistleblower complaints. DOJ concurred with these recommendations, but as of March 2017 has not provided documentation of actions taken to address them. As part of their mission to enforce the law and control crime, DOJ and its components—including the Bureau of Prisons (BOP) and the U.S. Marshals Service (USMS)—are responsible for the custody and care of federal prisoners and inmates.
To support these responsibilities, the President’s budget requested $8.8 billion for fiscal year 2017. Our recent reports on DOJ’s programs for incarceration and offender management highlight areas for oversight, including better estimating costs and measuring outcomes. Since August 2014, we have made 17 recommendations to DOJ, BOP, and USMS to improve the custody and care of federal prisoners and inmates. As of March 2017, DOJ or its component agencies have implemented 7 of the 17 recommendations, have begun taking actions on 8 recommendations that remain open, and have not taken actions on the remaining 2 recommendations. DOJ could better assess federal incarceration initiatives. In June 2015, we reported that DOJ could better measure the efficacy of three key new initiatives designed to address federal incarceration challenges, such as overcrowding and rising costs. We found that the Smart on Crime Initiative indicators were well-linked to overall goals, which include prioritizing prosecution of the most serious offenses, but many lacked clarity and context. The Clemency Initiative, which encourages certain inmates to petition to have their sentences reduced by the President, does not track how long it takes for the average petition to clear each step in the review process. In addition, BOP created the Reentry Services Division in 2014 to improve inmate reentry into society, but we found that it lacked a plan to prioritize evaluations among all 18 of the programs it lists in its national reentry directory. To address these deficiencies, we made three recommendations to improve measurement of the initiatives. DOJ concurred with two of the recommendations and partially concurred with the third. In May 2016, BOP finalized an updated evaluation plan for the Reentry Services Division that was consistent with our recommendation, and we consider that recommendation to be implemented.
As of March 2017, DOJ has not provided documentation of actions on the remaining two recommendations. DOJ and BOP could better measure the outcomes of alternatives to incarceration. In June 2016, we reported that in part to help reduce the size and costs of the federal prison population, DOJ has used a variety of alternatives to incarceration before sentencing, but it does not reliably track the use of some of these alternatives. For instance, we reported that DOJ has used two types of pretrial diversion as alternatives to incarceration—one at the discretion of the U.S. Attorney’s Office and the other involving additional stakeholders, such as judges and defense counsel. However, DOJ data on the use of pretrial diversion are unreliable because DOJ’s database does not distinguish between these different types of pretrial diversions and DOJ does not have guidance in place to ensure that its attorneys consistently enter the use of pretrial diversion into the database. In addition, over the past 7 years, BOP has increased its use of incarceration alternatives, such as the placement of inmates in residential reentry centers (also known as halfway houses) and home confinement. However, we found that while BOP has tracked data on the cost implications of using these alternatives, it does not track the information needed to help measure the outcomes of incarceration alternatives. Similarly, we found that DOJ has not measured the outcomes or identified the cost implications of pretrial diversion programs. To address these deficiencies, we made six recommendations that DOJ enhance its tracking of data on the use of pretrial diversions and that DOJ and BOP obtain outcome data and develop measures for alternatives used. 
DOJ concurred and, as of March 2017, has fully implemented the two recommendations on tracking data by revising its system to separately track the different types of pretrial diversion programs and providing guidance to its attorneys on the appropriate way to enter data. DOJ and BOP have partially addressed the remaining four recommendations. BOP faces challenges in activating new prisons. In August 2014, we found that BOP was behind schedule in activating six new prison institutions designed to handle the projected growth of the federal inmate population, and that BOP did not have a policy or best practices to guide the activations or activation schedules. Activation of the prisons—the process by which BOP prepares institutions for inmates—was delayed, in part, because of schedule challenges, such as staffing, posed by locations of the new institutions. We also found that BOP did not effectively communicate to Congress on how the locations of the new institutions may affect activation schedules. To address these deficiencies, we recommended that (1) DOJ use its annual budget justification to communicate to Congress factors that might delay prison activation; (2) BOP analyze institution-level staffing data and develop effective, tailored strategies to mitigate staffing challenges; (3) BOP develop and implement a comprehensive activation policy; and (4) BOP develop and implement an activation schedule that reflects best practices. DOJ and BOP concurred, and as of March 2017 have implemented two of the four recommendations by enhancing recruitment approaches to address staffing challenges and developing a policy to guide future activations. Additional actions are needed to address the remaining two recommendations. U.S. Marshals Service could better estimate cost savings and monitor ways to achieve efficiencies. In May 2016, we found that the U.S. Marshals Service’s largest prisoner costs were housing payments to state, local, and private prisons. 
For example, in fiscal year 2015, USMS spent approximately $1.2 billion on these costs. USMS has implemented actions that it reports have reduced prisoner-related costs from fiscal years 2010 through 2015, including automating detention management services, developing cost-saving housing options, investing in alternatives to pre-trial detention to reduce housing and medical expenditures, and improving medical claim management. For actions with identified savings over this time period, however, we found that about $654 million of USMS’s estimated $858 million in total savings was not reliable because the estimates were not sufficiently comprehensive, accurate, consistent, or well-documented. For example, USMS identified $375 million in savings from the alternatives to pre-trial detention program for fiscal years 2010 through 2015, but did not verify the data or methodology used to develop the estimate or provide documentation supporting its reported savings for fiscal years 2012 onward. We also found that USMS has designed systems to identify opportunities for cost efficiencies, including savings. For example, the agency requires districts to conduct annual self-assessments of their procedures to identify any deficiencies that could lead to cost savings. However, USMS cannot aggregate and analyze the results of the assessments across districts. To address these deficiencies, we recommended that USMS (1) develop reliable methods for estimating cost savings and validating reported savings achieved, and (2) establish a mechanism to aggregate and analyze the results of annual district self-assessments. USMS concurred, and as of March 2017 has provided us with information on how it plans to move forward in addressing the recommendations, but needs to take additional actions to fully implement them. DOJ has improved outreach to states to notify tribes about registered sex offenders who plan to live, work, or attend school on tribal land.
In November 2014, we found that most eligible tribes have retained their implementation authority, and have either substantially implemented or were in the process of implementing the Sex Offender Registration and Notification Act (SORNA). In our survey of tribes that retained their authority to implement the act, the four most frequently reported implementation challenges were the inability to submit convicted sex offender information to federal databases, lack of notification from state prisons upon the release of sex offenders, lack of staff, and inability to cover the costs of SORNA implementation. SORNA established the Office of Sex Offender Sentencing, Monitoring, Apprehending, Registering, and Tracking (SMART Office) within DOJ to administer and assist jurisdictions with implementing the law. However, we found that some states had not notified tribes when sex offenders who will be or have been released from state prison register with the state and indicate that they intend to live, work, or attend school on tribal land, as SORNA requires. We found that the SMART Office has taken some actions, but could do more to encourage states to provide notification to tribes. To address this deficiency, we made two recommendations to DOJ related to the SMART Office encouraging states to notify tribes about offenders who plan to live, work, or attend school on tribal land upon release from prison. DOJ concurred with these recommendations and has fully implemented them. DOJ supports a range of activities—including policing and victims’ assistance—through grants provided to federal, state, local, and tribal agencies, as well as national, community-based, and non-profit organizations. Congress appropriated $2.4 billion for DOJ discretionary grant programs in fiscal year 2016. The Office of Justice Programs (OJP) is the largest of DOJ’s three granting components and operated with an enacted discretionary budget of approximately $1.8 billion in fiscal year 2016. 
The four reports discussed below highlight DOJ’s overall grant administration practices, management of specific programs, and efforts to reduce duplication in grant programs across the federal government. The four reports included 17 recommendations to DOJ. The department concurred with these recommendations, and as of March 2017 had taken actions to fully implement 15 of the 17 recommendations. DOJ has also begun taking actions on the remaining 2 recommendations, which are still open. DOJ has addressed recommendations to reduce the risk of grant program overlap and unnecessary duplication. In July 2012, we found that DOJ had not assessed its grant programs department-wide to identify overlap, which occurs when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. We reported that DOJ published 253 fiscal year 2010 grant solicitations to support crime prevention, law enforcement, and crime victim services. We also found that DOJ did not routinely coordinate grant awards to avoid unnecessary duplication, which occurs when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries without being knowledgeable about each other’s efforts. Further, we reported that DOJ could take steps to better assess the results of all the grant programs it administers. As a result, we made eight recommendations to DOJ. The department concurred with these recommendations and has fully implemented them. DOJ has addressed recommendations for improving management of the bulletproof vest partnership. In February 2012, we found that DOJ’s Bureau of Justice Assistance—within the Office of Justice Programs—could enhance grant management controls and better ensure consistency in management of the bulletproof vest partnership grant program. 
For example, we found that DOJ could better manage the grant program by improving grantee accountability in the use of funds for body armor purchases, reducing the risk of grantee noncompliance with program requirements, and ensuring consistency across its efforts to promote law enforcement officer safety. We made five recommendations to DOJ’s Bureau of Justice Assistance. The department concurred with these recommendations and has fully implemented them. DOJ could better manage the Victims of Child Abuse Act grant program. In April 2015, we found that OJP’s Office of Juvenile Justice and Delinquency Prevention (OJJDP) had several administrative review and approval processes in place that contributed to delays in grantees’ ability to begin spending their award funds. For example, for the 28 Victims of Child Abuse Act (VOCA) program grants awarded from fiscal years 2010 through 2013, grantees had expended less than 20 percent, on average, of each grant they received during the original 12-month project period. In particular, we found that OJJDP’s processes for reviewing grantees’ budgets and conference planning requests were contributing to delays in grantees’ ability to begin spending their funds. Further, we found that OJJDP’s guidance on grant extensions was unclear and inconsistently enforced. For example, OJJDP approved 72 of 73 extension requests from fiscal years 2010 through 2013 without the required narrative justification. We also found that OJJDP did not have complete data to assess VOCA grantees’ performance against the measures it had established because the tools it used to collect this information did not align with the measures themselves. As a result, we made four recommendations to OJP and the office concurred. As of March 2017, OJP has implemented two recommendations by establishing and enforcing clear guidance related to grant extensions and enhancing its performance management capacity. 
DOJ has taken partial action to address the remaining two recommendations. DOJ and other federal agencies have taken steps to avoid duplication among human trafficking grants. In June 2016, we identified 42 grant programs with awards made in 2014 and 2015 that may be used to combat human trafficking or assist victims of human trafficking, 15 of which are intended solely for these purposes. Although some overlap exists among these human trafficking grant programs, federal agencies have established processes to help prevent unnecessary duplication. For instance, in response to recommendations in a prior GAO report, DOJ requires grant applicants to identify any federal grants they are currently operating under as well as federal grants for which they have applied. In addition, agencies that participate in the grantmaking committee of the Senior Policy Operating Group—an entity through which federal agencies coordinate their efforts to combat human trafficking—are to share grant solicitations and information on proposed grant awards, allowing other agencies to comment on proposed awards and determine whether they plan to award funding to the same organization. DOJ has the ability to fund programs using money it collects through alternative sources, such as fines, fees, and penalties, in addition to the budget authority Congress provides DOJ through annual appropriations. For example, the Crime Victims Fund, which is financed by collections of fines, penalties, and bond forfeitures from defendants convicted of federal crimes, obligated almost $2.4 billion for a variety of grants and programs to assist victims of crime in fiscal year 2015. The following three reports highlight DOJ’s collection, use, and management of these funds. One of the three reports contains three recommendations, which have been partially implemented. DOJ could better manage alternative sources of funding. 
In February 2015, we reported that DOJ could better manage its alternative sources of funding—collections by DOJ from sources such as fines, fees, and penalties—which, in fiscal year 2013, made up about 15 percent of DOJ’s total budgetary resources. Specifically, DOJ collected about $4.3 billion from seven major alternative sources of funding—including the Assets Forfeiture Fund, the Crime Victims Fund, and fees for non-criminal justice fingerprint checks, among others. We found that two of these funds could be better managed. For example, DOJ has the authority to deposit up to 3 percent of amounts collected from DOJ’s civil debt collection litigation activities, such as Medicare fraud cases and referred student loan collections, into the Three Percent Fund. Collections are used to defray DOJ’s costs for conducting these activities. However, the department had not conducted analyses of the fund that include elements such as projected collections or the impact of previous obligation rates on unobligated balances. In addition, the FBI’s Criminal Justice Information Services collects fees for providing non-criminal justice fingerprint-based background checks. We found that the FBI was not transparent in how it sets its fees, and did not evaluate the appropriate range of carryover amounts for a portion of those fees, even though unobligated balances had been growing. As a result, we recommended that (1) DOJ develop a policy to analyze unobligated balances and develop collection estimates for the Three Percent Fund; (2) the FBI publish a breakdown of how it assesses its fingerprint check fees to better communicate the cost of the service to users; and (3) the FBI develop a policy to analyze and determine an appropriate range for unobligated balances from a portion of those fees. DOJ partially concurred with the first recommendation and generally concurred with the other two recommendations. 
As of March 2017, DOJ is working to improve how it analyzes unobligated funds needed for future fiscal years for the Three Percent Fund; however, it provided various reasons why it does not calculate revenue estimates. Our report recognized DOJ’s concerns, and we continue to believe that DOJ could develop an estimated range of potential collections based on historical trends and current collection activities. The FBI has partially implemented our recommendations to be more transparent with its fees and improve how it analyzes unobligated balances from a portion of the fingerprint check fees. DOJ distributes fines, penalties, and forfeitures from financial institutions to support program expenses and victims of related crimes. In March 2016, we reported that since 2009, the federal government had assessed financial institutions about $12 billion in fines, penalties, and forfeitures for violations of the Bank Secrecy Act’s anti-money-laundering regulations, the Foreign Corrupt Practices Act of 1977, and U.S. sanctions program requirements. Of this amount, about $3.2 billion was deposited into DOJ’s Assets Forfeiture Fund (AFF). Funds from the AFF are primarily used for program expenses, payments to victims of the related crimes, and payments to law enforcement agencies that participated in the efforts resulting in forfeitures. For example, as of December 2015, approximately $2 billion of forfeited funds deposited in the AFF was planned for distribution to victims of fraud. DOJ retained a portion of selected mortgage-related financial institution settlement payments for its Three Percent Fund. In November 2016, we reported that federal agencies have collected billions of dollars in settlement payments and penalties from financial institutions for violations alleged to have been committed during the mortgage origination process, servicing of mortgages, and in the packaging and sale of residential mortgage-backed securities. 
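The Three Percent Fund retention described above reduces to simple arithmetic. The sketch below is a minimal illustration of that split, assuming a hypothetical collection amount and the full 3 percent retention rate; it is not drawn from any specific case or actual DOJ accounting system.

```python
# Illustrative sketch of the Three Percent Fund split: DOJ may retain up to
# 3 percent of amounts collected through its civil debt collection litigation
# activities, with the remainder distributed elsewhere (e.g., to a client
# agency or the Treasury General Fund). Amounts here are hypothetical.

THREE_PERCENT_RATE = 0.03

def split_collection(total_collected, rate=THREE_PERCENT_RATE):
    """Split a collected amount into the retained portion and the remainder."""
    retained = total_collected * rate
    remainder = total_collected - retained
    return retained, remainder

# Hypothetical $500 million collection:
retained, remainder = split_collection(500_000_000)
# retained: $15 million; remainder: $485 million
```

In the real cases discussed in this statement, the remainder may itself be divided among multiple recipients; the sketch shows only the retention step.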
Several federal agencies have responsibility for regulating financial institutions in relation to these activities, and these agencies may engage DOJ to pursue investigations of financial institutions and individuals for civil or criminal violations of various laws and regulations. We reviewed a sample of nine cases where federal agencies, in some instances including DOJ, either reached settlements with or assessed penalties against financial institutions in connection with alleged mortgage-related violations. Financial institutions in these nine cases were assessed a total of about $25 billion, generally in penalties, settlement amounts, and consumer relief. In the cases involving DOJ, the department generally retained 3 percent of the settlement and penalty amounts paid and deposited this amount in its Three Percent Fund. For example, in 2016, one financial institution agreed to pay $1.2 billion to settle DOJ’s claims brought on behalf of the Federal Housing Administration. DOJ collected the entire $1.2 billion settlement amount from this case, retained $36 million (3 percent of the total collection) for deposit in its Three Percent Fund, distributed $622.7 million to the Federal Housing Administration, and deposited the remaining $541.3 million in an account in the Treasury General Fund. Chairman Goodlatte, Ranking Member Conyers, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. For further information about this statement, please contact Diana Maurer at (202) 512-8777 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Individuals who made key contributions to this statement include Eric Erdman (Assistant Director), Claudia Becker, Joy Booth, Willie Commons III, Tonnye’ Connor-White, Karen Doran, Chris Hatscher, Rebecca Hendrickson, Paul Hobart, Valerie Kasindi, Beth Kowalewski, Susanna Kuebler, Dawn Locke, Kristy Love, Tarek Mahmassani, Jeremy Manion, Mara McMillen, Adrian Pavia, Geraldine Redican-Bigott, Christina Ritchie, Michelle Serfass, Jack Sheehan, Janet Temko-Blinder, and Jill Verret. Key contributors for the previous work on which this testimony is based are listed in each product.

Financial Institutions: Penalty and Settlement Payments for Mortgage-Related Violations in Selected Cases. GAO-17-11R. Washington, D.C.: Nov. 10, 2016.
Firearms Data: ATF Did Not Always Comply with the Appropriations Act Restriction and Should Better Adhere to Its Policies. GAO-16-552. Washington, D.C.: June 30, 2016.
Human Trafficking: Agencies Have Taken Steps to Assess Prevalence, Address Victim Issues, and Avoid Grant Duplication. GAO-16-555. Washington, D.C.: June 28, 2016.
Federal Prison System: Justice Has Used Alternatives to Incarceration, But Could Better Measure Program Outcomes. GAO-16-516. Washington, D.C.: June 23, 2016.
Missing Persons and Unidentified Remains: Opportunities May Exist to Share Information More Efficiently. GAO-16-515. Washington, D.C.: June 7, 2016.
Prisoner Operations: United States Marshals Service Could Better Estimate Cost Savings and Monitor Efforts to Increase Efficiencies. GAO-16-472. Washington, D.C.: May 23, 2016.
Face Recognition Technology: FBI Should Better Ensure Privacy and Accuracy. GAO-16-267. Washington, D.C.: May 16, 2016.
Financial Institutions: Fines, Penalties, and Forfeitures for Violations of Financial Crimes and Sanctions Requirements. GAO-16-297. Washington, D.C.: Mar. 22, 2016.
Prescription Drugs: More DEA Information about Registrants’ Controlled Substances Roles Could Improve Their Understanding and Help Ensure Access. GAO-15-471. Washington, D.C.: June 25, 2015.
Federal Prison System: Justice Could Better Measure Progress Addressing Incarceration Challenges. GAO-15-454. Washington, D.C.: June 19, 2015.
Victims of Child Abuse Act: Further Actions Needed to Ensure Timely Use of Grant Funds and Assess Grantee Performance. GAO-15-351. Washington, D.C.: Apr. 29, 2015.
Department of Justice: Alternative Sources of Funding Are a Key Source of Budgetary Resources and Could Be Better Managed. GAO-15-48. Washington, D.C.: Feb. 19, 2015.
Drug Shortages: Better Management of the Quota Process for Controlled Substances Needed; Coordination between DEA and FDA Should Be Improved. GAO-15-202. Washington, D.C.: Feb. 2, 2015.
Whistleblower Protection: Additional Actions Needed to Improve DOJ’s Handling of FBI Retaliation Complaints. GAO-15-112. Washington, D.C.: Jan. 23, 2015.
Sex Offender Registration and Notification Act: Additional Outreach and Notification of Tribes about Offenders Who Are Released from Prison Needed. GAO-15-23. Washington, D.C.: Nov. 18, 2014.
Bureau of Prisons: Management of New Prison Activations Can Be Improved. GAO-14-709. Washington, D.C.: Aug. 22, 2014.
Justice Grant Programs: DOJ Should Do More to Reduce the Risk of Unnecessary Duplication and Enhance Program Assessment. GAO-12-517. Washington, D.C.: July 12, 2012.
Law Enforcement Body Armor: DOJ Could Enhance Grant Management Controls and Better Ensure Consistency in Grant Program Requirements. GAO-12-353. Washington, D.C.: Feb. 15, 2012.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
In fiscal year 2016, DOJ's $29 billion budget funded a broad array of national security, law enforcement, and criminal justice system activities. GAO has examined a number of key programs where DOJ has sole responsibility or works with other departments, and recommended actions to improve program efficiency and resource management. This statement summarizes findings and recommendations from recent GAO reports that address DOJ's (1) law enforcement activities, (2) custody and care of federal prisoners and inmates, (3) grant management and administration, and (4) use of alternative sources of funding. This statement is based on prior GAO products issued from February 2012 to November 2016, along with selected updates obtained as of March 2017. For the selected updates on DOJ's progress in implementing GAO recommendations, GAO analyzed information provided by DOJ officials on actions taken and planned. DOJ has not fully addressed most GAO recommendations related to its law enforcement activities. The Department of Justice (DOJ) undertakes a number of activities to enforce the law and defend the interests of the United States. Key findings and recommendations from six recent GAO reports include, among other things, that DOJ should: better adhere to policies on collecting firearms data, assess opportunities to more efficiently share information on missing persons, better ensure the privacy and accuracy of face recognition technology, provide more information to entities that handle controlled substances, and improve the handling of whistleblower complaints. Collectively, these reports resulted in 28 recommendations. As of March 2017, DOJ has fully implemented 5 of these recommendations, begun actions to address 11, has not taken actions for 8, and disagreed with 4 recommendations. DOJ has not fully addressed most GAO recommendations related to the custody and care of federal prisoners and inmates. 
DOJ is responsible for the custody and care of federal prisoners and inmates, for which the President's Budget requested $8.8 billion for fiscal year 2017. GAO's recent reports highlight areas for continued improvements in DOJ incarceration and offender management, including better assessing key initiatives to address overcrowding and other federal incarceration challenges, better measuring the outcomes of alternatives to incarceration, improving the management of new prison activations, better estimating cost savings for prisoner operations, and improving notification to tribes about registered sex offenders upon release. Since August 2014, GAO has made 17 recommendations to DOJ in five reports related to these issues, and DOJ generally concurred with them. As of March 2017, DOJ has fully implemented 7 of the recommendations, partially implemented 8, and has not taken actions for 2 recommendations. DOJ has implemented most GAO recommendations to improve grant administration and management. DOJ supports a range of activities—including policing and victims' assistance—through grants provided to federal, state, local, and tribal agencies, as well as national, community-based, and non-profit organizations. Congress appropriated $2.4 billion for DOJ grant programs in fiscal year 2016. Four recent GAO reports highlight DOJ's overall grant administration practices, management of specific programs, and efforts to reduce overlap and duplication amongst its grant programs. The four reports include 17 recommendations to DOJ, and the department generally concurred with all of them. As of March 2017, DOJ has fully implemented 15 of the 17 recommendations and partially implemented the remaining two. DOJ has partially implemented GAO recommendations designed to improve management of funds collected through alternative sources. 
DOJ has the ability to fund programs using money it collects through alternative sources, such as fines, fees, and penalties, in addition to its annual appropriations. For example, in 2015, GAO reported that DOJ collected $4.3 billion from seven alternative sources of funding in fiscal year 2013. This statement highlights three reports that address DOJ's collection, use, and management of these funds. One of the three reports includes three recommendations, which DOJ has partially implemented. GAO has made several recommendations to DOJ in prior reports to help improve program efficiency and resource management. DOJ generally concurred with most of these recommendations and has implemented or begun taking action to address them.
Since the enactment of key financial management reforms in the 1990s, the federal government has made significant progress in improving financial management activities and practices. As shown in appendix I, 20 of 24 Chief Financial Officers (CFO) Act agencies were able to attain unqualified audit opinions on their fiscal year 2009 financial statements. In contrast, only 6 CFO Act agencies received unqualified audit opinions for fiscal year 1996. Also, accounting and financial reporting standards have continued to evolve to provide greater transparency and accountability over the federal government’s operations, financial condition, and fiscal outlook. Further, we were able to render unqualified opinions on the 2009, 2008, and 2007 Statements of Social Insurance. Given the importance of social insurance programs like Medicare and Social Security to the federal government’s long-term fiscal outlook, the Statement of Social Insurance is critical to understanding the federal government’s financial condition and fiscal sustainability. Although this progress is commendable, the federal government did not maintain adequate systems or have sufficient, reliable evidence to support certain significant information reported in the U.S. government’s accrual-based consolidated financial statements. Underlying material weaknesses in internal control, which generally have existed for years, contributed to our disclaimer of opinion on the U.S. government’s accrual-based consolidated financial statements for the fiscal years ended 2009 and 2008. 
Those material weaknesses relate to the federal government’s inability to

- satisfactorily determine that property, plant, and equipment and inventories and related property, primarily held by the Department of Defense (DOD), were properly reported in the accrual-based consolidated financial statements;
- reasonably estimate or adequately support amounts reported for certain liabilities, such as environmental and disposal liabilities, or determine whether commitments and contingencies were complete and properly reported;
- support significant portions of the total net cost of operations, most notably related to DOD, and adequately reconcile disbursement activity at certain federal entities;
- adequately account for and reconcile intragovernmental activity and balances between federal entities;
- ensure that the federal government’s accrual-based consolidated financial statements were (1) consistent with the underlying audited entities’ financial statements, (2) properly balanced, and (3) in conformity with U.S. generally accepted accounting principles (GAAP); and
- identify and either resolve or explain material differences between certain components of the budget deficit reported in Treasury’s records, which are used to prepare the Reconciliation of Net Operating Cost and Unified Budget Deficit and Statement of Changes in Cash Balance from Unified Budget and Other Activities, and related amounts reported in federal entities’ financial statements and underlying financial information and records.

In addition to the material weaknesses that contributed to our disclaimer of opinion on the accrual-based consolidated financial statements, we found three other material weaknesses in internal control. 
These other material weaknesses were the federal government’s inability to

- determine the full extent to which improper payments occur and reasonably assure that appropriate actions are taken to reduce improper payments,
- identify and resolve information security control deficiencies and manage information security risks on an ongoing basis, and
- effectively manage its tax collection activities.

The material weaknesses discussed in our audit report continued to (1) hamper the federal government’s ability to reliably report a significant portion of its assets, liabilities, costs, and other related information; (2) affect the federal government’s ability to reliably measure the full cost as well as the financial and nonfinancial performance of certain programs and activities; (3) impair the federal government’s ability to adequately safeguard significant assets and properly record various transactions; and (4) hinder the federal government from having reliable financial information to operate in an efficient and effective manner. Also, many of the CFO Act agencies continue to struggle with financial systems that are not integrated and do not meet the needs of management for reliable, useful, and timely financial information. Often, agencies expend major time, effort, and resources to develop information that their systems should be able to provide on a daily or recurring basis. Three major impediments continued to prevent us from rendering an opinion on the U.S. government’s accrual-based consolidated financial statements: (1) serious financial management problems at DOD that have prevented DOD’s financial statements from being auditable, (2) the federal government’s inability to adequately account for and reconcile intragovernmental activity and balances between federal entities, and (3) the federal government’s ineffective process for preparing the consolidated financial statements. 
Additional impediments, such as certain entities’ fiscal year 2009 financial statements that, as of the date of our audit report, received disclaimers of opinion or were not audited, also contributed to our inability to render an opinion on the U.S. government’s accrual-based consolidated financial statements. Extensive efforts by DOD and other entity officials and cooperative efforts between entity chief financial officers, Treasury officials, and Office of Management and Budget (OMB) officials will be needed to resolve these obstacles to achieving an opinion on the U.S. government’s accrual-based consolidated financial statements. Given DOD’s significant size and complexity, the resolution of its serious financial management problems is an essential element in further improving financial management governmentwide and, ultimately, in achieving an opinion on the U.S. government’s consolidated financial statements. Reported weaknesses in DOD’s financial management and other business operations adversely affect the reliability of DOD’s financial data; the economy, efficiency, and effectiveness of its operations; and its ability to produce auditable financial statements. DOD continues to dominate GAO’s list of high-risk programs designated as vulnerable to waste, fraud, abuse, and mismanagement. Eight of the high-risk areas are specific to DOD and include DOD’s overall approach to business transformation and its financial and contract management. To effectively transform its business operations, DOD management must have reliable financial information. Without it, DOD is severely hampered in its ability to make sound budgetary and programmatic decisions, monitor trends, make adjustments to improve performance, reduce operating costs, or maximize the use of resources. DOD continues to take steps toward addressing the department’s long-standing financial management weaknesses. 
The current DOD Comptroller’s focus on improving the department’s budgetary information and asset accountability will result in a change in emphasis within the Financial Improvement and Audit Readiness (FIAR) Plan, DOD’s plan for improving its financial management. The emphasis is now on two areas: first, strengthening information and processes supporting the department’s Statements of Budgetary Resources; and second, improving the accuracy and reliability of management information pertaining to the department’s mission-critical assets, including weapons systems, real property, and general equipment, and validating improvement through existence and completeness testing. Budgetary and asset-accountability information is widely used by DOD managers at all levels. As such, its reliability is vital to daily operations and management. In this regard, the Marine Corps recently began an audit of its fiscal year 2010 Statement of Budgetary Resources. DOD intends to share with the other services the approaches and lessons learned from the Marine Corps audit. A concentrated focus such as the DOD Comptroller’s emphasis on budget and asset information may increase the department’s ability to show incremental progress toward achieving auditability in the short term. In response to GAO’s recommendations, the department has also put in place a process to improve standardization and comparability of financial management improvement efforts among the military services. The success of this process will depend on top management support and oversight, as well as high-quality planning and effective implementation at all levels. (See GAO, Financial Management: Achieving Financial Statement Auditability in the Department of Defense, GAO-09-373 (Washington, D.C.: May 6, 2009).)
DOD is also required to:

- develop standardized guidance for financial improvement plans by components of the department;
- establish a baseline of financial management capabilities and weaknesses at the component level;
- provide results-oriented metrics for measuring and reporting quantifiable results toward addressing financial management weaknesses;
- define the oversight roles of the Chief Management Officer (CMO) of the department, the CMOs of the military services, and other appropriate elements of the department to ensure that the FIAR requirements are carried out;
- assign to appropriate officials and organizations at the component level accountability for carrying out specific elements of the FIAR Plan;
- develop mechanisms to track budgets and expenditures for implementation of the FIAR requirements; and
- develop a mechanism to conduct audits of the military intelligence programs and agencies and submit the audited financial statements to Congress in a classified manner.

We are encouraged by continuing congressional oversight of DOD’s business transformation and financial management improvement efforts and the commitment of DOD’s leaders to implementing sustained improvements in the department’s ability to produce reliable, useful, and timely information for decision making and reporting. We will continue to monitor DOD’s progress in addressing its financial management weaknesses and transforming its business operations. As part of this effort, we are also monitoring DOD’s specific actions to achieve financial statement auditability for its components. Federal entities are unable to adequately account for and reconcile intragovernmental activity and balances. For both fiscal years 2009 and 2008, amounts reported by federal entity trading partners for certain intragovernmental accounts were not in agreement by significant amounts. 
Although OMB and Treasury require the CFOs of 35 federal entities to reconcile, on a quarterly basis, selected intragovernmental activity and balances with their trading partners, a substantial number of the entities did not adequately perform those reconciliations for fiscal years 2009 and 2008. In addition, these entities are required to report to Treasury, the entity’s inspector general, and GAO on the extent and results of intragovernmental activity and balance-reconciliation efforts as of the end of the fiscal year. A significant number of CFOs were unable to adequately explain or support the material differences with their trading partners. Many cited differing accounting methodologies, accounting errors, and timing differences for their material differences with their trading partners. Some CFOs simply indicated that they were unable to explain the differences with their trading partners with no indication as to when the differences would be resolved. As a result of these circumstances, the federal government’s ability to determine the impact of these differences on the amounts reported in the accrual-based consolidated financial statements is significantly impaired. GAO has identified and reported on numerous intragovernmental activities and balances issues and has made several recommendations to Treasury and OMB to address those issues. Treasury and OMB have generally taken or plan to take actions to address these recommendations. Treasury continues to take steps to help resolve material differences in intragovernmental activity and balances. For example, beginning in the third quarter of 2009, Treasury required entities to perform additional reconciliations related to certain intragovernmental appropriations and transfer activity. 
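The reconciliation problem described above can be pictured as pairing each entity’s recorded activity with its trading partner’s record and flagging material differences for explanation. The sketch below is a simplified illustration only, with hypothetical entities, amounts, and materiality threshold; it is not the actual Treasury or OMB reconciliation process.

```python
# Simplified illustration of trading-partner reconciliation: each pair of
# federal entities should report the same amount for the same intragovernmental
# activity; differences above a materiality threshold must be explained or
# resolved. All names and figures below are hypothetical.

# (entity, trading partner) -> (amount per the entity's records,
#                               amount per the partner's records), in millions.
reported_balances = {
    ("Entity A", "Entity B"): (250.0, 245.0),
    ("Entity A", "Entity C"): (100.0, 100.0),
    ("Entity B", "Entity C"): (75.0, 90.0),
}

MATERIALITY_THRESHOLD = 10.0  # millions; hypothetical cutoff

def unexplained_differences(balances, threshold):
    """Return trading-partner pairs whose amounts differ by more than the threshold."""
    flagged = {}
    for pair, (own_amount, partner_amount) in balances.items():
        difference = abs(own_amount - partner_amount)
        if difference > threshold:
            flagged[pair] = difference
    return flagged

flagged = unexplained_differences(reported_balances, MATERIALITY_THRESHOLD)
# Only ("Entity B", "Entity C") is flagged: a $15 million difference exceeds the cutoff.
```

In practice the hard part is not detecting the differences but explaining them, since differing accounting methodologies, errors, and timing differences all produce mismatches that must be traced to their source.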
Resolving the intragovernmental transactions problem remains a difficult challenge and will require a strong commitment by federal entities to fully implement guidance regarding business rules for intragovernmental transactions issued by OMB and Treasury as well as continued strong leadership by OMB and Treasury. While further progress was demonstrated in fiscal year 2009, the federal government continued to have inadequate systems, controls, and procedures to ensure that the consolidated financial statements are consistent with the underlying audited entity financial statements, properly balanced, and in conformity with GAAP. For example, Treasury’s process did not ensure that the information in the Statement of Operations and Changes in Net Position, Reconciliations of Net Operating Cost and Unified Budget Deficit, and Statements of Changes in Cash Balance from Unified Budget and Other Activities was fully consistent with the underlying information in federal entities’ audited financial statements and other financial data. To make the fiscal years 2009 and 2008 consolidated financial statements balance, Treasury recorded net increases of $17.4 billion and $29.8 billion, respectively, to net operating cost on the Statement of Operations and Changes in Net Position, which it labeled “Unmatched transactions and balances.” An additional net $8 billion and $11 billion of unmatched transactions were recorded in the Statement of Net Cost for fiscal years 2009 and 2008, respectively. Treasury is unable to fully identify and quantify all components of these unreconciled activities. Treasury’s reporting of certain financial information required by GAAP continues to be impaired. 
Due to certain material weaknesses noted in our audit report—for example, commitments and contingencies related to treaties and other international agreements—Treasury is precluded from determining if additional disclosure is required by GAAP in the consolidated financial statements, and we are precluded from determining whether the omitted information is material. Further, Treasury’s ability to report information in accordance with GAAP will also remain impaired until federal entities, such as DOD, can provide Treasury with complete and reliable information required to be reported in the consolidated financial statements. A detailed discussion of additional control deficiencies regarding the process for preparing the consolidated financial statements can be found on pages 226 through 229 of the Financial Report. During fiscal year 2009, Treasury, in coordination with OMB, continued implementing corrective action plans and made progress in addressing certain internal control deficiencies we have previously reported regarding the process for preparing the consolidated financial statements. Resolving some of these internal control deficiencies will be a difficult challenge and will require a strong commitment from Treasury and OMB as they continue to execute and implement their corrective action plans. While not as significant as the major impediments noted above, financial management problems at the Department of Homeland Security (DHS), the National Aeronautics and Space Administration (NASA), and the Department of State (State) also contributed to the disclaimer of opinion on the federal government’s accrual-based consolidated financial statements for fiscal year 2009. About $48 billion, or about 2 percent, of the federal government’s reported total assets as of September 30, 2009, and approximately $101 billion, or about 3 percent, of the federal government’s reported net cost for fiscal year 2009 relate to these three agencies. 
According to auditors for DHS, NASA, and State, these agencies continue to have reported material weaknesses in internal control. While the auditors for DHS and NASA noted certain progress in financial reporting, each of the three agency auditors also reported that they were unable to provide opinions on the financial statements because they were not able to obtain sufficient evidential support for amounts presented in certain financial statements. For example, only selected DHS financial statements were subjected to audit, and the auditors stated that DHS was unable to provide sufficient evidence to support certain financial statement balances at the Coast Guard and Transportation Security Administration; auditors for NASA identified issues related to internal control in its property accounting, principally relating to assets capitalized in prior years; and auditors for State reported that the department was unable to provide sufficient support for the amounts presented in the fiscal year 2009 Combined Statement of Budgetary Resources and the property and equipment balance. The auditors for DHS, NASA, and State made recommendations to address control deficiencies at the agencies, and management for these agencies generally expressed commitment to resolve the deficiencies. It will be important that management at each of these agencies remain committed to addressing noted control deficiencies and improving financial reporting. The federal government reported a net operating cost of $1.3 trillion and a unified budget deficit of $1.4 trillion for fiscal year 2009, significantly higher than the amounts in fiscal year 2008. As of September 30, 2009, debt held by the public increased to 53 percent of gross domestic product (GDP). These increases are primarily the result of the effects of the recession and the costs of the federal government’s actions to stabilize the financial markets and to help promote economic recovery.
In December 2007, the United States entered what has turned out to be its deepest recession since the end of World War II. Between the fourth quarter of 2007 and the third quarter of 2009, GDP fell by about 2.8 percent. The nation’s unemployment rate rose from 4.9 percent in 2007 to 10.2 percent in October 2009, a level not seen since April 1983. Federal tax revenues automatically decline when GDP and incomes fall, and at the same time, spending on unemployment benefits and other income-support programs automatically increases. As of September 30, 2009, the federal government’s actions to stabilize the financial markets and to promote economic recovery resulted in an increase in reported federal assets of over $500 billion (e.g., Troubled Asset Relief Program (TARP) equity investments, and investments in the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac) and mortgage-backed securities guaranteed by them), which is net of about $80 billion in valuation losses. In addition, the federal government reported incurring additional significant liabilities (e.g., liquidity guarantees to Fannie Mae and Freddie Mac) and related net cost resulting from these actions. Because the valuation of these assets and liabilities is based on assumptions and estimates that are inherently subject to substantial uncertainty arising from the uniqueness of certain transactions and the likelihood of future changes in general economic, regulatory, and market conditions, actual results may be materially different from the reported amounts. In addition, the federal government’s financial condition will be further affected as its actions continue to be implemented in fiscal year 2010 and later. For example, several hundred billion dollars of the total estimated $862 billion cost under the American Recovery and Reinvestment Act of 2009 (Recovery Act) remain to be disbursed. 
Also, continued implementation of TARP, which was extended through October 3, 2010, is likely to result in additional cost, and the Federal Housing Administration (FHA) mortgage guarantee program could result in additional cost. Consequently, the ultimate cost of the federal government’s actions and their effect on the federal government’s financial condition will not be known for some time. Further, there are risks that the federal government’s financial condition could be affected in the future by other factors, including the following: Several initiatives undertaken in 2009 by the Federal Reserve to stabilize the financial markets have led to a significant change in the reported composition and size of the Federal Reserve’s balance sheet, including the purchase of over $900 billion in mortgage-backed securities guaranteed by Fannie Mae, Freddie Mac, and the Government National Mortgage Association as of the end of 2009. If the Federal Reserve sells these securities at a loss, additional federal government deposits at the Federal Reserve may be needed, future payments of Federal Reserve earnings to the federal government may be reduced, or both. Although the Recovery Act provided some fiscal relief to the states, expected continued state fiscal challenges could place pressure on the federal government to provide further relief to them. Looking ahead, the federal government will need to determine the most expeditious manner in which to bring closure to its financial stabilization initiatives while optimizing its investment returns. In addition to managing these actions, problems in the nation’s financial sector have exposed serious weaknesses in the current U.S. financial regulatory system, which, if not effectively addressed, may cause the system to fail to prevent similar or even worse crises in the future. 
The current system, which was put into place over the past 150 years, is fragmented and complex and simply has not kept pace with the major financial structures, innovations, and products that emerged during the years leading up to the recent financial crisis. Consequently, meaningful financial regulatory reform is of utmost concern. In crafting and evaluating proposals for financial regulatory reform, it will be important for Congress and others to be mindful of the need to use a framework that facilitates a comprehensive assessment of the relative strengths and weaknesses of each proposal. GAO has previously set forth such a framework that involves nine key elements that are critically important in establishing the most effective and efficient financial regulatory system possible: (1) clearly defined regulatory goals; (2) appropriately comprehensive; (3) systemwide focus; (4) flexible and adaptive; (5) efficient and effective; (6) consistent consumer and investor protection; (7) regulator provided with independence, prominence, authority, and accountability; (8) consistent financial oversight; and (9) minimal taxpayer exposure. The economic downturn and the nature and magnitude of the actions taken to stabilize the financial markets and to promote economic recovery will continue to shape the federal government’s near-term budget and debt outlook. Actions taken to stabilize financial markets—including aid to the automotive industry—increased borrowing and added to the federal debt. The revenue decreases and spending increases enacted in the Recovery Act also added to borrowing and debt. As shown in figure 1, the President’s budget projects debt held by the public growing from 53.0 percent of GDP in fiscal year 2009 to 63.6 percent by the end of fiscal year 2010 and 68.6 percent by the end of fiscal year 2011. 
While deficits are projected to decrease as federal support for states and the financial sector winds down and the economy recovers, the increased debt and related interest costs will remain. Further, all of this takes place in the context of the current long-term fiscal outlook. The federal government faced large and growing structural deficits—and hence rising debt—before the instability in financial markets and the economic downturn. While the drivers of the long-term fiscal outlook have not changed, the sense of urgency has. As table 1 shows, many of the pressures highlighted in GAO’s simulations, including health care cost growth and the aging population, have already begun to affect the federal budget—in some cases sooner than previously estimated—and the pressures only grow in the coming decade. For example, Social Security cash surpluses have previously served to reduce the unified budget deficit; however, the Congressional Budget Office (CBO) recently estimated that due to current economic conditions the program will run small temporary cash deficits for the next 4 years and then, similar to the Trustees’ estimates, run persistent cash deficits beginning in 2016. The fluctuation and eventual disappearance of the Social Security cash surplus will put additional pressure on the rest of the federal budget. With the passage of time, the window to address this challenge narrows. The federal government is on an unsustainable long-term fiscal path driven on the spending side primarily by rising health care costs and known demographic trends. The Statement of Social Insurance, for example, shows that the present value of projected scheduled benefits exceeds earmarked revenues for social insurance programs (e.g., Social Security and Medicare) by approximately $46 trillion over the 75-year period.
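A present-value figure like the $46 trillion social insurance shortfall is, at its core, a sum of discounted future annual shortfalls. The sketch below shows only the mechanics; the 5-year horizon, cash flows, and discount rate are invented numbers and bear no relation to the actuaries' 75-year projections.

```python
# Minimal sketch of the present-value computation behind a figure like the
# $46 trillion social insurance shortfall: discount each projection year's
# (scheduled benefits - earmarked revenues) back to today and sum. The
# horizon, cash flows, and discount rate below are invented numbers.

def present_value_shortfall(benefits, revenues, discount_rate):
    """Sum of discounted annual shortfalls over the projection horizon."""
    total = 0.0
    for year, (b, r) in enumerate(zip(benefits, revenues), start=1):
        total += (b - r) / (1 + discount_rate) ** year
    return total

benefits = [900, 950, 1_000, 1_060, 1_120]  # $B per year, illustrative
revenues = [880, 900, 920, 940, 960]        # $B per year, illustrative
pv = present_value_shortfall(benefits, revenues, discount_rate=0.03)
print(f"Present value of the shortfall: ${pv:,.1f}B")
```

Because later years are discounted more heavily, the same nominal shortfall contributes less the further out it occurs, which is why present-value summaries are sensitive to the assumed discount rate.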
Since GAO’s long-term fiscal simulations include projections of revenue and expenditures for all federal programs, they present a comprehensive analysis of the sustainability of the federal government’s long-term fiscal outlook. Figures 2, 3, and 4 show the results of our most recent long-term fiscal simulations that were issued in March 2010. Absent a change in policy, federal debt held by the public as a share of GDP could exceed the historical high reached in the aftermath of World War II by 2020 (see fig. 2), 10 years sooner than our simulation showed just 2 years ago. As a result, the administration and Congress will need to apply the same level of intensity to the nation’s long-term fiscal challenge as they have to the recent economic and financial market issues. Although the economy is still fragile, there is wide agreement on the need to begin to change the long-term fiscal path as soon as possible without slowing the recovery because the magnitude of the changes required grows with time. Congress recently enacted a return to statutory PAYGO—a budgetary control requiring that the aggregate impact of increases in mandatory spending or reductions in revenue generally be offset. Although this can prevent further deterioration of the fiscal position, it does not deal with the existing imbalance. In February, the President established a commission to identify policies to change the fiscal path and stabilize the debt-to-GDP ratio. One quantitative measure of the long-term fiscal challenge is called the “fiscal gap.” The fiscal gap is the amount of spending reductions or tax increases, over a certain time period such as 75 years, that would be needed to keep debt as a share of GDP at or below today’s ratio. Another way to say this is that the fiscal gap is the amount of change needed to prevent the kind of debt explosion implicit in figure 2. The fiscal gap can be expressed as a share of the economy or in present value dollars.
Under GAO’s Alternative simulation, closing the fiscal gap would require spending cuts or tax increases, or some combination of the two, averaging 9.0 percent of the entire economy over the next 75 years, or about $76.4 trillion in present value terms. To put this in perspective, closing the gap solely through revenue increases would require annual increases in federal tax revenues of about 50 percent on average, or to do it solely through spending reductions would require annual reductions in federal program spending (i.e., in all spending except for interest on the debt held by the public, which cannot be directly controlled) of about 34 percent on average over the entire 75-year period. Policymakers could phase in policy changes so that tax increases or spending cuts or both would grow over time, allowing time for the economy to recover and for people to adjust to the changes. However, the longer action to deal with the long-term outlook is delayed, the greater the risk that the eventual changes will be disruptive and destabilizing. Comprehensive long-term fiscal projections will be required in the federal government’s financial statements beginning in fiscal year 2010, under a new accounting standard. Such reporting will include information about the long-term fiscal condition of the federal government and annual changes therein, and will expand upon the information currently provided in the Management’s Discussion and Analysis section of the Financial Report. It is not only the federal government that faces a long-term fiscal challenge. Figure 4 shows the federal and combined federal, state, and local surpluses and deficits as a share of GDP from our most recent simulation results.
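The fiscal gap concept can be illustrated with a toy debt-path simulation: search for the constant annual adjustment, as a share of GDP, that keeps the terminal debt-to-GDP ratio no higher than today's. The interest rate, growth rate, and primary deficit below are invented round numbers, not GAO's Alternative simulation assumptions.

```python
# Toy illustration of the fiscal gap: search for the constant annual
# adjustment (spending cuts and/or tax increases, as a share of GDP) that
# keeps the debt-to-GDP ratio at the end of the horizon no higher than
# today's. All parameters are invented round numbers, not GAO's assumptions.

def end_debt_ratio(adjustment, years=75, debt_ratio=0.53,
                   primary_deficit=0.04, interest=0.05, growth=0.04):
    """Debt/GDP after `years`, applying `adjustment` (share of GDP) annually."""
    d = debt_ratio
    for _ in range(years):
        # Debt grows with interest, shrinks relative to a growing GDP, and
        # absorbs the primary deficit net of the policy adjustment.
        d = d * (1 + interest) / (1 + growth) + primary_deficit - adjustment
    return d

def fiscal_gap(target_ratio=0.53, lo=0.0, hi=0.20, tol=1e-6):
    """Bisect on the adjustment until the terminal debt ratio meets the target."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if end_debt_ratio(mid) > target_ratio:
            lo = mid  # not enough adjustment; terminal debt still too high
        else:
            hi = mid
    return hi

gap = fiscal_gap()
print(f"Required annual adjustment: {gap:.1%} of GDP")
```

Under these invented parameters the required adjustment works out to roughly 4.5 percent of GDP per year; GAO's actual simulations, built on far richer assumptions, yield the 9.0 percent figure cited above.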
In closing, even though progress has been made in improving federal financial management activities and practices, much work remains given the federal government’s near- and long-term fiscal challenges and the need for Congress, the administration, and federal managers to have reliable, useful, and timely financial and performance information to effectively meet these challenges. The need for such information and transparency in financial reporting is clearly evident. The recession and the federal government’s unprecedented actions intended to stabilize the financial markets and to promote economic recovery have significantly affected the federal government’s financial condition, especially with regard to certain of its investments and increases in its liabilities and net operating cost. Importantly, while such increases are reported in the U.S. government’s consolidated financial statements for fiscal year 2009, the valuation of certain assets and liabilities is based on assumptions and estimates that are inherently subject to substantial uncertainty arising from the uniqueness of certain transactions and the likelihood of future changes in general economic, regulatory, and market conditions. Going forward, a great amount of attention will need to be devoted to ensuring (1) that sufficient internal controls and transparency are established and maintained for all financial stabilization and economic recovery initiatives; and (2) that all related financial transactions are reported on time, accurately, and completely. Further, sound decisions on the current and future direction of all vital federal government programs and policies are more difficult without reliable, useful, and timely financial and performance information. In this regard, for DOD, the challenges are many. We are encouraged by DOD’s efforts toward addressing its long-standing financial management weaknesses and its efforts to achieve auditability.
Consistent and diligent top management oversight toward achieving financial management capabilities, including audit readiness, will be needed. Moreover, the civilian CFO Act agencies must continue to strive toward routinely producing not only annual financial statements that can pass the scrutiny of a financial audit, but also quarterly financial statements and other meaningful financial and performance data to help guide decision makers on a day-to-day basis. Federal entities need to improve the government’s financial management systems to achieve this goal. Furthermore, of utmost concern are the federal government’s long-term fiscal challenges that result from large and growing structural deficits that are driven on the spending side primarily by rising health care costs and known demographic trends. This unsustainable path must be addressed soon by policymakers. Finally, I want to emphasize the value of sustained congressional interest in these issues, as demonstrated by this Subcommittee’s leadership. It will be key that, going forward, the appropriations, budget, authorizing, and oversight committees hold the top leadership of federal entities accountable for resolving the remaining problems and that they support improvement efforts. Madam Chairwoman and Ranking Member Bilbray, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time. For further information regarding this testimony, please contact Jeanette M. Franzel, Managing Director, and Gary T. Engel, Director, Financial Management and Assurance, at (202) 512-2600, as well as Susan J. Irving, Director, Federal Budget Analysis, Strategic Issues, at (202) 512-6806. Key contributions to this testimony were also made by staff on the Consolidated Financial Statement audit team.

GAO annually audits the consolidated financial statements of the U.S. government (CFS).
Congress and the President need reliable, useful, and timely financial and performance information to make sound decisions and conduct effective oversight of federal government programs and policies. The federal government began preparing the CFS 13 years ago. Over the years, certain material weaknesses in internal control over financial reporting have prevented GAO from expressing an opinion on the accrual-based consolidated financial statements. Unless these weaknesses are adequately addressed, they will, among other things, continue to (1) hamper the federal government's ability to reliably report a significant portion of its assets, liabilities, costs, and other related information; and (2) affect the federal government's ability to reliably measure the full cost as well as the financial and nonfinancial performance of certain programs and activities. This testimony presents the results of GAO's audit of the CFS for fiscal year 2009 and discusses certain of the federal government's significant near- and long-term fiscal challenges. For the third consecutive year, GAO rendered an unqualified opinion on the Statement of Social Insurance (SOSI). Given the importance of social insurance programs like Medicare and Social Security to the federal government's long-term fiscal outlook, the SOSI is critical to understanding the federal government's financial condition and fiscal sustainability. Three major impediments continued to prevent GAO from rendering an opinion on the federal government's consolidated financial statements other than the SOSI: (1) serious financial management problems at the Department of Defense, (2) federal entities' inability to adequately account for and reconcile intragovernmental activity and balances, and (3) an ineffective process for preparing the consolidated financial statements. 
In addition to the material weaknesses underlying these major impediments, GAO noted material weaknesses involving improper payments estimated to be at least $98 billion for fiscal year 2009, information security, and tax collection activities. The recession and the federal government's unprecedented actions intended to stabilize the financial markets and to promote economic recovery have significantly affected the federal government's financial condition. The resulting substantial investments and increases in liabilities, net operating cost, the unified budget deficit, and debt held by the public are reported in the U.S. government's consolidated financial statements for fiscal year 2009. The ultimate cost of these actions and their impact on the federal government's financial condition will not be known for some time in part because the valuation of these assets and liabilities is based on assumptions and estimates that are inherently uncertain. Looking ahead, the federal government will need to determine the most expeditious manner in which to bring closure to its financial stabilization initiatives while optimizing its investment returns. In addition, problems in the nation's financial sector have exposed serious weaknesses in the current U.S. financial regulatory system. If those weaknesses are not adequately addressed, we could see similar or even worse crises in the future. Consequently, meaningful financial regulatory reform is of utmost concern. The federal government faces a long-term fiscal challenge resulting from large and growing structural deficits that are driven on the spending side primarily by rising health care costs and known demographic trends. GAO prepares long-term fiscal simulations that include projections of revenue and expenditures for all federal programs. As a result, these simulations present a comprehensive analysis of the sustainability of the federal government's long-term fiscal outlook. 
Many of the pressures highlighted in GAO's simulations, including health care cost growth and the aging population, have already begun to affect the federal budget--in some cases sooner than previously estimated--and the pressures only grow in the coming decade. For example, Social Security cash surpluses have previously served to reduce the unified budget deficit; however, the Congressional Budget Office recently estimated that due to current economic conditions the program will run small temporary cash deficits for the next 4 years and then, similar to the Trustees' estimates, run persistent cash deficits beginning in 2016. The fluctuation and eventual disappearance of the Social Security cash surplus will put additional pressure on the rest of the federal budget.
Retirement income in the United States includes Social Security benefits, asset income, pension benefits, and earnings. Over the last 40 years, receipt of Social Security has become almost universal while receipt of asset income has increased modestly, receipt of private pensions has tripled, and receipt of government pensions has increased by 50 percent. However, a smaller proportion of aged households received earnings in 2000 than in 1962. (See fig. 1.) All of these components of retirement income have been affected by the major regulatory, labor market, and demographic changes that have taken place in the last 40 years. Legislative changes have expanded the pension and personal saving options available to workers. The Employee Retirement Income Security Act (ERISA) of 1974 provided certain minimum standards and broad new protections of employee benefit plans, including provisions for individual retirement accounts (IRA). Subsequent legislation revised some provisions of ERISA, further expanding the possibilities for workers to have access to pension income in retirement, and established new types of employer-sponsored pension plans, such as 401(k) plans. Legislative changes have also focused on the financing problems of Social Security. In the late 1970s and early 1980s, legislative action regarding Social Security attempted to solve this financing problem by raising taxes, curtailing future benefits, raising the retirement age, and trying to increase work incentives. However, the financing of future Social Security benefits is still an issue, and further action will need to be taken to increase the program’s revenues, decrease its expenditures, or both. The labor market conditions facing young workers today differ significantly from those facing earlier generations of workers. Changes in earnings, women’s labor force participation, and pension coverage over the last 40 years have altered the context within which workers save for retirement.
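The earnings trends discussed next are stated in inflation-adjusted ("real") terms. A minimal sketch of that adjustment, dividing nominal values by a price index and rescaling to a base year, follows; the index levels and weekly earnings are illustrative stand-ins, not actual CPI or earnings data.

```python
# Simple sketch of the inflation adjustment behind "real" earnings: divide
# nominal earnings by a price index and rescale to a base year. The index
# levels and weekly earnings below are illustrative stand-ins, not actual
# CPI or earnings data.

price_index = {1970: 38.8, 1980: 82.4, 1990: 130.7}               # illustrative
nominal_weekly_earnings = {1970: 120.0, 1980: 235.0, 1990: 345.0}  # illustrative

def to_real(nominal, index, base_year):
    """Express each year's nominal value in base-year dollars."""
    base = index[base_year]
    return {yr: amt * base / index[yr] for yr, amt in nominal.items()}

real = to_real(nominal_weekly_earnings, price_index, base_year=1990)
for yr in sorted(real):
    print(f"{yr}: ${real[yr]:,.2f} per week (1990 dollars)")
```

In this invented series, nominal earnings nearly triple while real earnings fall, the pattern described below for production and nonsupervisory workers.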
Real earnings increased throughout the 1960s, slowed considerably in the 1970s, remained relatively stagnant during the 1980s and much of the 1990s, and may have started to rise in the late 1990s. For some groups of workers, such as production or nonsupervisory workers, average weekly earnings adjusted for inflation declined over most of the time period following the early 1970s. (See fig. 2.) For young workers facing stagnant or declining real earnings, saving for retirement might have become more difficult than it was for those who entered the labor market when real earnings were growing. In addition, over the last 40 years, more women have entered the labor force. They entered regardless of their marital status—the labor force participation rates of married women, for example, increased from 32 percent in 1960 to 61 percent in 1999. (See fig. 3.) This means a larger share of women in younger cohorts is working and likely to qualify for Social Security and pensions based on their own earnings. This also means an increase in the share of married couple households that have two earners, which could increase the potential for household retirement saving. The composition of pension coverage also changed during this period. The estimated share of private wage and salary workers participating in a defined benefit (DB) plan as their primary pension plan declined from 39 percent in 1975 to 21 percent in 1997, while the share participating in a defined contribution (DC) plan as their primary pension plan increased from 6 percent to 25 percent. (See fig. 4.) The decline in DB pension plan coverage and the increase in DC pension plan coverage over the past 3 decades means that more of the responsibility for retirement saving has shifted to individual workers from employers. Demographic changes over the last 40 years have also altered the circumstances of workers as they save for retirement. Educational attainment, for example, has increased over time.
In 1960, only about 8 percent of the population 25 years of age and older had a college degree. By 1999, 25 percent of the population 25 years or older were college graduates. (See fig. 5.) The increase in educational attainment over time could facilitate increased saving among those younger workers who attain higher education. The composition of households has also changed over this period, with the share of households headed by a married couple decreasing. In 1960, 74 percent of all households were married couple families. By 1999, this had fallen to 53 percent. At the same time, the percentage of one-person households increased from 13 percent to 26 percent of all households. (See fig. 6.) Median incomes are typically lower for families headed by a single female or for single-person households. In addition, life expectancy has increased across the generations. The greater life expectancy of the younger generations could mean that the retirement income of the Baby Boom and Generation X would need to support a larger number of years. The retirement security of today’s workers will also be affected by changes in the cost and provision of health care. Over the last 40 years, the provision of health benefits has become more expensive for employers as generous benefits have combined with higher utilization rates, a growing elderly population, and a rapidly increasing cost of service. In response to these increased costs, many employers have begun to limit the health benefits provided, either by terminating their plans, restricting benefits, or reducing their share of the premium. As a result, future retirees are likely to pay more of the costs of their health care. Consequently, today’s workers might have to work longer, save more, or both, to ensure sufficient access to health benefits. In addition to paying more for privately sponsored health benefits, today’s workers might also pay more in retirement for Medicare.
Medicare costs are continuing to rise, with the result that either benefits will have to be reduced or monthly premiums will have to be increased. Given all these demographic changes, as well as regulatory and economic changes, analysis of retirement income is increasingly dependent on good estimates, which in turn require adequate data. In a recent report on needed improvements in retirement income data, we identified improvements that experts say are a priority for the study of retirement income. In particular, experts cited data from employers on employee benefits, as well as linkages between individual and household surveys and administrative data, as being helpful for estimating future retirement income. Baby Boom and Generation X households headed by individuals aged 25 to 34 have greater accumulated assets, adjusted for inflation, than current retirees had when they were the same age, but they also have more debt. The large increase in assets between current retirees—the Pre-Baby Boom generation—and the Baby Boom is due mainly to increases in home equity and increases in the rate of home ownership. The modest increase in assets between the Baby Boom and Generation X can be accounted for in large part by the increase in the ownership and value of DC retirement accounts, because Survey of Consumer Finances (SCF) data do not reflect the value of benefits from DB pension plans. While the percentage of households with debt has changed very little across the generations, real total debt levels have more than doubled between current retirees and Generation X workers. Yet, for most young Baby Boom and Generation X households, assets exceed debts, and the net worth of these households with positive net worth is 60 percent greater than that of current retirees at similar ages. However, particularly for Generation X, greater life expectancy may require more assets to cover more years in retirement, and greater assets may also be required to support higher standards of living.
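The net worth comparisons above reduce to simple arithmetic: net worth is total assets minus total debts, summarized by the median within each cohort at ages 25 to 34. The household (assets, debts) pairs below are invented for illustration and are not SCF data.

```python
# Toy illustration of the net worth comparison: net worth is total assets
# minus total debts, summarized here by the median within each cohort.
# The household (assets, debts) pairs are invented, not SCF data.

from statistics import median

households = {
    "Pre-Baby Boom": [(80_000, 20_000), (15_000, 5_000), (60_000, 30_000)],
    "Generation X":  [(95_000, 55_000), (40_000, 10_000), (120_000, 60_000)],
}

def median_net_worth(cohort):
    """Median of (assets - debts) across a cohort's households."""
    return median(assets - debts for assets, debts in cohort)

for name, cohort in households.items():
    print(f"{name}: median net worth ${median_net_worth(cohort):,}")
```

Note that a cohort can hold both more assets and more debt than another, as in this toy data, and still come out ahead on median net worth.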
Within each generation, the distribution of net worth across households is affected by economic and demographic characteristics. Specifically, those who do not own their own home, are less educated, or are single have lower net worth. For households headed by a 25- to 34-year-old, both the median value of total assets (in 1998 dollars) and the percentage of households with assets increased across the generations. (See fig. 7.) The median value of total assets for the Baby Boom and Generation X is more than 50 percent greater than that for the Pre-Baby Boom generation. While our analysis indicates that asset levels increase across the generations, it does not take into account the expectation of rising standards of living. Generation X, for example, could have greater assets than those of previous generations and still feel that these assets are insufficient for the lifestyle they want or expect. For households headed by a 25- to 34-year-old, the increase in assets across the generations can be attributed mainly to housing and DC retirement accounts. (See fig. 7.) As we have noted, our measure of assets does not include the value of benefits from DB pension plans and, to the extent that a larger percentage of the Pre-Baby Boom and the Baby Boom than of Generation X is covered by DB plans, it will understate the true value of assets for the two older generations relative to Generation X. The large increase in total asset accumulation between the Pre-Baby Boom and the Baby Boom is largely due to increases in home equity and in the rate of home ownership. The median value of housing assets increased from $72,890 for the Pre-Baby Boom to $78,583 for the Baby Boom, while the percentage of households owning their own home increased from 39 to 45 percent. The modest increase in total asset accumulation between the Baby Boom and Generation X can be accounted for in large part by the increase in the ownership and value of retirement accounts.
The median value of DC retirement accounts increased from $2,947 for the Baby Boom to $8,003 for Generation X, while the percentage of households with retirement accounts increased from 20 percent to 46 percent. The increased percentage of households with retirement accounts reflects changes in the types of pension plans offered by employers. Between 1983 and 1997, the percentage of workers covered by primary DC pension plans, under which the worker has a retirement account, increased from 11 percent to 25 percent, while the percentage of workers covered by DB pension plans declined from 35 percent to 21 percent. Financial and nonfinancial assets contribute only modestly to the increase in total assets across the generations. (See fig. 8.) Financial assets include savings accounts, mutual funds, and stocks and bonds, while nonfinancial assets include vehicles, business interests, and nonresidential real estate. The median value of financial assets ranges from less than $2,000 for the Pre-Baby Boom generation to $4,000 for the Baby Boom. A greater percentage of households in the younger cohorts have financial assets than was the case for current retirees. The median value of nonfinancial assets is greater than that of financial assets in each of the generations and has increased across the cohorts. While the ownership of nonfinancial assets increased for the Baby Boom relative to current retirees, it decreased for Generation X relative to both the Baby Boom and current retirees. The degree to which the younger cohorts will be able to add to the assets that we observe when they are ages 25 to 34 will be affected by a number of demographic and economic factors. Individuals have control over some of these factors. For example, they can determine how much education they receive, how long they work, whether both spouses in a couple work, how much they save while they are working, and whether they stay married or get divorced.
On the other hand, individuals have no direct control over the rate of growth of real wages, the performance of the overall economy, the rate of return on financial assets, changes in housing prices, shifts in pension coverage and the generosity of benefits, the state of the health care system, changes in life expectancy, or the resolution of the funding shortfall for Social Security and Medicare. One way to resolve the funding shortfall for both Social Security and Medicare is to increase the payroll tax that employees and employers pay. An increase in the payroll tax, of course, reduces the amount of an individual’s disposable income available both to consume and to save. On the other hand, if individuals expected Social Security benefits to be reduced, they might increase their personal saving in order to offset this reduction in benefits. Likewise, increases in life expectancy may also require increased saving in order to provide for a greater number of years in retirement or might induce people to work longer. For households headed by a 25- to 34-year-old, overall debt levels increase across the generations. (See fig. 9.) The median level of debt for the Baby Boom is 38 percent greater than that for the Pre-Baby Boom generation, while Generation X’s median level of debt is 146 percent greater than that of the Pre-Baby Boom generation and 78 percent greater than that of the Baby Boom. The percentage of households with debt changed very little, however, remaining at roughly 83-84 percent across the generations. Thus, those households that go into debt are going into debt more deeply with each new generation. The increase in debt levels between the Baby Boom and Generation X was due largely to increases in housing debt. The median value of housing debt increased between the Baby Boom and Generation X by 61 percent. The percentage of households with housing debt changed very little between these two generations, however, remaining at roughly 40 percent.
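The percent-greater debt comparisons above are mutually consistent, as a quick arithmetic check shows. The sketch below normalizes the Pre-Baby Boom median to 1.0 for illustration; the actual dollar medians are not reproduced here.

```python
# Consistency check for the "percent greater" debt comparisons in the text.
# The Pre-Baby Boom median is normalized to 1.0 (illustrative, not actual dollars).

def percent_greater(a, b):
    """How much greater a is than b, in percent."""
    return (a / b - 1) * 100

pre_boom = 1.00                # Pre-Baby Boom median debt (normalized)
baby_boom = pre_boom * 1.38    # 38 percent greater than the Pre-Baby Boom
gen_x = baby_boom * 1.78       # 78 percent greater than the Baby Boom

# Gen X relative to the Pre-Baby Boom: 1.38 * 1.78 - 1 = 1.456, i.e., ~146 percent.
print(round(percent_greater(gen_x, pre_boom)))  # -> 146
```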
The amount of debt carried by a household will affect the value of its net worth. For households headed by a 25- to 34-year-old, the percentage of households with positive net worth and the median value of positive net worth increased between the Pre-Baby Boom and Generation X; however, the median value of negative net worth is also much higher for Generation X. (See fig. 10.) The median value of net worth for households with positive net worth increased by 60 percent between the Pre-Baby Boom and the two younger generations. The percentage of households with negative net worth is smaller for the two younger generations than for current retirees when they were young. However, the median value of net worth for households with negative net worth is about four times larger for Generation X than for the Baby Boom or the Pre-Baby Boom. The younger generations in general have experienced an increase in net worth relative to current retirees at the same age, with the Baby Boom having a median net worth three times that of the older generation and Generation X having a median net worth two and a half times that of current retirees. However, some groups within these cohorts have not benefited as much as others. (See table 1.) For example, the median net worth for Baby Boom and Generation X homeowners is between $17,000 and $35,000 greater than that for Pre-Baby Boom homeowners; for nonhomeowners, net worth between the older and younger cohorts differs by only $2,300 to $3,700. Median net worth has increased across the cohorts for all education levels, but much less so for those without a high school degree. Both single-headed households and households headed by a married couple have seen increases in net worth; however, the increases have been much smaller for single-headed households. These trends have increased the disparity in net worth within the younger generations compared to the Pre-Baby Boom.
Another measure of the well-being of different generations is the ratio of net worth, or wealth, to income. Median ratios of wealth to income for households headed by a 25- to 34-year old are presented in table 2. The Baby Boom and Generation X have higher wealth-to-income ratios than current retirees had at similar ages. This suggests that households in the younger generations have been able to accumulate more wealth than was the case for current retirees. The ratios also reflect the differences across demographic groups within generations. Within each generation, ratios of wealth to income are higher for the well-educated, the married, and homeowners. In our simulations, Generation X and the Baby Boom have similar levels of retirement income in real terms (adjusted for inflation). Social Security benefit levels for Generation X and the Baby Boom will depend on how the Social Security funding shortfall is resolved. The shift to greater DC pension coverage does not have much effect on the pension income of Generation X relative to the Baby Boom. However, replacement rates for Generation X are estimated to be lower than for the Baby Boom under each scenario we considered, suggesting retirement income for Generation X may not keep up with the rising standard of living, absent increases in other sources of retirement income, or increases in rates of return. Our simulations suggest that Generation X will have real retirement income that is similar or somewhat higher than the Baby Boom, depending on how the Social Security funding shortfall is resolved. If the shortfall is resolved by increasing the program’s revenues to maintain scheduled benefits, then Generation X is estimated to have somewhat higher real retirement income at age 62 than the Baby Boom generation. (See table 3.) Because our simulations assume that real earnings increase over time, Generation X would have higher Social Security benefits than the Baby Boom. 
However, if the shortfall is resolved through gradual benefit reductions over time, then Generation X is estimated to have real retirement income levels at age 62 that are more similar to those of the Baby Boom. (See table 4.) Because the benefit reductions increase over time, they would have more impact on Generation X than on the Baby Boom, leading to slightly lower Social Security benefits for Generation X relative to the Baby Boom. Changes to the Social Security system could also affect other forms of retirement income, especially those not considered here. If program revenues were increased by raising Social Security payroll taxes, then individuals would have less disposable income to save for retirement. This could take the form of decreases in personal saving or lower contributions to DC pension plans. Instead, if general revenues were used, the funding of other programs could be affected, which could lower some individuals’ income from other income support programs, such as Supplemental Security Income (SSI). The timing and implementation of the changes to the Social Security system are also relevant since action taken later rather than sooner would necessitate larger tax increases or benefit reductions and the impact on Generation X could be even greater. Generation X and the Baby Boom are estimated to have similar levels of pension income when our simulations assume that the rate of DB and DC pension coverage is constant over time. (See table 4.) DC account balances are annuitized at retirement to facilitate comparisons. While Generation X’s simulated higher earnings might have suggested higher pension income as well, they may have been too young to completely benefit from the strong stock market of the 1990s. The assumption that the rate of pension coverage is constant over time has not been the experience of private pensions in the United States over the last 25 years. DB coverage has declined, and DC coverage has increased. 
Generation X and the Baby Boom are estimated to have similar levels of pension income even when our simulations assume Generation X only has access to DC pension plans. (See table 5.) While assuming that all pension coverage will shift to DC plans represents the extreme case, it does provide a bound to our estimates. These simulations provide some insight into the impact that the continuing shift from DB to DC pension coverage might have on retirement income for Generation X, since the final outcome of this shift is uncertain. In our simulations, Generation X has a lower earnings replacement rate than the Baby Boom (see table 6) even though the Baby Boom and Generation X are estimated to have similar levels of retirement income. Our assumption of increasing earnings over time leads to Generation X having a lower replacement rate. The largest difference between the cohorts, in terms of replacement rates, occurs under the Social Security benefit reduction scenario since benefit levels are falling more for Generation X while earnings are unchanged. While the shift in pension coverage raises the level of retirement income for Generation X, it does not change the replacement rate. The earnings replacement rate is an indicator of how well individuals are doing at maintaining their pre-retirement standard of living. While our estimated replacement rates do not cover all individuals in each generation or include all forms of retirement income, they still might indicate a decline in the standard of living during retirement for Generation X. However, this does not take into account that retirement income may increase because of behavioral changes or other external factors. Since Generation X is still relatively young, it is possible that some members of this cohort may change their behavior and save more or work longer. Also, variations in rates of return could be greater than expected, causing some individuals in our simulations to experience higher asset returns. 
Any of these factors could raise retirement income and, possibly, Generation X’s replacement rate. If this were to occur, the difference in replacement rates between the Baby Boom and Generation X could be smaller than we estimate. Our simulations suggest that retirement income will vary significantly within both Generation X and the Baby Boom. Retirement income will also vary by demographic group, with income being lower for the less educated and single women. Simulated retirement income will vary widely across households within both Generation X and the Baby Boom. For example, if married households in Generation X were arranged from lowest to highest in terms of their retirement incomes at age 62, the top 20 percent would receive over 40 percent of all retirement income while the bottom 20 percent would receive less than 7 percent. (See fig. 11.) The disparity between the top 20 percent and bottom 20 percent is even larger for single persons. Because retirement income is closely linked to earnings, which are known to vary significantly, this degree of variation in estimated retirement income is not surprising. When examining the sources of retirement income, simulated pension benefits are less evenly distributed than simulated Social Security benefits. Married couples in the top 20 percent in terms of pension benefits receive over 58 percent of all pension benefits while those in the bottom 20 percent receive no benefits at all, as shown for Generation X in figure 12. In comparison, married couples in the top 20 percent in terms of Social Security benefits receive about 31 percent of all Social Security benefits, while those in the bottom 20 percent receive about 10 percent. Pension benefits are less evenly distributed for at least two reasons. First, by design, the Social Security benefit formula is more generous toward low-income and disabled workers, in contrast to pensions, which tend to play a larger role in the retirement income of higher earning workers. 
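The quintile shares discussed above are computed by ranking households by retirement income, splitting them into fifths, and summing each fifth's share of total income. The sketch below illustrates the calculation with hypothetical incomes, not the simulation data.

```python
# Sketch of the quintile-share calculation described in the text:
# sort households by retirement income, split into fifths, and compute
# each fifth's share of total income. Incomes below are hypothetical.

def quintile_shares(incomes):
    """Return each quintile's share (percent) of total income, lowest first."""
    ordered = sorted(incomes)
    n = len(ordered)
    total = sum(ordered)
    shares = []
    for q in range(5):
        group = ordered[q * n // 5:(q + 1) * n // 5]
        shares.append(100 * sum(group) / total)
    return shares

incomes = [8, 12, 15, 18, 20, 24, 30, 38, 55, 90]  # hypothetical, in $000s
shares = quintile_shares(incomes)
# The shares sum to 100, and the top fifth's share exceeds the bottom fifth's.
```

With this illustrative distribution, the top quintile receives roughly 47 percent of total income and the bottom quintile about 6 percent, a degree of skew similar in spirit to the simulated results.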
Second, some workers have no pension coverage while nearly all workers are covered by Social Security. In our simulations, 20 percent of married households and 33 percent of single individuals in Generation X receive no pension benefits. The median retirement income for married households where at least one member has a pension is almost twice as large as the median for married households where neither member has a pension. (See fig. 13.) The percentage difference between those with pensions and without pensions is even larger for single persons. Simulated retirement income varies by educational attainment, marital status, and gender. Simulated retirement income is lower for those with less education, as shown for Generation X in figure 14. The median retirement income for married high school dropouts is about 43 percent less than the median for married college graduates. The percentage difference between single high school dropouts and single college graduates is even larger. The less educated have lower Social Security and pension benefits due to lower lifetime earnings and lower rates of pension coverage. In our simulations for Generation X, 66 percent of married couples without high school degrees receive pension benefits as opposed to 87 percent of married college graduates. Simulated retirement income also varies by marital status with divorced and never married individuals having lower retirement incomes than widows and married couples. (See table 7.) Median retirement incomes for never married persons and divorced persons are about 23 percent less and 32 percent less, respectively, compared to that of widows. Median household retirement incomes for never married persons and divorced persons are about 58 percent less and 63 percent less, respectively, compared to that of married couples. Retirement incomes are less for never married persons and divorced persons, even if one compares retirement income per household member. 
How widows and married couples compare in terms of retirement income depends on the measure of income used. Widows have lower median retirement income than married couples using household income as the measure, but greater median retirement income using income per household member as the measure. (See table 7.) Whether or not married couples have a higher standard of living than widows depends on how much they save by sharing their expenses. Simulated retirement income is lower for single women than for single men, as shown for Generation X in figure 15. The median retirement income for single women is about 31 percent less than the median for single men. Again this is due to lower lifetime earnings and a lower rate of pension coverage. Sixty-three percent of single women in Generation X receive pension benefits as opposed to 74 percent of single men. Variation in simulated retirement income suggests some members of both generations may be at greater risk of retiring with insufficient resources. Assessing the sufficiency of simulated retirement income is difficult because we do not simulate assets, earnings in retirement, and SSI and other public assistance programs. However, retirees who earned low earnings over their working years may not have substantial assets or earnings in retirement, and SSI provides only a very modest level of support and is restricted to the poorest of retirees. Our analysis of wealth at ages 25 to 34 and our simulations of Social Security and pension benefits at age 62 suggest that both the Baby Boom and Generation X are likely to have similar levels of retirement income in real terms, but that level may not support Generation X’s future living standards. Our analysis also indicates that across the generations, similar subgroups of the population are most vulnerable in retirement. 
The levels of retirement income that Baby Boom and Generation X workers will actually receive depend in part upon their own behavior, such as how long they work or how much they save, and in part upon factors they cannot control, such as the performance of the overall economy, the rate of return on financial investments, and changes in Social Security and health care financing. Individuals’ behavior, and future economic events, may vary significantly from the assumptions underlying our models, especially for those workers who still have many years to work before retirement. In addition, estimates of future retirement income depend on adequate data on individuals’ earnings, wealth, and pensions, not all of which are easily captured in existing data sets. Further, rising expectations about consumption, leisure, and health care in retirement (and the costs of meeting these expectations) could require higher replacement rates for Generation X than for the Baby Boom in order to maintain the standards of living they achieved while working. Government policy can potentially have an important effect on individuals’ retirement income. Policies that encourage individuals to acquire more education and training, to work longer, and to save more can help ensure higher retirement incomes in the future. Also, any reform that policymakers undertake with regard to the Social Security program or health care financing will have repercussions for the retirement income of Generation X and the younger half of the Baby Boom. Our work suggests that all these policy actions should reflect a coordinated approach to future retirement income and should be taken soon enough that the affected individuals have adequate time to adjust their work and saving behavior accordingly.
Finally, the continued vulnerability of certain segments of the population to inadequate resources at retirement suggests that successful retirement income policies would take potential impacts on these groups into consideration. We provided a draft of this report to SSA, Labor, and Treasury. All three provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to the Social Security Administration, the Department of Labor, and the Department of the Treasury. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions concerning this report, please contact Barbara Bovbjerg at (202) 512-7215. See appendix III for other contacts and staff acknowledgments. Barbara D. Bovbjerg Director, Education, Workforce and Income Security Issues. To gain an understanding of what today’s workers might expect to receive in terms of retirement income, we compared the wealth of current workers with that of current retirees, at similar points in their lives, and estimated the pension and Social Security benefits that the Baby Boom and Generation X might receive. To analyze personal wealth we used the Survey of Consumer Finances, a survey of U.S. households sponsored by the Board of Governors of the Federal Reserve System. To analyze how workers from the Baby Boom and Generation X compare in terms of the retirement income they can expect to receive and the likely distribution across workers within the Baby Boom and Generation X, we simulated expected retirement income at age 62. To analyze personal wealth, we used the Survey of Consumer Finances (SCF), a triennial survey of U.S. households sponsored by the Board of Governors of the Federal Reserve System with the cooperation of the U.S. Department of the Treasury. The SCF provides detailed information on U.S. 
households’ balance sheets and their use of financial services, as well as on their pensions, labor force participation, and demographic characteristics as of the time of the interview. The SCF also collects information on households’ total cash income, before taxes, for the calendar year preceding the survey. Because the survey is expected to provide reliable information both on assets that are fairly common—such as houses—and on assets that are owned by relatively few—such as closely held businesses—the SCF uses a sample design that includes a standard, geographically based random sample and a special oversample of relatively wealthy families. Weights are used to combine information from the two samples to make estimates for the full population. The 1962 SCF was conducted by the Census Bureau and surveyed 3,551 households. The 1983 SCF was conducted by the Survey Research Center of the University of Michigan and surveyed 3,824 households. The 1998 SCF was conducted by the National Opinion Research Center at the University of Chicago and surveyed 4,309 households. Using the SCF, we analyzed how marital status, education, and homeownership are related to the wealth of households headed by a 25- to 34-year-old. Using the 1962, 1983, and 1998 SCFs, we examined the ownership and level of household savings for current retirees (born between 1925 and 1945), the Baby Boom (born between 1946 and 1964), and Generation X (born between 1965 and 1976) when each generation was 25 to 34 years old. We selected this age group because it is the only age group for which we have data on personal wealth in each of the three generations.
Our measure of personal wealth includes tax-favored retirement saving, such as individual retirement accounts (IRA) and 401(k)s and other thrift-type plans, as well as savings that are not specifically dedicated to retirement but may enhance retirement income, such as liquid financial assets (checking accounts, savings accounts, money market deposit accounts, and money market mutual funds), other financial assets (certificates of deposit, mutual funds, stocks, and bonds), housing assets, and nonhousing assets (nonresidential real estate, business interests, and vehicles). We also looked at housing liabilities and nonhousing liabilities (credit cards, installment loans, and other debts). For each component of personal wealth, we calculated the percentage of households owning that type of wealth as well as the median value. We looked separately at assets and debt and then combined them to calculate household net worth. For studies in which the focus is on saving or net worth, the SCF is preferable to other household income surveys, such as the Panel Study of Income Dynamics (PSID) or the Survey of Income and Program Participation (SIPP). The SCF has more detailed information about wealth holdings, better distributional characteristics, less item nonresponse, and fewer imputed variables than the PSID or the SIPP. However, the SCF, like all surveys, is subject to sampling errors, reporting errors, and nonresponse errors. Sampling errors result from the fact that survey estimates are based on a sample of the population rather than on a complete census of the population. Reporting errors arise because respondents may not understand what is wanted, may not know the information requested, or may be reluctant to reveal their actual income or wealth. Nonresponse errors arise when the family selected for participation is not available to be interviewed, either because they refuse to participate or cannot be contacted.
Further, the sample sizes for the SCF are relatively small compared with surveys such as the Current Population Survey. For our analysis, we are concerned with the fact that small samples are vulnerable to bias from observations not representative of the population as a whole. For all of these reasons, our numbers should be interpreted with some caution. To analyze how workers from the Baby Boom and Generation X compare in terms of the retirement income they can expect to receive and the likely distribution across workers within the Baby Boom and Generation X, we simulated expected retirement income at age 62. Our measure of retirement income consists of pension income, Social Security benefits, and spouse’s earnings. It does not include personal savings, earnings in retirement, health benefits, or income from other income support programs (e.g., Supplemental Security Income). For our simulations, we used the Social Security and Accounts Simulator (SSASIM), Genuine Microsimulation of Social Security and Accounts (GEMINI), and Pension Simulator (PENSIM) simulation models. GEMINI estimated Social Security benefits and PENSIM estimated pension income from defined benefit and defined contribution plans for the 1955 birth cohort (Baby Boom) and the 1970 birth cohort (Generation X) and their spouses. Retirement income and its components were discounted to 2001 dollars, allowing us to make comparisons across cohorts in terms of the level of retirement income. However, these comparisons do not give an indication of standards of living in retirement. To make this comparison, we looked at the earnings replacement rate, calculated as retirement income at age 62 divided by earnings at age 61 for retired workers who worked at age 61 and whose spouse, if married, was the same age. 
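The earnings replacement rate just defined is a simple ratio of retirement income at age 62 to earnings at age 61. A minimal sketch, using hypothetical dollar amounts, follows.

```python
# Earnings replacement rate as defined in the text: retirement income at
# age 62 divided by earnings at age 61. Dollar amounts are hypothetical.

def replacement_rate(retirement_income_62, earnings_61):
    """Fraction of age-61 earnings replaced by age-62 retirement income."""
    return retirement_income_62 / earnings_61

# A worker earning $40,000 at 61 who receives $26,000 at 62 replaces 65 percent.
rate = replacement_rate(26_000, 40_000)
print(f"{rate:.0%}")  # -> 65%
```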
To examine the distribution of retirement income within both generations, we calculated the degree of variation by arranging households by retirement income and finding the proportion of that income received by each quintile. To compare groups by demographics, we calculated median retirement income by educational attainment, gender, and marital status. Due to the difference in household size, we performed most of the above calculations separately for married couples and singles—those widowed, divorced, or never married—at age 62. When examining retirement income by marital status we calculated both household income and income per household member. SSASIM is a Social Security policy simulation model developed by the Policy Simulation Group (PSG). The initial version of the model was developed under a series of contracts from the Social Security Administration as part of the 1994-96 Advisory Council on Social Security. SSASIM consists of two models, a macro model of aggregate program finances, and an embedded micro model of selected cohort individuals. In addition to current law policy, the model can simulate a variety of policy reforms, from incremental changes to broader structural reforms that would introduce individual accounts into the broader Social Security system. GEMINI is a policy microsimulation model also developed by the PSG. GEMINI is useful for analyzing the lifetime implications of Social Security policies for a large sample of people born in the same year and can simulate different reform features for their effects on the level and distribution of benefits. GEMINI uses as input birth cohort samples generated by PENSIM so as to represent the demographic and economic characteristics of historical birth cohorts. Also, GEMINI incorporates the same kind of Old Age, Survivor and Disability Insurance (OASDI) program logic as used in the micro model of SSASIM, with almost all assumption and policy parameters read from a SSASIM input database. 
GEMINI produces output files that contain detailed information about the life events and annual OASDI program experience of each individual in the cohort sample. For our report, the PSG produced the GEMINI output files using the same 1955 and 1970 birth cohorts used in PENSIM for both a scheduled and funded Social Security scenario (see following paragraphs for more details.) The PENSIM and GEMINI output files were then merged, yielding an output file containing yearly Social Security benefits, pension income, and spouse’s earnings from age 62 until death for each member of the cohort. PENSIM is a pension policy simulation model that is being developed by the PSG to analyze lifetime coverage and adequacy issues related to employer-sponsored pension plans. The development of PENSIM has been funded since 1997 by the Office of Policy and Research at the Employee Benefits Security Administration of the U.S. Department of Labor. PENSIM produces a random sample of simulated life histories for 100,000 people in a birth cohort and for their spouses who may have been born in a different year. The members of the birth cohort experience demographic and economic events, the incidence and timing of which vary by age, gender, education, disability, and employment status. The types of life events that are modeled in PENSIM include: demographic events (birth, death); schooling events (leaving school at a certain age, receiving a certain educational credential); family events (marriage, divorce, childbirth); initial job placement; job mobility events (earnings increases while on a job, duration of a job, movement to a new job, or out of the labor force); pension events (becoming eligible for plan participation, choosing to participate, becoming vested, etc.); and retirement events. For our report, we specified a DB and DC pension plan, which the PSG entered into PENSIM to be used with the 1955 and 1970 birth cohorts to simulate pension benefits for the Baby Boom and Generation X. 
These simulations were conducted under both a sunset and a no-sunset pension scenario, as well as a scenario where Generation X only had access to DC pensions (see following discussion for more details). Our simulations assume a single type of DB pension plan for all workers covered by such a plan. This plan’s structure is similar to the most common type of DB pension plan in the private sector. In terms of structure, this plan has an eligibility requirement (consisting of a minimum age of 21 and 1 year of service) and 5-year cliff vesting. The plan’s normal retirement age is 62 for workers with any years of service, and it has an early retirement option, with early retirement benefits beginning at age 55 for workers with 10 years of service. If a worker chooses to retire early, there is a linear early retirement reduction of 5 percent per year (e.g., if a worker retires at age 55, he would receive 65 percent of the normal retirement benefit). The plan pays a monthly benefit at retirement, rather than a lump sum. In terms of the calculation of benefits, the traditional DB plan calculates benefits using a final average pay formula of the form: X% × (average of Y years of earnings at the end of the career or when highest) × years of service. Surveys of DB plans in the United States indicate that, typically, the percentage credit (X%) is in the range of 1-1.75 percent. For this report we chose 1.25 percent. The most common definition of final average pay is the high consecutive 5 years of earnings. Therefore, the formula that we use to calculate DB benefits is: 1.25% × (average of the high consecutive 5 years of pay) × years of service. In our simulations of DC plans, all individuals covered by a DC pension plan are covered by the same plan. This plan’s structure is similar to the most common type of DC pension plan in the private sector. In terms of structure, this plan has an eligibility requirement (consisting of a minimum age of 21 and 1 year of service) and 5-year graded vesting.
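The DB benefit formula and early retirement reduction described above can be sketched as follows. This is a simplified illustration of the plan rules stated in the text, not code from the PENSIM model, and the function name and example figures are our own.

```python
# Sketch of the DB plan described above: 1.25% of the average of the
# high consecutive 5 years of pay, per year of service, with a linear
# 5-percent-per-year reduction for retirement before age 62 (the plan
# permits early retirement at 55 with 10 years of service).
def db_annual_benefit(high5_avg_pay, years_of_service, retirement_age):
    benefit = 0.0125 * high5_avg_pay * years_of_service
    if retirement_age < 62:
        # e.g., retirement at 55 gives 1 - 0.05*7 = 65% of the full benefit
        benefit *= 1 - 0.05 * (62 - retirement_age)
    return benefit

# A hypothetical worker with a $60,000 high-5 average and 30 years of service:
full = db_annual_benefit(60000, 30, 62)    # 0.0125 * 60,000 * 30 = 22,500
early = db_annual_benefit(60000, 30, 55)   # 65% of 22,500 = 14,625
```

The plan pays this amount as a monthly annuity rather than a lump sum, so the annual figures above would be divided by 12 in practice.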
At retirement, individuals annuitize their account balances, with married individuals purchasing a joint and one-half survivor annuity and single individuals purchasing a single life annuity. Employees can contribute up to 12 percent of their earnings, and the employer matches 50 percent of the employees’ contributions up to 5 percent. Employees can invest their contributions in their choice of equities and fixed income assets, where the fixed income assets consist of Treasury bonds and corporate bonds. Employees who leave before retirement can choose to have their account balances rolled over into another retirement account. In our simulations, rollover decisions are based on the data in table 11. Assumptions also need to be made regarding participation rates, contribution rates, and asset allocation. Tables 8-11 provide information on the assumptions used for each of these factors. Table 8 provides data on participation rates by age and salary. Data on contribution rates by age and salary are shown in table 9. Table 10 provides data on average asset allocation rates by age and investment options. Data on the distribution of assets at termination by asset levels are shown in table 11. Our simulations considered several scenarios for pension benefits. One assumed that the sunset provision in the Economic Growth and Tax Relief Reconciliation Act (EGTRRA) of 2001 holds and the other that the provisions in EGTRRA, which raise the limits on both DB and DC plans, do not sunset. We also considered the scenario where the shift in coverage reached its extreme and Generation X only had access to DC plans. Our simulations of expected Social Security benefits consider two different scenarios for resolving the funding shortfall. One scenario assumes scheduled benefits are paid while payroll taxes are increased to levels that support those benefits.
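The DC contribution structure described above can be illustrated with a short sketch. We read the match rule as the employer contributing 50 cents per dollar on the first 5 percent of pay the employee contributes; that reading, the function name, and the example figures are our own, not part of the model.

```python
# Sketch of the DC plan's contribution rules as described above,
# assuming the employer matches 50% of employee contributions on
# the first 5% of earnings, with a 12% employee contribution cap.
def annual_contributions(earnings, employee_rate):
    employee_rate = min(employee_rate, 0.12)        # 12% employee cap
    employee = employee_rate * earnings
    employer = 0.5 * min(employee_rate, 0.05) * earnings
    return employee, employer

# A hypothetical employee earning $50,000 who contributes 8% of pay:
emp, match = annual_contributions(50000, 0.08)      # $4,000 and $1,250
```

Under this reading, contributions above 5 percent of pay earn no additional match, so the maximum employer contribution is 2.5 percent of earnings.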
Our scheduled benefits scenario increases the payroll tax once and immediately by the amount of the OASDI actuarial deficit as a percent of payroll so that benefits under the current system can continue to be paid throughout the simulation period. The other scenario, the funded benefits scenario, assumes that benefits are reduced to levels supportable by current payroll tax rates. The benefit reductions used in this scenario reduce the primary insurance amount (PIA) formula factors by equal percentage point reductions (by 0.319 percentage points each year for 30 years), beginning with those newly eligible in 2005, subjecting earnings across all segments of the PIA formula to the same reduction. Simulating retirement income almost 30 years into the future requires many assumptions and simplifications and, consequently, our simulations have a number of limitations. A primary limitation of our analysis is that our simulations do not include important components of retirement income such as personal savings, earnings in retirement, health benefits, and other public assistance programs such as SSI. Including personal savings might reduce retirement income for Generation X relative to retirement income for the Baby Boom if the post-1980 decline in personal savings rates continues. Including earnings in retirement might increase Generation X’s retirement income relative to the Boomers’ income if wages increase over time or if people in the future are more likely to work in retirement. From a distributional perspective, including personal savings would probably increase the upper quintile’s share of retirement income, while including public assistance programs such as SSI would benefit the bottom of the distribution. Another component of well-being in retirement that we do not estimate is private and public health benefits.
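The funded benefits scenario's reduction of the PIA formula factors can be illustrated as follows. This sketch assumes the current-law factors of 90, 32, and 15 percent and reads the 0.319-percentage-point reduction as cumulative across eligibility cohorts, phased in over 30 years starting with those newly eligible in 2005; both the interpretation and the function name are our own illustration, not the models' actual code.

```python
# Illustrative sketch of the funded-benefits scenario: reduce each
# PIA formula factor (assumed current-law values of 90%, 32%, 15%)
# by 0.319 percentage points per year of phase-in, for 30 years,
# beginning with workers newly eligible in 2005.
def pia_factors(eligibility_year):
    base = [90.0, 32.0, 15.0]
    phase_in_years = min(max(eligibility_year - 2004, 0), 30)
    return [f - 0.319 * phase_in_years for f in base]

factors_2005 = pia_factors(2005)   # first affected cohort: one year's cut
factors_2035 = pia_factors(2035)   # full 30-year reduction applies
```

Because every factor is reduced by the same number of percentage points, the reduction is proportionally largest for the top segment of the formula, which is consistent with the text's statement that earnings across all segments of the PIA formula face the same reduction.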
Including health benefits might reduce Generation X’s standard of living in retirement relative to the Baby Boom due to falling health benefits and rising health care costs over time. An important assumption driving our results is that real wages grow over time. We assume real wages grow at 1.0 percent per year, following the 2001 Social Security Trustees Report’s intermediate assumption. If, instead, wages stagnate as in the 1980s and 1990s, then retirement income for Generation X relative to retirement income for the Baby Boom might be lower than our estimates. Another critical assumption is the relative rate of DB and DC pension coverage. Over the last 25 years, pension coverage has been shifting from DB to DC pensions. However, due to the uncertainty in predicting future relative coverage rates, our simulations either assume a constant rate of DB and DC coverage over time or only DC coverage for Generation X. The likely outcome is somewhere in between. An important omission under the scheduled Social Security benefit scenario is the impact of higher taxes or general revenue transfers on other sources of retirement income. Increased taxes or general revenue transfers will most likely be necessary to pay Social Security benefits as scheduled under current law. Tax increases might reduce saving for retirement, and general revenue transfers might reduce funding for other government retirement programs such as SSI, Medicare, or Medicaid. The impact of tax increases may be larger for Generation X than for the Baby Boom because they will pay higher taxes for more years. Another limitation is the sensitivity of estimated DC benefits to our assumptions about future rates of return. We assume individuals’ rates of return vary randomly around average rates projected by the Office of the Chief Actuary at SSA. If average rates of return in the future are significantly different, then actual DC benefits could differ substantially from our simulations.
While the model allows returns to vary stochastically by individuals, it cannot capture fluctuations in overall market rates of return. An ill-timed stock market downturn could result in either generation’s DC benefits being significantly lower than simulated. Retirement income for Generation X could be more sensitive to future rates of return than retirement income for the Baby Boom if the trend toward DC pensions continues. Another limiting assumption is that our simulations only include one kind of DB and DC plan, which clearly does not capture the full complexity of pension plans. We attempted to choose the characteristics of each to be typical of today’s pension plans. If they are not truly representative or if the characteristics of DB and DC plans change over time, then our results could be biased. In particular, the finding that the shift to DC plans has only a very modest effect on pension benefits may depend on our choice of plans. While educational attainment has been increasing over time, this is not captured by the simulations. Both generations are assumed to achieve the same level of education as 35- to 44-year-olds in the 1997 Current Population Survey. Higher levels of education for Generation X could increase their retirement income relative to the Baby Boom. From a distributional perspective, the simulations are limited in that they do not capture differences across the generations in the variation of earnings. By some measures, earnings disparity has been increasing over the last 20 years, which could potentially lead to more variation in retirement income for Generation X. The simulations assume the same cohort life expectancies as the 2001 Social Security Trustees Report’s intermediate cost projection. Marital status at age 62 is calibrated to unpublished projections from the SSA’s Office of the Chief Actuary. Assumed life expectancies may be too low, as some have argued that the Trustees underestimate future improvements in mortality rates.
Increased life expectancies would reduce DC benefits in our simulations because retirees would have to pay higher prices when annuitizing their retirement accounts. Our simulations of retirement income do not take taxation into account. Incorporating taxes would not only lower disposable income, but would also reduce variation in income because federal tax rates are progressive and because only relatively higher income households are required to pay tax on their Social Security benefits. Finally, we are only able to simulate retirement income for two illustrative birth cohorts as opposed to entire generations. The 1955 and 1970 birth cohorts may not fully capture the experiences of the Baby Boom and Generation X, respectively. For our analysis of estimated retirement income, we used two different scenarios for the changes to the pension limits under EGTRRA. One assumed that the sunset provision in EGTRRA holds and the other that the provisions, which raise the limits on both DB and DC plans, do not sunset. The following tables show estimated retirement income under the no-sunset pension scenario. Extending pension contribution limits beyond 2010 increases real retirement income and replacement rates for Generation X relative to real retirement income and replacement rates for the Baby Boom. Table 12 shows the estimated median monthly household retirement income at age 62 under a scheduled (tax increase) Social Security scenario and a constant rate of DB and DC pension coverage. Estimated median monthly household retirement income at age 62 under a funded (benefit reduction) Social Security scenario and a constant rate of DB and DC pension coverage is shown in table 13. Table 14 shows the simulated median monthly household retirement income at age 62 under a funded (benefit reduction) Social Security scenario and Generation X having only DC pension coverage.
Replacement rates for the Baby Boom and Generation X under the different Social Security and pension coverage scenarios are shown in table 15. The distribution of simulated retirement income is very similar across the generations and across scenarios. For both generations and in all scenarios, retirement income is estimated to vary widely, pension benefits are less evenly distributed than Social Security benefits, and the less educated, single women, and those without pensions have lower retirement incomes. Figures 16-20 and table 16 show the estimated distribution of retirement income for the Baby Boom assuming funded Social Security benefits, no extension of raised pension contribution limits beyond 2010, and a constant rate of DB and DC pension coverage over time. These are the same assumptions used for Generation X in figures 11-15 and table 7. We do not emphasize a comparison of the distributions across generations because our models do not capture differences across generations in the variation of earnings. By some measures, earnings disparity has been increasing over the last 20 years, which may result in retirement income varying more in Generation X than in the Baby Boom. Figures 21-25 and table 17 show the estimated distribution of retirement income for Generation X assuming funded Social Security benefits, no extension of raised pension contribution limits beyond 2010, and all pensions being DC pensions. Figures 26-30 and table 18 show the estimated distribution of retirement income for Generation X assuming funded Social Security benefits, extension of raised pension contribution limits beyond 2010, and a constant rate of DB and DC pension coverage over time. Figures 31-35 and table 19 show the estimated distribution of retirement income for Generation X assuming scheduled Social Security benefits, no extension of raised pension contribution limits beyond 2010, and a constant rate of DB and DC pension coverage over time.
In addition to those named above, the following individuals made significant contributions to this report: Michael J. Collins, Gordon Mermin, Janice Peterson, Brendan Cushing-Daniels, Barbara Alsip, and Patrick DiBattista, Education, Workforce, and Income Security Issues; Grant Mallie, Applied Research and Methods; and Marylynn Sergent, Strategic Issues. Social Security Reform: Analysis of Reform Models Developed by the President’s Commission to Strengthen Social Security. GAO-03-310. Washington, D.C.: January 15, 2003. Social Security: Analysis of Issues and Selected Reform Proposals. GAO-03-376T. Washington, D.C.: January 15, 2003. Private Pensions: Participants Need Information on the Risks of Investing in Employer Securities and the Benefits of Diversification. GAO-02-943. Washington, D.C.: September 6, 2002. Private Pensions: Improving Worker Coverage and Benefits. GAO-02-225. Washington, D.C.: April 9, 2002. Private Pensions: Key Issues to Consider Following the Enron Collapse. GAO-02-480T. Washington, D.C.: February 27, 2002. Social Security: Program’s Role in Helping Ensure Income Adequacy. GAO-02-62. Washington, D.C.: November 30, 2001. Private Pensions: Issues of Coverage and Increasing Contribution Limits for Defined Contribution Plans. GAO-01-846. Washington, D.C.: September 17, 2001. Retirement Savings: Opportunities to Improve DOL’s SAVER Act Campaign. GAO-01-634. Washington, D.C.: June 26, 2001. National Saving: Answers to Key Questions. GAO-01-591SP. Washington, D.C.: June 1, 2001. Cash Balance Plans: Implications for Retirement Income. GAO/HEHS-00-207. Washington, D.C.: September 29, 2000. Private Pensions: Implications of Conversions to Cash Balance Plans. GAO/HEHS-00-185. Washington, D.C.: September 29, 2000. Social Security Reform: Implications for Private Pensions. GAO/HEHS-00-187. Washington, D.C.: September 14, 2000. Pension Plans: Characteristics of Persons in the Labor Force Without Pension Coverage. GAO/HEHS-00-131.
Washington, D.C.: August 22, 2000. Social Security: Evaluating Reform Proposals. GAO/AIMD/HEHS-00-29. Washington, D.C.: November 4, 1999. Integrating Pensions and Social Security: Trends Since 1986 Tax Law Changes. GAO/HEHS-98-191R. Washington, D.C.: July 6, 1998. Social Security: Different Approaches for Addressing Program Solvency. GAO/HEHS-98-33. Washington, D.C.: July 22, 1998. 401(k) Pension Plans: Loan Provisions Enhance Participation But May Affect Income Security for Some. GAO/HEHS-98-5. Washington, D.C.: October 1, 1997. Retirement Income: Implications of Demographic Trends for Social Security and Pension Reform. GAO/HEHS-97-81. Washington, D.C.: July 11, 1997. | Today's workers will rely to a large extent on Social Security, private pensions, and personal wealth for their retirement income. But some analysts question whether these sources will provide sufficient retirement income to maintain workers' standards of living once they leave the labor force. Indeed, the Social Security trust funds are projected to become exhausted in 2042, at which time, unless action is taken, Social Security will not be able to pay scheduled benefits in full. To gain an understanding of what today's workers might expect to receive in terms of retirement income, GAO was asked to examine (1) how the personal wealth of Baby Boom (born between 1946 and 1964) and Generation X (born between 1965 and 1976) workers compare with what current retirees had at similar ages, (2) how workers from the Baby Boom and Generation X compare in terms of the pension and Social Security benefits they can expect to receive, and (3) the likely distribution of pension and Social Security benefits across workers within the Baby Boom and Generation X. Baby Boom and Generation X households headed by an individual aged 25 to 34 have greater accumulated assets, adjusted for inflation, than current retirees had when they were the same age, but also more debt. 
Most of the large increase in assets between current retirees and the Baby Boom is due to increased ownership and equity in housing. Contributions to defined contribution pension plans play a role in explaining the modest increase in assets between the Baby Boom and Generation X, in part, because GAO's data do not allow it to consider the value of benefits from defined benefit pension plans. Workers from Generation X are estimated to have similar levels of retirement income in real terms (adjusted for inflation) at age 62 as their counterparts in the Baby Boom, but Generation X may be able to replace a smaller percentage of their pre-retirement income. Whether Social Security benefits for Generation X are higher or lower than those for the Baby Boom will depend on how the Social Security funding shortfall is resolved. With regard to pensions, Generation X and the Baby Boom are estimated to have similar levels of pension income even with a continued shift from defined benefit to defined contribution pension coverage. Retirement income will vary within both Generation X and the Baby Boom households, and certain groups will be more likely to have lower retirement incomes. As one might expect, given significant variation in workers' earnings, if households were arrayed from lowest to highest in terms of estimated total retirement income, those in the top 20 percent would receive a substantially larger proportion of income compared with those in the bottom 20 percent. Retirement income is lower for the less educated and single women. |
DOD has reported to Congress since fiscal year 2004 on several items related to its training ranges in response to section 366 of the Bob Stump National Defense Authorization Act for Fiscal Year 2003. The act, as subsequently amended, required annual progress reports to be submitted at the same time as the President submitted the administration’s annual budget for fiscal years 2005 through 2018. The provision that we evaluate the plans submitted pursuant to section 366 within 90 days of receiving the report from DOD has also been extended through fiscal year 2018. In our prior reviews of DOD’s Sustainable Ranges Reports, we found that DOD did not address certain required elements when it initially submitted its comprehensive plan in 2004. Further, we noted that it took DOD some time to develop a plan consistent with the basic requirements of section 366. Over time, we found that as DOD reported annually on its progress in implementing its comprehensive plan, it continued to improve its Sustainable Ranges Reports, and it has reported on the actions it has taken in response to prior GAO recommendations. Specifically, in 2013 we reported that DOD had implemented all 13 of the recommendations we had made since 2004 for expanding and improving DOD’s reporting on sustainable ranges. Further, DOD has progressed from using four common goals and milestones to using seven shared goals for which the services have developed their own actions and milestones that are tailored to their missions. We have reported that these new goals and milestones are more quantifiable and now are associated with identified time frames. DOD’s 2017 Sustainable Ranges Report met the annual statutory reporting requirements to describe DOD’s progress in implementing its sustainable ranges plan and any actions taken or to be taken in addressing constraints caused by limitations on the use of military lands, marine areas, and airspace.
In its 2017 report, DOD provided updates to the plan that were required by the act. These updates included: (1) proposals to enhance training range capabilities and address any shortfalls in current resources; (2) goals and milestones for tracking planned actions and measuring progress in the implementation of its training range sustainment plan; and (3) projected funding requirements for implementing its planned actions. In our review of DOD’s 2017 Sustainable Ranges Report, we found that, as required by statute, DOD reported on its proposals to enhance training range capabilities and address any shortfalls in resources. DOD developed these proposals by evaluating current and future training range requirements and the ability of current DOD resources to meet these requirements. In its 2017 report, DOD revalidated its 2015 individual range capability and encroachment assessments and the current and future military service training range requirements. To do so, DOD updated the report sections pertaining to each military service’s issues related to range capability, encroachment, and issues of special interest to the military service. For instance, regarding the Marine Corps, the report noted, among other things, that the Marine Corps has identified the need for an aviation training range on the East Coast of the United States capable of supporting precision-guided munitions training. The report states that the Marine Corps selected the expansion of Townsend Bombing Range in Georgia as the best alternative for securing this East Coast capability after a thorough assessment of area capabilities. The report notes that a record of decision to expand Townsend was signed in January 2014, and that a formal airspace proposal supporting the land expansion has been submitted to the Federal Aviation Administration. The report further stated that full operational capability is now planned for December 2019.
In its 2017 report, DOD also reported on seven evolving activities and emerging issues, all of which were reported in its 2016 report. These seven activities and issues were as follows: (1) new sustainable range initiative-related influences and actions; (2) budget reductions impacting range capability; (3) foreign investment and national security; (4) threatened, endangered, and candidate species; (5) demand for electromagnetic spectrum; (6) continued growth in domestic use of unmanned aircraft systems; and (7) offshore energy. DOD’s 2017 report outlined some actions being taken to mitigate the challenges these issues may present for DOD test and training ranges. For example, in response to new sustainable range initiative-related influences, DOD responded to a recommendation in Senate Armed Services Committee Report 114-49 to include a review of the general capabilities, critical issues, and future capabilities necessary to support Special Operations Forces (SOF) range requirements. The 2017 Sustainable Ranges Report is the first to incorporate SOF-specific range issues. Foreign investment and its effects on national security remain an evolving issue for DOD. In an April 2016 report, we evaluated the extent to which DOD made progress in its efforts to assess the national security risks and effects of foreign encroachment. In that review, we found that DOD had made limited progress in addressing foreign encroachment on federally managed land since we had last reported on the subject in December 2014.
We also found that DOD had begun to take some steps toward assessing the national security risks and effects of foreign encroachment, but had not yet fully implemented the recommendations from our December 2014 report, which were as follows: (1) that DOD should develop and implement guidance for conducting a risk assessment on foreign encroachment and (2) that DOD should collaborate with other federal agencies to obtain additional information on transactions near ranges. DOD concurred with both recommendations. According to the 2017 report, DOD is pursuing opportunities to obtain information related to foreign investment and transactions in proximity to DOD mission-essential locations from agencies with land management authority as well as conducting a risk assessment related to those locations. In addition, DOD reported that it is considering seeking legislative relief to enhance data-collection and data-sharing practices regarding foreign investment in the proximity of DOD mission-essential locations as an avenue to mitigate national security-related encroachment, and it has engaged the various federal land managers to expound on potential issues related to DOD concerns. In its 2017 Sustainable Ranges Report, DOD used goals and milestones to address the statutory requirement to describe its progress in implementing its comprehensive training range sustainment plan. DOD has seven goals, as follows, in support of this plan: (1) mitigate encroachment pressures on training activities from competing operating space; (2) mitigate electromagnetic spectrum competition; (3) meet military airspace challenges; (4) manage increasing military demand for range space; (5) address effects resulting from new energy infrastructure and renewable energy; (6) anticipate climate change effects; and (7) sustain excellence in environmental stewardship.
Using these goals as a common framework, each military service developed its own milestones and needed actions for reaching those milestones. In DOD’s 2017 Sustainable Ranges Report, each service provided updates to its milestones and actions based on annual assessment data. The report included the following examples: The Army has ongoing actions to mitigate electromagnetic spectrum competition on ranges. For example, the Army reported that installation of fiber optic cabling has been completed at approximately 20 installations to support wireless networks and targeting control systems in order to mitigate electromagnetic spectrum interference on ranges. The Navy has ongoing interactions with Bureau of Ocean Energy Management state renewable energy task forces to support assessments of proposed wind energy developments to minimize effects on Navy and DOD offshore readiness. The Marine Corps has ongoing actions to engage in regulatory and legislative processes at the local, state, and national levels on issues that may affect range sustainability or readiness. The Marine Corps is also exploring partnerships to meet natural resource regulatory responsibilities. The Air Force is engaged in ongoing development of the Center Scheduling Enterprise flight scheduling system for use at Air Force Ranges. In addition, the Air Force is developing an interface between its flight scheduling system and the Army/Marine Corps Range Facility Management Support System, to facilitate scheduling across military services. In the 2017 Sustainable Ranges Report, DOD met the statutory requirement to track its progress in implementing the comprehensive plan by identifying the funding requirements needed to accomplish its goals. DOD delineated the following four funding categories to be used by the services to project their range sustainment efforts: (1) modernization and investment; (2) operations and maintenance; (3) environmental; and (4) encroachment. 
The funding requirements section of the 2017 report includes descriptions and specific examples for each funding category, as well as actual funding levels for fiscal year 2016 and requested funding levels for fiscal years 2017 through 2021. For example, the encroachment category is described as funding dedicated to actions optimizing accessibility to ranges by minimizing restrictions that do or could limit range activities, including outreach and buffer projects. Specific examples of encroachment funding include Army Compatible Use Buffer program administration and support and encroachment planning efforts. The report also provides an explanation of any fluctuations occurring over the 5-year funding period covered in the report. For example, the Air Force’s requested funding for the modernization and investment category fluctuated from $48.3 million in fiscal year 2017 to $236.8 million in fiscal year 2020 to $185.6 million in fiscal year 2021. The Air Force attributes this planned fluctuation to a decision to infuse funding in range infrastructure to research, develop, procure, and sustain advanced threat emitters, range communications/networks, and datalink systems, among other things. We are not making recommendations in this report. In oral comments on a draft of this report, DOD agreed with our findings and did not have any further comments. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force, and Commandant of the Marine Corps; and the Deputy Assistant Secretary of Defense for Readiness. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Key contributors to this report are listed in the appendix. In addition to the contact named above, Maria Storts (Assistant Director), Kerstin Hudon, Liza Bartlett, Michael Silver, Alexandra Gonzalez, and John Wren made key contributions to this report. Military Training: DOD Met Annual Reporting Requirements in Its 2016 Sustainable Ranges Report. GAO-16-627. Washington, D.C.: June 15, 2016. Defense Infrastructure: DOD Has Made Limited Progress in Assessing Foreign Encroachment Risks on Federally Managed Land. GAO-16-381R. Washington, D.C.: April 13, 2016. Military Training: DOD’s Annual Sustainable Ranges Report Addressed Statutory Reporting Requirements. GAO-15-537. Washington, D.C.: June 17, 2015. Defense Infrastructure: Risk Assessment Needed to Identify If Foreign Encroachment Threatens Test and Training Ranges. GAO-15-149. Washington, D.C.: December 16, 2014. Climate Change Adaptation: DOD Can Improve Infrastructure Planning and Processes to Better Account for Potential Impacts. GAO-14-446. Washington, D.C.: May 30, 2014. Military Training: DOD Met Annual Reporting Requirements for Its 2014 Sustainable Ranges Report. GAO-14-517. Washington, D.C.: May 9, 2014. Military Training: DOD Met Annual Reporting Requirements and Continued to Improve Its Sustainable Ranges Report. GAO-13-648. Washington, D.C.: July 9, 2013. Military Training: DOD Met Annual Reporting Requirements and Improved Its Sustainable Ranges Report. GAO-12-879R. Washington, D.C.: September 12, 2012. Military Training: DOD’s Report on the Sustainability of Training Ranges Meets Annual Reporting Requirements but Could Be Improved. GAO-12-13R. Washington, D.C.: October 19, 2011. Military Training: DOD Continues to Improve Its Report on the Sustainability of Training Ranges. GAO-10-977R. Washington, D.C.: September 14, 2010. Military Training: DOD’s Report on the Sustainability of Training Ranges Addresses Most of the Congressional Reporting Requirements and Continues to Improve with Each Annual Update.
GAO-10-103R. Washington, D.C.: October 27, 2009. Improvement Continues in DOD’s Reporting on Sustainable Ranges, but Opportunities Exist to Improve Its Range Assessments and Comprehensive Plan. GAO-09-128R. Washington, D.C.: December 15, 2008. Military Training: Compliance with Environmental Laws Affects Some Training Activities, but DOD Has Not Made a Sound Business Case for Additional Environmental Exemptions. GAO-08-407. Washington, D.C.: March 7, 2008. Improvement Continues in DOD’s Reporting on Sustainable Ranges, but Opportunities Exist to Improve Its Range Assessments and Comprehensive Plan. GAO-08-10R. Washington, D.C.: October 11, 2007. Improvement Continues in DOD’s Reporting on Sustainable Ranges but Additional Time Is Needed to Fully Implement Key Initiatives. GAO-06-725R. Washington, D.C.: June 20, 2006. Military Training: Funding Requests for Joint Urban Operations Training and Facilities Should Be Based on Sound Strategy and Requirements. GAO-06-193. Washington, D.C.: December 8, 2005. Some Improvements Have Been Made in DOD’s Annual Training Range Reporting but It Still Fails to Fully Address Congressional Requirements. GAO-06-29R. Washington, D.C.: October 25, 2005. Military Training: Actions Needed to Enhance DOD’s Program to Transform Joint Training. GAO-05-548. Washington, D.C.: June 21, 2005. Military Training: Better Planning and Funding Priority Needed to Improve Conditions of Military Training Ranges. GAO-05-534. Washington, D.C.: June 10, 2005. Military Training: DOD Report on Training Ranges Does Not Fully Address Congressional Reporting Requirements. GAO-04-608. Washington, D.C.: June 4, 2004. Military Training: Implementation Strategy Needed to Increase Interagency Management for Endangered Species Affecting Training Ranges. GAO-03-976. Washington, D.C.: September 29, 2003. Military Training: DOD Approach to Managing Encroachment on Training Ranges Still Evolving. GAO-03-621T. Washington, D.C.: April 2, 2003. 
Military Training: DOD Lacks a Comprehensive Plan to Manage Encroachment on Training Ranges. GAO-02-614. Washington, D.C.: June 11, 2002. Military Training: DOD Needs a Comprehensive Plan to Manage Encroachment on Training Ranges. GAO-02-727T. Washington, D.C.: May 16, 2002. Military Training: Limitations Exist Overseas but Are Not Reflected in Readiness Reporting. GAO-02-525. Washington, D.C.: April 30, 2002. | DOD relies on its training ranges within the United States and overseas to help prepare its forces for combat and complex missions around the globe. Section 366 of the Bob Stump National Defense Authorization Act for Fiscal Year 2003 required DOD to submit a comprehensive plan on its efforts to address training constraints caused by limitations on the use of military lands, airspace, and marine areas in the United States and overseas for training. The act, as amended, further required DOD to provide annual progress reports on its efforts through 2018. The act also included a provision for GAO to submit annual evaluations of DOD's reports. This report assesses the extent to which DOD's 2017 Sustainable Ranges Report met statutory reporting requirements. To conduct this work, GAO reviewed DOD's 2017 report and compared it with the statutory reporting requirements. GAO also interviewed cognizant DOD and military service officials regarding preparations made to complete the 2017 report. The Department of Defense's (DOD) 2017 sustainable ranges report met the annual statutory reporting requirements to describe DOD's progress in implementing its plan to sustain training ranges and any additional actions taken or planned for addressing training constraints caused by limitations on the use of military lands, marine areas, and airspace. 
DOD's 2017 report provides updates to the plan required by the act, specifically (1) proposals to enhance training range capabilities and address any shortfalls; (2) goals and milestones to describe DOD's progress in implementing its comprehensive training range sustainment plan; and (3) projected funding requirements for each of the military services to implement their planned actions. In the report, DOD used goals and milestones to address the statutory requirement to describe its progress in implementing its comprehensive training range sustainment plan. Using these goals as a common framework, each military service developed its own milestones and needed actions for reaching those milestones. The report also identifies evolving activities and emerging issues related to training range sustainability, and it includes actions taken to mitigate them. GAO is not making recommendations in this report. DOD agreed with GAO's findings without further comment.
To facilitate the public debate on FDIUS issues by improving existing government information, Congress enacted the Foreign Direct Investment and International Financial Data Improvements Act of 1990. This act required the Secretary of Commerce to submit an annual report addressing the history, scope, trends, and market concentrations of FDIUS, as well as its effects on the U.S. economy. In addition, the act provided for an exchange of business-confidential data between the Bureau of the Census and BEA and authorized BLS to have access to selected business-confidential BEA data. The purpose of this data sharing, as specified by the 1990 act, is to improve the quality of U.S. government data on FDIUS and to enhance analysts’ ability to assess the impact of that investment on the U.S. economy. BLS gives BEA access to publicly available macro-level, or aggregated, data on foreign-owned establishments generated from the BEA-BLS data link project. BEA data on foreign investment are collected on a consolidated firm or “enterprise” basis and reported under the industry category of the firm’s primary business and then linked with “establishment” or plant-level data collected by Census and by BLS. Since Census and BLS data are collected on an “establishment basis”—i.e., from individual commercial plants—the data are more likely to correlate to specific industry sectors. However, Census and BLS data do not identify foreign ownership. Linking BEA’s enterprise data with Census’ and BLS’ establishment data enables Commerce to report on the operations of U.S. affiliates of foreign firms in over 800 individual industries at the establishment level, as opposed to only 135 industries at the enterprise level. The establishment industry categories are disaggregated according to the Standard Industrial Classification (SIC) system. See figure 1 for an illustration of how one industry category within the manufacturing sector is disaggregated at the 2-, 3-, and 4-digit SIC levels. 
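The SIC system's prefix structure is what makes this kind of disaggregation mechanical: each 4-digit industry code nests inside its 3-digit group, which nests inside its 2-digit major group, so establishment data can be rolled up or broken out by truncating codes. A minimal Python sketch, using hypothetical establishment records and illustrative SIC codes (28 is Chemicals and Allied Products; 2834, Pharmaceutical Preparations; 2869, Industrial Organic Chemicals, Not Elsewhere Classified):

```python
# Illustrative sketch, not an actual Commerce system: rolling hypothetical
# establishment-level employment up the SIC code hierarchy by code prefix.
from collections import defaultdict

establishments = [
    {"sic": "2834", "employment": 1200, "foreign_owned": True},
    {"sic": "2834", "employment": 800,  "foreign_owned": False},
    {"sic": "2869", "employment": 500,  "foreign_owned": True},
]

def rollup(records, digits):
    """Aggregate establishment employment at the 2-, 3-, or 4-digit SIC level."""
    totals = defaultdict(int)
    for rec in records:
        totals[rec["sic"][:digits]] += rec["employment"]
    return dict(totals)

print(rollup(establishments, 2))  # {'28': 2500}
print(rollup(establishments, 4))  # {'2834': 2000, '2869': 500}
```

Because each record also carries an ownership flag (supplied by BEA's enterprise data in the actual link projects), the same rollup can be run separately for foreign-owned establishments at any level of detail.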
In addition to its three reports, Commerce published data from the first phase of the BEA-Census data exchange effort in June 1992. Commerce also published data from the BEA-Census data exchange effort in 1993, and again in 1994. BLS has published 1989, 1990, and 1991 data from the BEA-BLS data-sharing efforts in July 1992, October 1992, October 1993, and December 1994. The Foreign Direct Investment and International Financial Data Improvements Act of 1990 directs us to analyze and report on Commerce’s first three annual reports on FDIUS and review government efforts to improve the quality of FDIUS data. To assess how well Commerce fulfilled the reporting requirements of the 1990 act, we reviewed the 1993 and 1995 reports, with specific attention to Commerce’s coverage of the data requirements of the act, and to the overall quality of Commerce’s analysis of the potential effects of FDI on the U.S. economy. In addition, we evaluated the extent to which the 1993 and 1995 reports responded to the recommendations in our 1992 report. We used standard economic principles in our review and evaluation of the Commerce reports, with special attention to the chapters relating to the implications of FDIUS for U.S. trade, technology transfer, tax payment, employment, and banking issues. We relied on internal economists as well as an outside economist with expertise in FDIUS issues to carry out this evaluation. We also considered such factors as organizational structure, sufficiency of evidence for principal findings, coverage of the data requirements of the 1990 act, coverage of specific industry sectors, coverage of major investing countries, and use of relevant outside studies. We interviewed officials from BEA, Census, and BLS, as well as several outside experts, in the course of our review. We obtained written comments on a draft of this report from the Secretary of Commerce. They are discussed on page 15 and presented in their entirety in appendix IV.
We also discussed the results of our work with program officials in BLS and incorporated their suggestions where appropriate. We performed our review in Washington, D.C., from January 1995 to August 1995 in accordance with generally accepted government auditing standards. See appendix V for a more detailed description of our objectives, scope, and methodology. We found that, taken together, Commerce’s 1993 and 1995 FDIUS reports largely fulfilled the requirements of the 1990 act and addressed the recommendations in our 1992 report on Commerce’s 1991 FDIUS report. In an effort to address changing public concerns about FDIUS and conserve agency resources, Commerce took a different approach to each of its three reports, according to Commerce officials. The reports included discussion of all the data requirements in the act for which data existed, such as comparing U.S. affiliates of foreign firms’ operations to those of other U.S. companies with respect to employment, exports and imports, and research and development (R&D) spending. With few exceptions, the two reports adequately presented Commerce’s analysis and findings regarding publicly debated FDIUS issues. Commerce has approached each of the three FDIUS reports differently in terms of organization and content. Commerce officials said these differences reflected the changing nature of public concerns about FDIUS and resource considerations within the agency. The August 1991 report highlighted the growth and characteristics of FDIUS in five industry sectors: electronics, automotives (including automobile parts and components), steel, chemicals, and banking. It provided a description of the initial BEA-Census data link effort, which was not yet complete at the time of the report’s publication.
In our 1992 review of Commerce’s 1991 FDIUS report, we recommended that Commerce’s subsequent FDIUS reports (1) provide an analysis that clearly distinguishes between costs and benefits derived from FDI and those derived from all foreign investment in the United States, (2) make greater use of available government studies and private sector data, and (3) provide more focused analyses of publicly debated questions regarding the effects of FDI in the U.S. economy. We subsequently determined that Commerce had adequately addressed our recommendations in its June 1993 report. Commerce’s June 1993 FDIUS report was organized by general issues of public policy concern rather than by industry sector. The report contained analyses of the implications of FDIUS for U.S. merchandise trade patterns, technology development and transfer, and corporate tax payment. It also presented a more detailed description of the operations of foreign-owned firms in the United States, as well as the results of the first phase of the data link project, based on data obtained through 1987 BEA and Census surveys. Further, the report included an extensive literature survey on the economic issues relating to FDIUS, such as technology transfer, exports and imports, and employment effects. The January 1995 report also highlighted general FDIUS issues of public policy concern rather than specific industry sectors. It included new data obtained through the BEA-BLS data link on occupational employment patterns in foreign-owned manufacturing establishments, and further analysis of the role of U.S. affiliates in U.S. merchandise trade. Commerce also reported the results of the comprehensive 1992 BEA benchmark survey of FDIUS and of ongoing BEA-Census and BEA-BLS data link projects. 
The primary factor that distinguished it from the 1993 report was that, with the exception of the introduction and chapter 6, all of the chapters of the 1995 report were reproductions of articles previously published in BEA’s monthly Survey of Current Business or contained data previously released in BLS publications. Commerce officials told us they believed this approach was a better use of limited staff and budget resources, given the cyclical nature of public concerns about FDIUS. Together, Commerce’s 1993 and 1995 FDIUS reports covered all of the data requirements specified by the 1990 act for which data existed and presented Commerce’s analysis and findings in a comprehensive manner. Specifically, Commerce presented extensive data on the history, scope, trends, and market concentrations of FDIUS. It also compared the operations of U.S. affiliates of foreign firms with those of other business enterprises in the United States with respect to employment, value added, productivity, R&D spending, exports and imports, profitability, taxes paid, and market share. The market share information was limited by Commerce’s data aggregation and confidentiality requirements. To the extent possible with existing data, Commerce reported on U.S. affiliates of foreign firms’ market concentration in various U.S. industries. Commerce relied primarily on sales data to estimate U.S. affiliates’ market share, but also examined U.S. affiliates’ share of U.S. gross domestic product (GDP) and employment. However, if Congress is concerned about the amount of foreign control exercised in specific product sectors, market sales data at a less aggregated level would be needed. According to Commerce officials, presenting more detailed sector data in these reports would likely compromise the confidentiality requirements of data collection agencies (see discussion in app. III). 
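The sales-based measure described above is simple arithmetic: an affiliate share of an industry is affiliate sales divided by total industry sales. A sketch with invented figures (industries and dollar amounts are hypothetical, not BEA data):

```python
# Hypothetical illustration of the sales-based market-share measure:
# affiliate share = U.S.-affiliate sales / total U.S. industry sales.
# All dollar figures (in billions) are invented for illustration.
affiliate_sales = {"chemicals": 40.0, "electronics": 25.0}
industry_sales = {"chemicals": 160.0, "electronics": 250.0}

shares = {ind: affiliate_sales[ind] / industry_sales[ind]
          for ind in affiliate_sales}
print(shares)  # {'chemicals': 0.25, 'electronics': 0.1}
```

The limitation the report notes follows directly from this arithmetic: a modest share at an aggregated industry level can conceal a much higher share in a narrower product sector within it.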
The one item called for in the law (section 3(c)(1)) but not addressed in the reports was information about investment incentives and services provided by state and local governments, including quasi-government entities. According to a BEA official, BEA attempts to collect this type of data through its survey of U.S. business enterprises newly acquired or established by foreign investors. However, a BEA official said that in many cases the data BEA receives are not complete. Therefore, the reliability of these data is questionable, and they are not published in Commerce’s FDIUS reports. These data are, however, publicly available upon request with a disclaimer from Commerce about their reliability. While our economic review showed Commerce’s analysis to be adequate, in some instances its interpretations of the effects of FDI in the U.S. economy were overly definitive. The conclusion in chapter 8 of Commerce’s 1995 report, with regard to the occupational employment patterns of foreign-owned manufacturing establishments, illustrates this problem. In the conclusion, the report stated that “on balance, foreign investment in high skill industries has a positive impact on the U.S. manufacturing labor market.” However, the statistical data presented showed that foreign and U.S.-owned firms were actually similar in the occupational distribution of their employees. A similar problem appears in Commerce’s discussion of technology transfer issues in chapter 6 of the 1993 report (see app. I). To reach a more definitive conclusion on the “positive impact” of FDIUS, the analysis would require a comparison of the observed scenario to the scenario that would have occurred in the absence of FDI, sometimes called the “counterfactual” scenario. While it is not possible to state with absolute certainty what would have happened (the counterfactual), this approach often highlights important assumptions about the cause and effect relationships between various factors.
In some cases, Commerce did not formally include such scenarios in its analysis (for more details, see app. I). The 1995 report shifted emphasis away from the economic effects of FDI toward a general comparison of the operational behaviors of U.S. affiliates of foreign firms to those of U.S.-owned firms. Commerce used linked data to examine several characteristics of firms, including plant scale, plant and equipment expenditure, R&D spending, capital intensity, skill level, wage compensation, and labor productivity. These analyses are helpful in identifying the potential effects of FDI and in determining the industry sectors that have attracted the most foreign investment. Commerce’s ability to perform statistical analyses on FDIUS-related questions is currently limited by the level at which available data are aggregated. In its analyses, Commerce presently uses the 3-digit SIC-level data, and where possible, the 4-digit SIC-level data. Data at the 4-digit SIC level are sufficiently detailed to address some issues, such as the role of U.S. affiliates of foreign firms in U.S. employment and GDP, but other issues related to market control and technology transfer could be more effectively addressed using more narrowly defined industry categories. Nevertheless, because some FDIUS questions are so complex, definitive conclusions would be hard to draw even if less aggregated data were available. For example, it would be difficult to determine empirically whether foreign firms invest in the United States with the intent of acquiring U.S. technology. A fuller understanding of the technology strategies employed by foreign investors in the United States would require continued research and debate. Determining the effects of FDI on U.S. imports and exports and on federal tax revenues would also be empirically difficult in some cases. Within the U.S. government, the Commerce Department is the principal source of U.S. government data on FDIUS.
BEA collects FDIUS data directly from U.S. businesses through surveys, while the International Trade Administration (ITA) obtains its data primarily from news accounts of FDIUS transactions, according to Commerce. In addition, the Census Bureau within Commerce collects detailed information on the operations of nearly all U.S. businesses, both foreign and domestically owned. However, Census does not have systems established specifically to track FDIUS. Many other federal government entities collect data on foreign investment incidental to their overall missions. The Treasury Department is primarily responsible for collecting data on portfolio foreign investment, which includes bonds and other debt instruments as well as equity interest of less than 10 percent. The Departments of Agriculture, Energy, and Defense monitor certain aspects of foreign investment related to their particular industries. BEA obtains information on FDIUS through four survey questionnaires that cover a wide range of financial and operating data for U.S. affiliates of foreign firms. Data reported by survey respondents are classified according to BEA’s International Surveys Industry (ISI) classification system, which is based roughly on SIC categories. Beginning in 1990, BEA established steps to ensure compliance with its FDIUS surveys by strengthening survey follow-up procedures and increasing the number of staff devoted to survey follow-up. BEA’s FDIUS surveys require qualifying companies to disclose financial and operational data to BEA in accordance with the International Investment and Trade in Services Survey Act (Public Law 94-472, 22 U.S.C. 3101-3108, Oct. 11, 1976, as amended). The individual responses are considered business proprietary information, and only aggregated data are publicly released. These surveys cover such topics as balance of payments flows, U.S. business enterprises acquired or established by FDI, and the operations of U.S. affiliates of foreign firms. 
The ISI classification system that BEA uses in collecting data on U.S. affiliates of foreign firms is roughly based on the SIC system at the 3-digit level. To facilitate survey responses, the ISI system combines certain SIC industry categories based on typical company structures of U.S. affiliates. According to BEA officials, the ISI classifications correlate more closely with the organizational arrangement of U.S. affiliates than does the SIC system, which is designed for classifying individual establishments within an enterprise. In response to reduced compliance with reporting requirements among large company reporters in the 1987 benchmark and 1988 annual FDIUS surveys, BEA has instituted efforts to ensure U.S. affiliates of foreign firms’ compliance with its benchmark and annual FDIUS survey reporting requirements, according to a BEA official. By the end of November 1989—6 months after the May 31 reporting deadline—BEA had received 68 percent of the large company reports in the 1988 annual survey, compared with 84 percent received by the same time in the 1987 survey and 92 percent in the 1986 survey. BEA officials told us that one of the factors that may have contributed to this decline in compliance was the rapid (39 percent) growth in the number of qualified large companies to which BEA sent surveys between the survey covering 1986 and the survey covering 1988. They said that BEA’s survey follow-up procedures and staff resources at the time were not sufficient to manage the growing volume of potential reporters. Beginning with the annual survey covering the year 1989, BEA’s International Investment Division (IID), together with Commerce’s Office of the General Counsel (OGC), undertook a concerted effort to tighten procedures and ensure U.S. affiliates’ compliance with the 1989 survey and subsequent surveys.
For example, “repeat offenders” (those large companies that were late in reporting in both the 1988 survey and the 1989 survey) were sent a letter from Commerce’s OGC in place of IID’s standard follow-up letter. In addition, IID and OGC accelerated their telephone follow-up for late reporters. Further, Commerce carried out standard compliance procedures earlier in the processing cycle compared with previous years. A BEA official also told us that in fiscal year 1991 Congress appropriated increased funding to BEA for survey compliance efforts. As a result, BEA now has three full-time staff devoted primarily to FDIUS survey follow-up efforts. This official explained that, prior to the 1991 funding increase, each survey editor was expected to conduct his own follow-up work. In early 1990, BEA developed indicators to measure one key element of compliance—the timeliness of reporting by large company respondents. The indicators show (1) the cumulative number of reports received by BEA, on a monthly basis, over the 11-month period following the annual survey mailing in March, and (2) the cumulative dollar value of the assets associated with those reporting companies, for the same period. Since BEA began implementing steps to address the reduced compliance with the 1987 and 1988 surveys, the timeliness of reporting on subsequent surveys has returned to acceptable levels, according to BEA officials. For example, the percentage of reports received within 6 months of the May 31 reporting deadline increased from 68 percent for the 1988 survey, to 92 percent for the 1989 survey (for which BEA first tightened its compliance procedures), and to 96 percent for the 1993 survey; while the cumulative value of the assets associated with those reporting companies increased from 69 percent for the 1988 survey, to 92 percent for the 1989 survey, and to 98 percent for the 1993 survey. 
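The timeliness indicator described above is essentially a running cumulative share: of the large-company reports ultimately expected, what fraction has arrived by the end of each month after the survey mailing. A sketch with invented monthly counts (the figures below are illustrative, not BEA's actual data):

```python
# Hypothetical sketch of a cumulative timeliness indicator like BEA's:
# share of expected large-company reports received by the end of each month
# following the survey mailing. Counts and the expected total are invented.
monthly_reports = [120, 340, 510, 95, 60, 45]
total_expected = 1250

cumulative_share = []
received = 0
for count in monthly_reports:
    received += count
    cumulative_share.append(round(100 * received / total_expected, 1))

print(cumulative_share)  # [9.6, 36.8, 77.6, 85.2, 90.0, 93.6]
```

A parallel series weighted by each reporter's assets, rather than a simple report count, gives the second indicator BEA tracks, the cumulative dollar value of assets associated with the reporting companies.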
Based on these data, BEA officials believe that their efforts to maintain high compliance rates have had a positive, measurable impact on the timeliness of reporting by large companies. BEA has established other systems to improve its surveys and data management processes, a BEA official told us. BEA has a continuous process to improve the quality of its survey forms, which includes proposing changes to the forms, soliciting feedback from survey users and respondents through a series of meetings and discussions, publishing a request for public comment on proposals, and finally, submitting proposals to the Office of Management and Budget (OMB) for formal review and clearance. This official said that BEA has also instituted an office-wide “best practices” initiative to ensure the accuracy of the data it produces and tabulates. Formal “best practices” standards are now part of each BEA staff member’s work plan. The BEA-Census and BEA-BLS data link projects, initiated under the 1990 act, have greatly improved the amount and quality of data available about FDIUS. The data have enabled Commerce to produce more detailed analyses of FDIUS and to draw more meaningful comparisons between the activities of U.S. affiliates of foreign firms and those of U.S. firms than previous data allowed. For example, by comparing the market and employment shares of foreign-owned establishments with U.S. establishments, Commerce has been able to respond to concerns about the possibility that foreign investors might be acquiring a disproportionate level of ownership in certain U.S. industries. Thus far, the data link project between BEA and Census has generated data covering the number, employment, payroll, and shipments or sales of foreign-owned establishments in 1987 and foreign-owned manufacturing establishments’ operations in 1988, 1989, 1990, and 1991. Commerce’s FDIUS reports have included the results of the data links for 1987, 1989, and 1990. 
The BEA-BLS data link project has generated data on the employment and wages of foreign-owned establishments in all industries in 1989 through 1991, as well as the occupational employment of foreign-owned manufacturing establishments in 1989, which was included in Commerce’s 1995 report. Data provided by Commerce and BLS officials show that the data link projects have been carried out at an average annual cost of about $1.6 million. According to BEA officials, although BEA does not have a separate budget line item for the BEA-Census and BEA-BLS data link projects and does not separately track costs incurred on these projects, BEA officials estimate that BEA’s average annual cost of carrying out the projects was about $1 million for 1991 through 1995. Of this amount, an average of $300,000 per year was paid by BEA to Census, to reimburse Census for its costs associated with the project. The average annual budget for BLS to perform the BEA-BLS data link project was slightly less than $600,000 between 1991 and 1995. A major achievement of the two data link projects is that they have produced significant and extensive new data without causing any increase in companies’ reporting burdens. According to Commerce officials, several opportunities exist to improve FDIUS data sharing. These include expanding the BEA-Census data link project to include other data items and attempting to resolve differences between BLS’ and Census’ establishment databases. However, resource constraints, as well as other factors related to the protection of business confidential data, may limit the agencies’ ability to pursue such activities. (See app. III for a discussion of these factors.) Based on our review of the Commerce Department’s FDIUS reports and data exchange activities, we found that the implementation of the 1990 act has improved the quantity and quality of U.S. government FDIUS data to a great extent. 
The data link operations mandated by the law produced significant improvements in publicly available FDIUS data, according to BEA, Census, and BLS officials. This was done at an average annual cost of about $1.6 million. These officials told us they believe the benefits of the data link have been well worth the investment. The new data are available to the public through several means, including regularly published Commerce Department and BLS reports, Commerce’s National Trade Data Bank, and annually produced computer disks that can be purchased from Commerce and BLS. The Commerce Department reports mandated by the law have provided a regular venue for disseminating new FDIUS data and current analysis of publicly debated questions relating to the effects of FDIUS on the U.S. economy. With each publication, the Commerce reports’ coverage, analysis, and organization have provided a growing body of quality information on FDIUS. The most recent report, issued in 1995, presented a large amount of data on the characteristics of U.S. affiliates of foreign firms, including extensive use of tables and graphics. It also included the new data obtained from BEA’s 1992 benchmark survey, the BEA-Census data link, and the BEA-BLS data link. Overall, it provided useful information for further analysis by Commerce and other analysts about the potential economic effects of FDI on the U.S. economy. In our view, compiling previously published articles and data is a reasonable approach to fulfilling the reporting requirements of the 1990 act in a period of government budgetary constraint and when FDIUS issues have been extensively covered in BEA’s Survey of Current Business and in periodic joint BEA-Census and BLS publications. We received written comments on a draft of this report from the Secretary of Commerce. These comments were of a technical nature, and we have incorporated changes in the report where appropriate. A copy of the Secretary’s comments is presented in appendix IV. 
We also discussed the draft report with program officials in BLS and incorporated their suggestions where appropriate. We are providing copies of this report to the Secretary of Commerce and other interested parties. We will make copies available to other parties upon request. Major contributors to this report are listed in appendix VI. If you have any questions concerning this report, please contact me on (202) 512-4812. Although the findings presented in Commerce’s 1993 and 1995 reports were generally reasonable and credible, we found several factors that limited some aspects of the reports’ analysis. In some cases, Commerce did not clearly acknowledge that firm conclusions could not be drawn without the use of “counterfactual scenarios” to account for economic conditions in the absence of foreign direct investment (FDI). We also found that Commerce’s statements regarding the positive impact of U.S. affiliates of foreign firms’ research and development (R&D) spending did not acknowledge the possibility that technological developments resulting from R&D do not necessarily benefit the U.S. economy. Finally, in one case, we noted statements in the 1993 report that seemed contradictory. In the 1993 report, Commerce sometimes reached conclusions about the effects of FDI on the U.S. economy without acknowledging possible “counterfactual scenarios,” i.e., what would have happened in the absence of FDIUS. Such scenarios are often used in discussions of the effects of policy changes. The difference between the observed scenario—when FDI is present—and the counterfactual scenario—when FDI is not—would constitute the effects of FDI. While it is not possible to state with any certainty what would have happened in the absence of FDI, the counterfactual approach can highlight important assumptions about the cause and effect relationships between various economic factors. 
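How strongly any estimated "effect" depends on the assumed counterfactual can be shown with a toy calculation (all employment figures are invented):

```python
# Toy illustration (invented figures): the estimated effect of FDI is the
# observed outcome minus the assumed counterfactual outcome. Because the
# counterfactual is never observed, different assumptions yield different
# "effects" from the same observed data.
observed_employment = 10_000  # employment observed with FDI present

counterfactuals = {  # assumed employment levels in the absence of FDI
    "FDI merely displaced equivalent domestic activity": 10_000,
    "a third of the activity would not exist without FDI": 6_700,
}

effects = {assumption: observed_employment - level
           for assumption, level in counterfactuals.items()}
print(effects)  # first assumption implies no effect; second implies +3,300 jobs
```

The arithmetic is trivial; the point is that the conclusion turns entirely on the assumption, which is why the report argues that such assumptions should be made explicit.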
One example of where a counterfactual scenario would have been useful is in chapter 8 of Commerce’s 1995 report, which addresses the occupational employment patterns of foreign-owned manufacturing establishments. In the conclusion of this chapter, the author stated that “on balance, foreign investment in high skill industries has a positive impact on the U.S. manufacturing labor market.” However, the statistical data presented showed that foreign and U.S.-owned firms were actually similar in the occupational distribution of their employees. Without knowledge of U.S. labor market conditions in the absence of FDI, one cannot draw definitive conclusions about the positive impact of FDI on the U.S. labor market. An example of how this discussion of counterfactual scenarios can be used effectively appeared in chapter 6 of the 1995 report. The author pointed out that the trade deficits of U.S. affiliates of foreign firms amounted to more than half of the total amount of the U.S. merchandise trade deficit in recent years, and that most of the U.S. affiliates’ deficit was accounted for by wholesale trade affiliates rather than manufacturing affiliates. The author concluded that the overall “effect” of those wholesale trade affiliates on trade flows was unclear: on the one hand, many of their imports probably would have been brought into the country by unaffiliated U.S. wholesalers, even in the absence of U.S. affiliates; on the other hand, U.S. affiliates may have allowed foreign parent companies to expand their exports to the United States. The author’s discussion of scenarios that might have occurred in the absence of FDI improves our understanding of the possible effects of FDI on the U.S. economy. Commerce’s analyses of the implications of FDIUS for the development and transfer of U.S. technology included an extensive amount of relevant data. Particularly useful were Commerce’s analyses of the market concentration of U.S. 
affiliates of foreign firms in high-technology sectors, and U.S. affiliates’ royalty and licensing fee payments to foreign parent companies. We found that Commerce’s conclusions about recent patterns of R&D spending by U.S. affiliates of foreign firms were overly definitive. Commerce concluded that “U.S. affiliates have contributed to U.S. technological development, dramatically increasing their R&D spending in the United States over the past ten years.” In our review of the economic literature on the motives of multinationals’ FDI, we found that higher R&D spending by U.S. affiliates does not necessarily lead to greater technological development in the U.S. economy. Sometimes foreign firms locate in the United States simply to monitor the technology developments of other firms in this country. Even if R&D funds are dedicated to technology development, there is no guarantee that such spending will ultimately benefit the U.S. economy. In one instance in the 1993 report, the authors made two statements that seemed to be contradictory. On one hand, Commerce presented evidence to suggest that some foreign firms have used their affiliates to gain U.S.-developed “critical technologies” and to displace U.S. firms. Commerce’s evidence was based on several case studies of certain high-technology industries conducted by experts within and outside the Commerce Department. On the other hand, Commerce concluded from its own systematic data analysis that there was little evidence that foreign acquisitions of small U.S. high-technology firms had resulted in large-scale technology transfer abroad. Rather, Commerce said that the data suggest U.S. affiliates of foreign firms were contributing positively to U.S. R&D investment and technological development, and that the R&D spending patterns of U.S. affiliates were similar to those of domestic firms. In our view, the evidence Commerce cited to support its broad statement that U.S. affiliates have contributed to U.S.
technological development was not sufficiently strong to support the overall conclusion, because Commerce’s analysis did not include discussion of possible counterfactual scenarios. Due to the complexity of the technology issues and the limitations of the SIC data classification system, some questions cannot be conclusively answered at this time. For example, Commerce’s effort to evaluate FDI’s presence in sectors that engaged substantially in the development of critical technologies was hampered by the aggregated level of available data. To describe the activities of companies involved in the production of critical technologies, the data would have to be significantly disaggregated—beyond the 3- or 4-digit Standard Industrial Classification (SIC) code levels. Neither the 4-digit SIC level nor the “DOC-3” data developed by Commerce is sufficiently detailed by industry to address questions about the activities of U.S. affiliates of foreign firms in U.S. critical technology sectors. Commerce used a modified version of the DOC-3 definition in its analysis. Based roughly on 3-digit SIC codes, this definition includes only broad industry groups such as “industrial chemicals and synthetics,” “computers and office machines,” “electronic components,” “instruments and related products,” and “other transportation equipment.” Some of the products included in the definition are actually low-technology products. For example, the “computers and office machines” category includes such products as scales, balances, cash registers, and adding machines; and “other transportation equipment” includes ship and boat building and railroad equipment. Similar limitations exist with the 4-digit SIC data. 
For example, the 4-digit “electrical machinery, equipment, and supplies, not elsewhere classified” (SIC 3629) category includes both high-technology items, such as “atom smashers” (particle accelerators) and “cyclotrons,” and low-technology items, such as “Christmas tree lighting sets.” BEA collects data on foreign direct investment in the United States (FDIUS) through four survey questionnaires that require U.S. affiliates of foreign firms to disclose a broad range of financial and operating data. The most comprehensive of these surveys is BEA’s benchmark survey, which is required by law to be conducted every 5 years. The other three FDIUS surveys collect data on the status of newly acquired or established U.S. affiliates, the current operations of U.S. affiliates, and on balance of payments flows between U.S. affiliates and their foreign parents. The International Investment and Trade in Services Survey Act (P.L. 94-472, 22 U.S.C. 3101 to 3108, as amended), requires BEA to conduct the benchmark survey of FDIUS (or census) at least once every 5 years. The most comprehensive of the BEA surveys, it collects both financial and operating data and balance of payments data for the entire universe of U.S. affiliates of foreign firms with more than $1 million in total assets, sales, or net income during the benchmark year. It includes balance sheets and income statements; measures of employment and employee compensation; sales of goods and services; property, plant, and equipment; merchandise trade; research and development expenditures; and, for selected items, data broken down by state. Although it is normally conducted every 5 years, the 1987 benchmark survey was conducted after a 7-year interval in order to coincide with the Census Bureau’s quinquennial economic census. The purpose of this adjustment was to facilitate the link between BEA’s enterprise data and the Census Bureau’s establishment data, and to enhance their analytical usefulness, according to Commerce. 
BEA’s annual sample survey of FDIUS collects data on the overall operations of nonbank U.S. affiliates of foreign companies. This survey provides annual updates of the financial and operating data collected in BEA’s benchmark surveys. A key measure is the value of total assets of U.S. affiliates at year end. The annual and the benchmark surveys are the only BEA sources of foreign investment data by state. Data from the annual FDIUS survey have been available since 1977. Data collected by BEA’s survey of U.S. business enterprises acquired or established by foreign direct investors are compiled on an annual basis. This data series covers new direct investments, collecting data on the associated transactions only for the year in which the new investments were made; it includes all financing, including local borrowing in the United States. Data have been available since 1979. An adjunct form is to be filed by persons who act as intermediaries, such as attorneys or accountants, for new direct investment transactions; it is used only to obtain the names and addresses of the principals to the transactions so that the primary form can be mailed to the appropriate person. BEA’s survey on the U.S. foreign direct investment position and balance of payments flows is a quarterly sample survey that collects information on transactions between U.S. affiliates of foreign firms and their foreign parent companies for inclusion in the U.S. balance of payments accounts and the national income and product accounts, and for calculating the inward FDI and international investment position of the United States. The purpose of this survey is to monitor capital flows, income, fees and royalties, and other services transactions between foreign parent companies and their U.S. affiliates. Data from this survey have been available since 1950.
The Foreign Direct Investment and International Financial Data Improvements Act of 1990 (Public Law 101-533) authorized BEA to share business-confidential data on FDIUS with Census and BLS, and Census to share business-confidential data with BEA, in order to improve the quantity and quality of data on FDIUS. In accordance with the 1990 act, BEA enterprise data have been linked with Census and BLS establishment data, generating more detailed information on the characteristics and operations of U.S. affiliates of foreign firms in the United States than was previously available. BEA, Census, and BLS officials said there were opportunities for further collaboration to improve the quality and quantity of data available on FDIUS, but resource limitations and other factors related to protecting business-confidential information may keep those opportunities from being realized. The BEA-Census data link project involves linking BEA’s business-confidential enterprise data on U.S. affiliates of foreign firms—collected at the 3-digit International Surveys Industry (ISI) code level—with Census’ business-confidential establishment data collected at the 4-digit SIC level. Thus far, the data link project between BEA and Census has generated data covering foreign-owned U.S. establishments for 1987-91. For 1987, both manufacturing and nonmanufacturing establishments were covered, and data were provided on the number, employment, payroll, and shipments or sales of the foreign-owned establishments. For 1988-91, only manufacturing establishments were covered, but more data items were obtained—including data on the number, value added, shipments, employment, total employee compensation, employee benefits, hourly wage rates of production workers, cost of materials and energy used, inventories by state of fabrication, and expenditures on new plant and equipment of foreign-owned establishments.
The data were obtained by matching enterprise data collected in BEA’s 1987 Benchmark and 1988-91 Annual Surveys of Foreign Direct Investment in the United States to establishment data from Census’ 1987 Economic Censuses, 1987-91 Report of Organization surveys, and 1988-91 Annual Survey of Manufacturers (ASM), as well as establishment data Census obtains from administrative or other statistical agencies. The Census establishments that linked to BEA’s enterprises in the most recent BEA-Census data link (1991) accounted for 98 percent of the employment by foreign-owned manufacturing firms in the United States. The BEA-Census data link is a technically complex process requiring both automated and manual procedures. The following is a simplified explanation of how the BEA-Census link is conducted; figure III.1 illustrates the process at a simplified level. [Figure III.1, a flowchart of the data link process, appears here. Its decision points ask whether an employer identification number (EIN) is valid, whether an enterprise is a valid U.S. affiliate, and whether discrepancies have been resolved; a figure note explains that verifying matched cases includes developing and checking preliminary data tables to verify the accuracy of the linked data.] The BEA-Census data link project begins when BEA sends Census a computer tape containing micro-level data on foreign-owned enterprises. The data tape contains key information about the enterprise, such as its name, address, and EIN. The tape also includes other descriptive items for the enterprise, such as the number of its employees and its sales in dollars. Census then attempts to match by computer BEA’s enterprise EINs with EINs listed in Census’ Standard Statistical Establishment List (SSEL), a computerized list covering all U.S. companies and their establishments—about 9.5 million single and multi-unit companies. The computerized EIN matching operation has three possible outcomes: (1) an enterprise links to one or more of Census’ establishments, (2) two or more enterprises link to one or more establishments, or (3) an enterprise does not link to any of Census’ establishments.
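The three-way outcome of the computerized EIN match can be sketched as follows. This is an illustrative simplification in Python, not BEA's or Census' actual software; the record layout and field names are assumptions.

```python
from collections import defaultdict

def match_by_ein(enterprises, establishments):
    """Classify each enterprise by its EIN match against an establishment
    list, mirroring the three outcomes described in the text."""
    # Index establishments by EIN; one EIN may cover many establishments.
    by_ein = defaultdict(list)
    for est in establishments:
        by_ein[est["ein"]].append(est)

    # Count how many enterprises claim each EIN, to detect outcome 2.
    ein_claims = defaultdict(int)
    for ent in enterprises:
        ein_claims[ent["ein"]] += 1

    linked, shared, unlinked = [], [], []
    for ent in enterprises:
        matches = by_ein.get(ent["ein"], [])
        if not matches:
            unlinked.append(ent)            # outcome 3: forward for research
        elif ein_claims[ent["ein"]] > 1:
            shared.append((ent, matches))   # outcome 2: must be reconciled
        else:
            linked.append((ent, matches))   # outcome 1: verify the match
    return linked, shared, unlinked
```

Unlinked cases would then go through the manual research steps described below, while linked and shared cases move on to verification and reconciliation.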
For those enterprises that do not link (outcome 3), Census and BEA conduct further research. For those enterprises that do link (outcome 1 or 2), Census and BEA verify the accuracy of the matched cases. Once the computerized link has been completed, Census must identify those cases in which BEA’s enterprise did not link to any of Census’ establishments. This nonlinkage may have occurred because the original EIN that BEA provided to Census for that enterprise on the data tape was incorrect—perhaps the enterprise reported the EIN incorrectly to BEA. Whatever the reason, Census tries to identify the correct EIN by researching the enterprise on the SSEL. If Census is unable to identify the correct EIN, Census forwards the case to BEA for further research. The research at BEA often entails checking historical or archived information in various BEA files to ensure that the enterprise is a valid U.S. affiliate of a foreign firm, i.e., the enterprise is at least 10 percent foreign owned and the EIN for that enterprise in BEA’s files is valid. If the enterprise is not a valid U.S. affiliate, the enterprise is eliminated from inclusion in the data link. If the enterprise is a valid U.S. affiliate, BEA obtains the necessary information to allow the enterprise to be matched correctly to Census’ establishments. BEA then sends this information back to Census. Generally, the research at BEA on unmatched cases is carried out concurrently as the project moves into the reconciliation phase. For the data link covering 1992, 243 cases were referred to BEA for further research. Depending on the research required, each case referred can take up to 15 days to research, according to BEA officials. At this point the cases that did link—those in which a BEA enterprise linked to at least one Census establishment—must be reconciled.
A BEA official—who has been sworn in as a Census agent—works with Census to help evaluate whether those cases that linked were correctly matched and to reconcile them if they were not. The reconciliation process is time-consuming and labor-intensive. For example, for the data link covering 1992, about 1,700 cases were reconciled because of data discrepancies. According to BEA and Census officials, the reconciliation process generally takes about 10 weeks to complete. The process requires Census and BEA to compare the employment count for a given BEA enterprise with the aggregate employment count for the Census establishments that were linked to the BEA enterprise. If there is a large difference between the BEA and Census employment counts—generally over 100 employees—Census and BEA officials must research each case further. To do so, Census and BEA officials compare the data provided by BEA on each enterprise’s name, address or location, and employment with Census’ SSEL data. In general, Census and the BEA official are able to resolve discrepancies between BEA and Census data by further researching the cases, according to BEA and Census officials. However, when the discrepancy for some linked cases cannot be resolved, those cases must be returned to BEA for further research. The research conducted on mismatched enterprises is similar to that previously described for unmatched enterprises, except that BEA may contact the enterprise directly to ensure an accurate link; research may sometimes take up to 90 days to complete. Once Census and BEA have reconciled and correctly matched BEA’s foreign-owned enterprises with Census’ establishments, all of the linked cases must be reverified. Census and BEA officials again work together to verify their judgment that linked cases have been reconciled and correctly matched.
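The employment-count screen used in reconciliation can be sketched in Python. The 100-employee threshold comes from the text above; the record fields are illustrative assumptions, not the agencies' actual file layouts.

```python
def flag_for_reconciliation(linked_cases, threshold=100):
    """Compare each BEA enterprise's employment with the aggregate
    employment of its linked Census establishments, flagging cases whose
    difference exceeds the threshold for further manual research."""
    accepted, flagged = [], []
    for enterprise, establishments in linked_cases:
        census_total = sum(est["employment"] for est in establishments)
        if abs(enterprise["employment"] - census_total) > threshold:
            flagged.append((enterprise, establishments, census_total))
        else:
            accepted.append((enterprise, establishments))
    return accepted, flagged
```

Flagged cases correspond to those that officials would research by comparing names, addresses, and employment against the SSEL, as described above.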
BEA also verifies the accuracy of the linked data by generating preliminary tables to check both the consistency with other data on FDIUS (both from BEA and from other sources) and the internal consistency within the preliminary tables themselves. Developing and checking the tables usually takes BEA about 2 weeks. The data link is complete once the individual linked cases and the table data have been verified. Developing and publishing tables covering the data generated by the data link on an annual basis is also a joint Census-BEA project. For example, tables generated from the 1987 data link project provided over 600 pages of data tables on FDIUS disaggregated by industry, country, and state. BEA designs and writes the computer programs for the tables and generates the data with assistance from Census. Census performs disclosure avoidance review on each table to ensure that no confidential data are disclosed; in many cases, data in tables must be suppressed before the tables can be published. BEA also checks the tables for their accuracy by comparing the table data with other data on FDIUS. The process to design the tables, generate the data, perform the necessary disclosure avoidance review, and check the tables can take as long as 7 months. According to Commerce officials, the need to suppress certain data elements so as not to compromise the confidentiality of the data is one of the problems that the agencies face in making the more detailed, 4-digit SIC level data available, because it limits the amount of data that can be published. (Disclosure avoidance review is the process of suppressing data from publications so as to avoid disclosing confidential data.) Even with such suppression, the linked tables provide a finer level of industry detail than is available from the BEA data alone.
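As a rough illustration of disclosure avoidance review, the Python sketch below blanks table cells supported by too few reporting firms. The count-based rule and the three-firm cutoff are assumptions for illustration only; the actual suppression criteria Census applies are more elaborate and also involve complementary suppression so that blanked cells cannot be inferred from row and column totals.

```python
def suppress_cells(table, min_contributors=3):
    """Primary suppression: replace any cell supported by fewer than
    min_contributors reporting firms with a '(D)' withheld-data marker,
    so no individual firm's data can be identified."""
    published = {}
    for cell, (value, n_contributors) in table.items():
        if n_contributors < min_contributors:
            published[cell] = "(D)"   # data withheld to avoid disclosure
        else:
            published[cell] = value
    return published
```

Cells dominated by one or two firms are the typical reason that detailed 4-digit SIC tables lose data in publication, as the text notes.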
The BEA-BLS data link project has generated data on the fourth-quarter employment and wages of foreign-owned establishments in 1989, 1990, and 1991, as well as the occupational employment of foreign-owned manufacturing establishments in 1989. Data for the first set of data link projects covering employment and wages were derived by matching BEA’s enterprise data from its Annual Survey of Foreign Direct Investment in the United States with BLS’ establishment data from its Covered Employment and Wages (ES-202) Program—covering approximately 6.5 million U.S. establishments. Data for the data link covering occupational employment were derived by matching the linked manufacturing establishment data from 1989 with other 1989 establishment data from BLS’ Occupational Employment Statistics Survey. The BEA-BLS data link process is very similar to that of the BEA-Census data link project. However, BLS reconciles and verifies the BEA-BLS data link on its own, with input from BEA. As with the BEA-Census data link project, the BEA-BLS data link project is designed to link BEA’s 3-digit ISI data on enterprises with BLS’ 4-digit SIC data on establishments. The data link project begins with BEA sending its data tape containing business-confidential enterprise data and key identifiers (such as the enterprises’ EIN, name, address, employment, etc.) to BLS (see fig. III.1). BLS then is to perform a computerized link of BEA’s data with BLS’ business-confidential establishment data. Once BLS has generated a computerized link of the two data sets, it attempts to verify and reconcile discrepancies in the data. In general, BLS will try to resolve these discrepancies on its own, often using secondary sources such as the Directory of Corporate Affiliations or Moody’s Industrial Manual to help explain why mismatches may have occurred and to identify cases that should have matched. 
However, when discrepancies cannot be easily explained, BLS sometimes sends questions about unmatched or mismatched cases to BEA for further research. BLS then is to verify that all cases included in the link have been matched correctly and develop tables for data publication. The linked establishments from the 1991 employment and wages BEA-BLS data link accounted for about 99 percent of the employment by all U.S. affiliates of foreign firms. Like Census, BLS must perform disclosure avoidance review on each table before the data are published. According to Commerce officials, several opportunities exist for improving FDIUS data by expanding the BEA-Census data link project. Specifically, opportunities exist to link BEA data with Census product-level and export data, as well as with Census longitudinal data on manufacturing establishments’ operations. However, budget and resource constraints may limit the agencies’ ability to pursue these projects. According to BEA officials, the two agencies are currently evaluating the possibility of linking BEA’s enterprise data with product- and product class-level data obtained by Census through its economic censuses and ASM survey. A link with product-level data would enable Commerce to provide data on specific products or product classes produced by foreign-owned establishments at a much greater level of detail than either BEA’s 3-digit ISI industry data or the 4-digit SIC industry data currently produced under the data link project. According to agency officials, one potential problem with linking these highly detailed product-level data is that much of the data would be business-confidential and would need to be suppressed. However, the product data would enable BEA to study with greater accuracy and precision issues such as whether U.S. affiliates of foreign firms are targeting high-technology industries. Another opportunity exists for a data link with Census’ exporter database. 
Census developed this database by matching information on exports from the U.S. Customs Service with individual establishments listed on Census’ SSEL register. For 1987, Census was able to attribute approximately 60 to 70 percent of all U.S. exports to establishments on Census’ register. Census is now constructing a 1992 exporter database that could potentially be linked to BEA’s enterprise data. Such a link would generate much more detailed, precise data on exports by type of product than BEA has because U.S. exports are reported to Customs according to a 10-digit schedule that classifies commodities. The more detailed data could help shed light on how U.S. affiliates of foreign firms contribute to U.S. exports. Commerce officials also told us they anticipate developing a link between Census’ Center for Economic Studies longitudinal database and BEA data at some point in the future. Such a link would allow Commerce to analyze individual manufacturing establishments’ operations over time. For example, Commerce could study changes in establishments’ employment, value added, shipments, etc., once the establishments become foreign owned. Commerce officials stated that budget and staff constraints, as well as the unavailability of funding, may limit the agencies’ ability to pursue these additional data link projects. For example, BEA officials emphasized that, to date, no funding has been allocated to pursue a data link to Census’ export or longitudinal databases. According to Commerce and Labor officials, certain restrictions and other factors continue to limit the extent to which federal agencies exchange and use FDIUS data on an ongoing basis. Specifically, restrictions on the use and disclosure of confidential data obtained from IRS limit reconciliation and analysis of data generated from the BEA-Census data link project. In addition, various factors restrict BLS’ data sharing with BEA. 
While Public Law 101-533 provides a mechanism for agencies to resolve data access issues, BEA and BLS have not used this mechanism to resolve issues related to BEA access to BLS’ business-confidential establishment data. Although both Census and BEA are permitted to request and obtain confidential information directly from IRS, restrictions on the use and disclosure of such data prevent either BEA or Census from sharing the data with each other. According to BEA officials, these restrictions prevent BEA from comparing, analyzing, or verifying data in its own databases with data on individual establishments that Census obtains from IRS. Section 401(a) of title 13, U.S. Code, states that Census may share with BEA only data collected directly from respondents by the Census Bureau itself. In addition, IRS regulations (26 CFR 301.6103(j)(1)-1), which describe the projects for which access to IRS data is permitted, do not specifically mention the BEA-Census FDI link project. IRS has stated that BEA may not have access to the IRS data on the FDI data link files until IRS has revised its regulations to specifically mention the data link project, according to BEA officials. Therefore, Census cannot disclose to BEA any data Census has obtained directly from IRS until Census has verified the data through its own surveys. According to BEA officials, Census and IRS are currently developing an agreement to modify IRS’ implementing regulations for title 26 of the United States Code so that BEA staff who are sworn Census employees may be granted access to IRS data contained in Census’ files. The agencies do not anticipate that this action will require any legislative changes in either title 13 or title 26 of the United States Code. BEA would like access to BLS’ business-confidential establishment data to evaluate differences between BLS’ and Census’ establishment databases.
However, according to BLS officials, BLS has pledged not to disclose any of the business-confidential employment and wage data it obtains from the states under cooperative agreements. BLS officials told us that BLS would have to get permission from each of the states before any such data could be released to BEA, or any other agency. Although Public Law 101-533 neither prohibits nor requires BEA access to BLS’ business-confidential establishment data, section 8(e)(2) of Public Law 101-533 states that the Director of the Office of Management and Budget shall be responsible for resolving questions on access to information with regard to any exchange of information between BEA and BLS. At this time, the agencies have not requested mediation on these issues from OMB. The Foreign Direct Investment and International Financial Data Improvements Act of 1990 directs us to analyze and report on Commerce’s first three annual reports on FDIUS and review government efforts to improve the quality of FDIUS data. Specifically, our objectives were to (1) assess the extent to which Commerce’s second and third reports—issued in 1993 and 1995—fulfilled the requirements of the 1990 act and addressed the recommendations in our 1992 review; (2) review the process by which federal agencies collect FDI data; (3) review the status and processes of the data exchanges, or links, initiated by the 1990 act between the Commerce Department’s Bureau of Economic Analysis (BEA) and its Bureau of the Census and between BEA and the Labor Department’s Bureau of Labor Statistics (BLS); and (4) evaluate the extent to which implementation of the act has brought about the intended improvements in public information on FDI in the United States. 
To assess how well Commerce fulfilled the reporting requirements of the 1990 act, we reviewed the 1993 and 1995 reports, with specific attention to Commerce’s coverage of the data requirements of the act and to the overall quality of Commerce’s analysis of the potential effects of FDI on the U.S. economy. In addition, we evaluated the extent to which the 1993 and 1995 reports responded to the recommendations in our 1992 report. We used standard economic principles in our review and evaluation of the Commerce reports, with special attention to the chapters relating to the implications of FDIUS for U.S. trade, technology transfer, tax payment, employment, and banking issues. We relied on internal economists as well as an outside economist with expertise in FDIUS issues to carry out this evaluation. We also consulted Commerce officials frequently in the conduct of our review to ensure consideration of their views in our findings. In evaluating Commerce’s reports, we considered the following factors:

Organizational structure: We considered whether the organizational structure of the reports as a whole and individual chapters (1) facilitated discussion of key FDIUS issues, (2) presented principal findings in a logical, consistent format, and (3) used tables and graphics effectively to highlight the trends in FDIUS and describe the characteristics of U.S. affiliates of foreign-owned firms.

Sufficiency of evidence for principal findings: To evaluate the sufficiency of Commerce’s support for its principal findings, we considered whether the reports (1) presented convincing evidence to establish causal relationships, (2) identified limitations in the data available or used, (3) used appropriate analytical techniques to address specific questions, and (4) qualified conclusions where appropriate.

Coverage of the data requirements of the 1990 act: We reviewed the reports to determine the extent to which they included discussion of the data requirements of the 1990 act.
To the extent of available data, the act requires Commerce to compare business enterprises controlled by foreign persons with other business enterprises in the United States with respect to employment, market share, value added, productivity, research and development, exports, imports, profitability, taxes paid, and investment incentives and services provided by state and local governments, including quasi-government entities.

Coverage of specific industry sectors: We assessed the extent to which the reports included discussion of most of the major industry sectors identified in the SIC system at the 2-digit and 3-digit levels, as well as specific industries with higher levels of foreign direct investment and/or those that involve the use or production of advanced technologies. Where appropriate, we also evaluated Commerce’s presentation of the 4-digit SIC data made available through the BEA-Census and BEA-BLS data links.

Coverage of major investing countries: We determined the extent to which the reports included coverage of the countries with the highest shares of direct investment in the United States, which included Japan, the United Kingdom, Canada, Germany, France, and Switzerland in 1993.

Use of relevant outside studies: We evaluated the extent to which the reports included reference to current FDIUS publications by major academic or research institutions and to economists with recognized expertise in FDI issues.

To identify and obtain information on significant FDI research and policy developments, we reviewed current literature on FDIUS and attended conferences where researchers presented the results of recent FDIUS studies. In addition, we consulted with outside experts in government and the research communities to obtain their perspectives on the Commerce reports and on our principal findings.
To obtain information on federal government FDI data collection activities, we interviewed officials from BEA, Census, and BLS, and obtained documents outlining their data collection processes, as well as current examples of relevant survey questionnaires. We also consulted past GAO and Commerce reports that discussed federal government FDI data collection efforts outside of the Departments of Commerce and Labor. To review the status and processes of the interagency data exchanges required by the 1990 act, we interviewed officials with responsibility for such activities in BEA, Census, and BLS. These officials provided us with detailed verbal and documentary descriptions of the steps required to perform the data exchanges. In addition, in June 1995 we observed a demonstration of the data link reconciliation process at the Census Bureau. To evaluate the extent to which the implementation of the 1990 act has led to improvements in FDIUS data, we considered factors such as the contribution of the BEA-Census and BEA-BLS data exchange programs, the overall quality and coverage of the Commerce Department reports since 1991, and Commerce’s changing approach to fulfilling its reporting requirements under the act. In addition to our usual quality assurance procedures, we asked an outside research economist with expertise in FDIUS issues to review a draft of the report and provide comments. We have incorporated his suggestions where appropriate. We performed our review in Washington, D.C., from January 1995 to August 1995 in accordance with generally accepted government auditing standards.

Major contributors to this report: Curtis F. Turnbow, Assistant Director; Sara B. Denman, Senior Evaluator; Carolyn M. Black-Bagdoyan, Evaluator; Jane-yu H. Li, Senior Economist; Martin de Alteriis, Social Science Analyst; Elizabeth J. Sirois, Adviser; Rona Mendelsohn, Evaluator (Communications Analyst); Herbert I. Dunn, Senior Attorney.
Pursuant to a legislative requirement, GAO reviewed the Department of Commerce's first three annual reports on foreign direct investment in the United States (FDIUS) and governmental efforts to improve the quality of FDIUS data, focusing on: (1) the extent to which Commerce's reports fulfilled legislative requirements and addressed prior GAO recommendations; (2) how FDIUS data is obtained; (3) the status of data sharing between the Bureau of Economic Analysis (BEA) and the Bureau of the Census and between BEA and the Bureau of Labor Statistics (BLS); and (4) the extent to which implementing legislation has improved public information on FDIUS. GAO found that: (1) Commerce's FDIUS reports included all of the applicable data requirements and responded to prior GAO recommendations; (2) the reports' analyses and conclusions relating to FDIUS economic effects were generally thorough and reasonable, but in a few instances, Commerce's conclusions were more definitive than evidence warranted; (3) BEA obtains FDIUS information through survey questionnaires that require U.S.
affiliates of foreign firms to report on financial and operating data; (4) BEA has strengthened its survey procedures and increased its staff devoted to survey follow-up in order to ensure compliance with reporting requirements; (5) BEA-Census and BEA-BLS data sharing efforts have generated data on U.S. affiliates of foreign firms at a greater level of detail than was previously available, allowing Commerce to draw more meaningful conclusions in its reports; (6) certain restrictions and factors related to the protection of confidential data continue to limit more extensive data sharing among federal agencies; and (7) Commerce has fulfilled the legislative requirements by improving the quantity and quality of FDIUS data, resulting in both government officials and private sector analysts gaining access to previously unavailable FDIUS data.
The objective of the Clean Water Act is to restore and maintain the chemical, physical, and biological integrity of the nation’s waters. The Congress established a series of national goals and policies to achieve this objective, including what is referred to as “fishable/swimmable” water quality. That is, whenever attainable, the quality of the water should be such that it provides for the protection of fish, shellfish, wildlife, and recreation in and on the water. To help meet national water quality goals, the act established the National Pollutant Discharge Elimination System (NPDES) program, which limits the discharge of pollutants through two basic approaches—one based on technology and the other on water quality. Under the technology-based approach, facilities must stay within the discharge limits attainable under current technologies for treating water pollution. The Environmental Protection Agency (EPA) has issued national minimum technology requirements for municipal facilities and 50 categories of industrial dischargers. The states’ and EPA’s permitting authorities use these requirements to establish discharge limits for specific pollutants. In contrast, under the water-quality-based approach, facilities must meet discharge limits derived from states’ water quality standards, which generally consist of (1) “designated uses” for the water bodies (e.g., propagation of fish and wildlife, drinking water, and recreation) and (2) narrative or numeric criteria to protect the designated uses. Narrative criteria are generally statements that describe the desired water quality goal, such as “no toxics in toxic amounts.” Numeric criteria for specific pollutants are generally expressed as concentration levels and target certain toxic pollutants that EPA has designated as “priority pollutants.” In addition to adopting water quality standards, the states may also establish policies concerning certain technical factors that affect the implementation of the standards in the discharge permits.
For example, many states have adopted policies for (1) establishing mixing zones (limited areas where discharges mix with receiving waters and where the numeric criteria can be exceeded), (2) determining the amount of available dilution (the ratio of the low flow of the receiving waters to the flow of the discharge), and (3) considering background concentration (the levels of pollutants already present in the receiving waters). When the states’ and EPA’s permitting authorities are deciding how extensively the pollutants should be controlled in a facility’s permit, they first look to the technology-based standards. If the discharge limits derived by applying these standards are not low enough to protect the designated uses of the applicable water body, the permitting authorities turn to the state’s water quality standards to develop more stringent limits. To achieve the tighter limits, a facility may have to install more advanced treatment technology or take measures to reduce the amounts of pollutants needing treatment. For additional information on the role of EPA’s headquarters and regional offices and the state agencies in establishing standards and implementing them in permits, see appendix I. As agreed with your office, this report focuses on the water-quality-based approach to controlling pollution and the way the states’ and EPA’s permitting authorities are implementing water quality standards in the NPDES permits issued to “major” facilities. As of July 1995, approximately 59,000 municipal and industrial facilities nationwide had received permits under the NPDES program, and about 6,800 of these permits were for major facilities, including about 4,000 municipal facilities and 2,800 industrial dischargers. Our review of the data on municipal permits for five commonly discharged toxic pollutants disclosed that decisions about whether and how to control pollutants differed both from state to state and within states. 
In some instances, differences in the limits themselves, or in the standards and policies used to derive the limits, have led to concerns between neighboring states. Using EPA’s Permits Compliance System database, we extracted data on the 1,407 permits issued to municipal wastewater treatment facilities nationwide between February 5, 1993, and March 21, 1995, to determine what types of controls, if any, the states’ and EPA’s permitting authorities had imposed in these facilities’ discharge permits for five toxic metal pollutants—cadmium, copper, lead, mercury, and zinc. We found that when the permitting authorities decided that some type of control was warranted, some consistently established numeric discharge limits in their permits, and others imposed monitoring requirements in all or almost all cases. For example, North Carolina issued 93 permits during our review period and, whenever it determined that a pollutant warranted controls, it always established numeric discharge limits rather than impose monitoring requirements. Other states, such as New York and West Virginia, also consistently established numeric limits for controlling the five pollutants we examined. In contrast, New Jersey issued 44 permits during our review period and, except for 1 permit that contained a limit for copper, the state always imposed monitoring requirements instead of discharge limits when the state determined that controls were warranted. Oregon, among other states, made similar decisions, as did EPA’s Region VI when it wrote permits for Louisiana, a state not authorized to issue NPDES permits. We also found that some states, such as Vermont and Arkansas, had not imposed discharge limits or monitoring requirements in the following instances in which EPA’s regional officials said that such controls may be warranted. In Vermont, none of the discharge permits for major municipal facilities contained discharge limits or monitoring requirements for the five metals. 
However, at our request, the cognizant EPA regional staff (in Region I) reviewed 4 of the 15 municipal permits issued by Vermont and determined that for 2 of the facilities, limits or monitoring requirements would probably be appropriate. Vermont officials agreed to review the permits and consider additional requirements. Arkansas, with one exception, had not imposed either limits or monitoring requirements in its municipal permits for the toxic metals we examined. State officials are allowing these facilities to continue operating under “old” permits rather than reissuing them. The officials told us that if the permits were to be formally reopened, the state would be obligated to apply EPA-imposed water quality standards for the metals. Arkansas officials believe that these standards are too stringent and that the facilities would engage the state in a costly appeal process if limits were imposed. Officials from the cognizant EPA regional office (Region VI) said that the Arkansas permits should contain discharge limits but that EPA does not have the authority to impose such limits in a state authorized to issue permits when the state simply declines to reissue them. EPA’s only recourse would be to take back responsibility for the program—an unrealistic option. For facilities, both monitoring requirements and discharge limits can be costly to implement. According to officials in EPA’s Permits Division, the costs of monitoring depend on the frequency of required sampling and on the types of pollutants that must be tested. The costs of installing advanced treatment equipment to meet discharge limits are usually much higher. These officials also said that because of these differences in cost, the facilities that are subject to monitoring requirements generally enjoy an economic advantage over the facilities that must meet discharge limits, all other things being equal. 
Furthermore, the facilities that are subject to neither type of control enjoy an economic advantage over the facilities that must comply with limits or monitoring requirements. Overall, our analysis disclosed that for each of the five pollutants, about 30 percent of the permits contained limits or monitoring requirements, while about 70 percent contained neither type of control. According to EPA’s permitting regulations and guidance, there can be legitimate reasons for imposing no controls over some pollutants: Generally, either the facilities are not discharging the pollutants or their discharges are deemed too low to interfere with the designated uses of the applicable water bodies. See appendix II for a summary of the control decisions across the nation for the five toxic pollutants included in our analysis and for additional discussion of the reasons for not imposing limits or monitoring requirements on some pollutant discharges. Appendix III presents a state-by-state breakdown of the 1,407 permits included in our analysis. EPA and the states agree that differences in the numeric limits for specific pollutants can and do exist—not only from state to state, but from water body to water body. To illustrate these differences, we extracted data on numeric limits as part of our analysis of EPA’s data on municipal permits. Specifically, from the 1,407 permits for municipal facilities issued nationwide between February 5, 1993, and March 21, 1995, we identified those facilities discharging into freshwater (1) whose permits contained discharge limits for one or more of the five toxic metals and (2) whose plant capacity, or design flow, was included in EPA’s database. For each of the five pollutants, we found significant differences in the amounts that facilities were allowed to discharge across the nation—even for facilities of similar capacity. In the case of zinc, both the highest and the lowest limits were established in the same state. 
Figure 1 shows the results of our analysis. As figure 1 indicates, differences in the numeric limits for the same pollutant can be significant—in the case of mercury, about 775 times greater at one facility than at another facility of similar capacity. We discuss the causes of the differences in discharge limits later in this report. Variations in the discharge limits, or in the standards and procedures used to derive these limits, have been a source of concern, particularly when neighboring jurisdictions share water bodies and the differences are readily apparent to the permitting authorities and discharging facilities, as the following examples illustrate: In 1995, an industrial facility in Pennsylvania challenged a discharge limit for arsenic because Pennsylvania’s numeric criterion was 2,500 times more stringent than that used by the neighboring state of New York, into which the discharge flowed. Among other things, the discharger argued that having to comply with the more stringent criterion created an economic disadvantage for the company. Eventually, Pennsylvania agreed to reissue the permit with a monitoring requirement for arsenic instead of a discharge limit. The state has also revised its water quality standards using the less stringent criterion. Oklahoma challenged the 1985 permit that EPA issued to an Arkansas municipal wastewater treatment facility that discharges into a tributary of the Illinois River. One of the key issues in the case was Oklahoma’s contention that the facility’s permit, which was based on Arkansas’s water quality standards, contained limits that would violate Oklahoma’s water quality standards when the facility’s discharge moved downstream. As a result, Oklahoma officials maintained, the river would not achieve its designation as “outstanding natural resource water,” a special classification designed to protect high-quality waters. 
Although EPA has the authority to ensure that discharges in the states located upstream do not violate the water quality standards in the states located downstream, the agency determined that this case did not warrant such action, in part because the discharge allowed under the permit would not produce a detectable violation of Oklahoma’s standards. In 1992, the Supreme Court ruled that EPA’s issuance of the Arkansas permit was reasonable. Concerns among states about differences in water quality standards and the policies that affect their implementation may become more common in the future. According to a recent analysis by the U.S. Geological Survey, many states receive more than half of their water pollution from neighboring states. While much of this pollution may be attributed to diffuse—or “nonpoint”—sources, such as agricultural runoff, an official from the U.S. Geological Survey said that the discharges from municipal and industrial facilities allowed under permits also contribute to interstate pollution. Both the act and EPA’s regulations give the states and EPA considerable flexibility in implementing the NPDES program. The permitting authorities differ considerably in how they assess the likelihood that states’ water quality standards will be exceeded, as well as in how they decide what controls are warranted. If they decide that discharge limits are warranted, these limits can differ widely because of differences in the (1) states’ water quality standards and (2) implementation policies that come into play when the permitting authorities “translate” general water quality standards into limits for specific facilities in specific locations. We found differences in how the permitting authorities determine that a pollutant has the “reasonable potential” to violate a state’s water quality standard and prevent the designated use of a water body from being achieved.
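In broad terms, a reasonable-potential screen projects an upper-bound effluent concentration from the samples in hand and compares it with the concentration the applicable criterion would allow after dilution. The sketch below is a simplified, hypothetical illustration of that logic; it is not EPA's published methodology, and the lognormal assumption, the 99th-percentile target, and the sample values are ours.

```python
import math
import statistics

def projected_max_concentration(samples_ug_per_l, percentile_z=2.326):
    """Project an upper-bound effluent concentration by fitting a
    lognormal distribution to the observed samples. percentile_z = 2.326
    is the standard normal z-score for the 99th percentile. A simplified
    sketch, not EPA's published procedure."""
    logs = [math.log(c) for c in samples_ug_per_l]
    mu = statistics.mean(logs)
    sigma = statistics.stdev(logs) if len(logs) > 1 else 0.0
    return math.exp(mu + percentile_z * sigma)

def reasonable_potential(samples_ug_per_l, criterion_ug_per_l, dilution_factor):
    """True if the projected effluent concentration exceeds what the
    water quality criterion allows after dilution (background levels and
    mixing-zone details are ignored here for simplicity)."""
    allowed = criterion_ug_per_l * dilution_factor
    return projected_max_concentration(samples_ug_per_l) > allowed

# Hypothetical copper samples (micrograms per liter) from one facility:
samples = [12.0, 18.0, 9.0, 25.0]
print(reasonable_potential(samples, criterion_ug_per_l=9.0, dilution_factor=4.0))
```

With only a handful of samples, the projection is highly sensitive to the estimated variability, which helps explain why some permitting authorities are willing to set limits on limited data while others first impose monitoring requirements to collect more.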
In EPA’s Region I, for example, the permitting officials believe that one or two samples indicating the potential for a violation may suffice to justify imposing a discharge limit. In contrast, given the same evidence, officials in EPA’s Region VI generally impose requirements for monitoring in order to collect data over a longer period of time—up to the 5-year life of the permit. Officials in the Permits Division at EPA headquarters agreed that there are differences in how the states’ and EPA’s permitting authorities decide whether and how to impose controls over pollutant discharges. The officials said that a key element in these differences is the amount and type of data the authorities require to determine reasonable potential; some permitting authorities are comfortable with establishing discharge limits on the basis of limited information, while others want to collect more data and impose monitoring requirements. To assist the states and EPA’s regional offices, EPA has issued national guidance, including a suggested methodology and other options for determining reasonable potential. However, Permits Division officials emphasized that the law and applicable regulations provide for flexibility in decisions on reasonable potential and other aspects of the NPDES program. The states have exercised the flexibility available within the Clean Water Act and EPA’s regulations to (1) adopt different water quality standards and (2) apply different policies in implementing these standards in permits. As a result of these differences, discharge limits can vary significantly even, as illustrated earlier, for facilities of similar capacity. In the case of states’ water quality standards, the designated use assigned to a particular water body can affect how stringent a facility’s discharge limit will be. 
For example, if a facility is discharging into a water body designated for recreational use, the discharge limits are likely to be less stringent than they would be if the water body were designated for use as a drinking water supply. Water quality standards also differ in terms of the numeric criteria the states adopt to ensure that the designated uses of the water will be achieved or maintained. EPA has provided guidance to the states on developing these criteria. Some states have adopted EPA’s numeric criteria (e.g., a human health criterion for mercury that allows for no more than 0.144 micrograms per liter) as their own, and others have developed different criteria that reflect regional conditions and concerns. For example, Texas modified EPA’s criteria to account for higher rates of fish consumption in the state. Another significant source of differences in the states’ water quality standards is the cancer risk level that is selected for carcinogenic pollutants. For example, Connecticut typically bases its numeric criteria for these pollutants on a risk level of 1 excess cancer case per 1 million people, while Arkansas bases its criteria on a risk level of 1 excess cancer case per 100,000 people. Thus, Connecticut’s criteria are 10 times more stringent than Arkansas’s. Many states have established implementation policies that can significantly affect the application of water quality standards in establishing the discharge limits for individual facilities. These policies address technical factors such as mixing zones, dilution, and background concentration. The states differ in their policies for mixing zones—limited areas where the facilities’ discharges mix with the receiving waters and numeric criteria can be exceeded. The states’ policies can influence the stringency of the discharge limits by restricting where such zones are allowed and/or by defining their size and shape. 
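The effect of the cancer risk level can be seen in the general form of a linear human health criterion equation, under which the criterion is directly proportional to the chosen risk level. The sketch below uses that general form; the parameter values (body weight, slope factor, water and fish intake rates, bioconcentration factor) are illustrative assumptions, not figures from this report.

```python
def human_health_criterion(risk_level, body_weight_kg=70.0,
                           slope_factor=1.5,      # (mg/kg-day)^-1, illustrative
                           water_intake_l=2.0,    # drinking water, L/day
                           fish_intake_kg=0.0065, # fish consumption, kg/day
                           bcf_l_per_kg=50.0):    # bioconcentration factor, L/kg
    """Ambient water concentration (mg/L) corresponding to a chosen
    excess cancer risk level, using the general form of a linear
    low-dose human health criterion equation. Parameter values are
    illustrative assumptions."""
    # Total water-equivalent exposure: direct ingestion plus fish tissue.
    daily_exposure_l = water_intake_l + fish_intake_kg * bcf_l_per_kg
    return (risk_level * body_weight_kg) / (slope_factor * daily_exposure_l)

c_strict = human_health_criterion(1e-6)  # 1 excess case per 1,000,000 people
c_looser = human_health_criterion(1e-5)  # 1 excess case per 100,000 people
print(c_looser / c_strict)  # ~10: the criterion scales linearly with risk level
```

At these assumptions, moving from a risk level of 1 in 1 million to 1 in 100,000 raises the allowable concentration tenfold, matching the Connecticut-Arkansas comparison above; raising the fish consumption rate, as Texas did, works in the opposite direction and tightens the criterion.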
In Texas, for instance, the size of mixing zones in streams is typically limited to an area 100 feet upstream and 300 feet downstream from the discharge point; other states apply different standards or do not allow mixing zones in some types of water bodies. In general, the discharge limits will be less stringent for a facility located in a state that allows mixing zones than for a facility in a state that requires facilities to meet numeric criteria at the end of the discharge pipe. The states’ policies on dilution—the ratio of the low flow of the receiving waters to the flow of the discharge—can also influence the stringency of the discharge limits. In general, the larger the volume of the receiving waters available to dilute, or reduce the concentration of, the pollutants being discharged, the less stringent the discharge limit. Thus, all other things being equal, the discharge limit for a facility located on the Mississippi River will be less stringent than the limit for a similar facility located on a smaller river. The states also use different assumptions in computing the flow of a facility’s discharge (e.g., the highest monthly average during the preceding 2 years or the highest 30-day average expected during the life of the permit) and the low flow of the receiving waters (e.g., the lowest average flow during 7 consecutive days within the past 10 years or the lowest 1-day flow that occurs within 3 years). The states also have different policies on background concentration—the level of pollutants already present in the receiving waters as a result of naturally occurring pollutants, permitted discharges from upstream, spills, unregulated discharges, or some combination of these sources. In general, the higher the level of the background concentration, the more stringent the discharge limit will be because the extent of the existing pollution affects the amounts that facilities may discharge without violating the water quality standards. 
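Dilution and background concentration enter the limit calculation through a simple steady-state mass balance: the load a facility may add is whatever the fully mixed stream can carry at the criterion concentration, minus the load already present. The sketch below is a simplified version of that balance; the criterion, flows, and background values are hypothetical.

```python
def effluent_limit(criterion, stream_low_flow, effluent_flow, background=0.0):
    """Allowable effluent concentration from a steady-state mass balance:
    the criterion concentration times the combined flow, minus the
    pollutant load already carried by the stream, divided by the effluent
    flow. Concentrations share one unit; flows share another (e.g.,
    million gallons per day)."""
    combined_flow = stream_low_flow + effluent_flow
    limit = (criterion * combined_flow - background * stream_low_flow) / effluent_flow
    return max(limit, 0.0)  # a limit cannot be negative

# Hypothetical copper criterion of 9 ug/L, a 20-MGD stream low flow,
# and a 2-MGD discharge:
print(effluent_limit(9.0, 20.0, 2.0, background=0.0))  # 99.0 ug/L
print(effluent_limit(9.0, 20.0, 2.0, background=5.0))  # 49.0 ug/L
```

Assuming zero background yields the higher limit, while using measured background tightens it; likewise, a larger receiving stream relaxes the limit because more dilution is available.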
Connecticut, for example, assumes background concentrations of zero in deriving limits, while Colorado uses actual data. All other things being equal, the discharge limits established by Connecticut will be less stringent than those set by Colorado whenever the actual background concentration is greater than zero. EPA, through its regional offices, periodically reviews the states’ water quality standards; if it determines that the standards are inconsistent with the requirements of the Clean Water Act—because, for example, the standards do not adequately protect the designated uses of the water or are not scientifically defensible—it disapproves them. However, EPA does not consistently review policies that could significantly affect the implementation of the standards in permits, either when a state submits its standards for approval or when an EPA regional office reviews individual permits before they are issued. As a result of an apparent inconsistency in EPA’s regulations, some states are not including the relevant implementation policies when they submit their water quality standards to EPA for review and approval. According to the regulations, the states must submit to EPA for review information on the designated uses of their waters and the numeric or narrative criteria for specific pollutants as well as “information on general policies” that may affect the application or implementation of the standards. However, EPA’s regulations also provide that the states may exercise discretion over what general policies they include in their standards. In EPA’s regions I and VI, for example, program officials believe that (1) the states are under no obligation to submit their implementation policies, such as their policies on considering background concentration, for EPA’s review and (2) EPA cannot require the states to do so. 
Officials at EPA’s headquarters and regional offices acknowledge that there is some confusion about what information the states must submit for review. EPA officials maintain that even if the agency has not reviewed the states’ implementation policies in the course of approving the standards, it can control the use of these policies when EPA’s regional offices review individual permits and have the opportunity to disapprove those permits that do not adequately protect water quality. However, on average EPA’s regional offices review only about 10 percent of the permits issued to major facilities by the 40 states authorized to issue permits. Moreover, EPA is considering a new initiative that will eliminate reviews of permits before issuance and will instead provide for postissuance reviews of a sample of permits. According to the Acting Director of EPA’s Permits Division, such reviews are a better use of EPA’s resources because they require less staff time and EPA’s reviewers will not be pressured to meet deadlines for public comment. However, he said that, as a general rule, EPA will not reopen permits. Thus, identified problems may not be addressed until the permits come up for renewal, usually every 5 years. If EPA becomes aware of a significant problem, the regional office will work with the applicable state to attempt to remedy the situation. Because EPA relies on its regional offices to oversee the states’ implementation policies, it does not maintain national information on these policies. Moreover, except for some efforts by its regional offices, EPA has not assessed the impact of the differences among the states. EPA headquarters officials told us that although such an assessment might be useful, they have no plans to conduct one, in part because they do not have the resources or a specific legislative requirement to do so. 
In some instances, EPA’s regional offices have tried to identify and resolve differences in the states’ implementation policies because they have been concerned about the extent of these differences. However, some states have resisted these initiatives on the basis that they should not be required to comply with policies that are not required nationwide. EPA is considering regulatory changes that could enhance the agency’s ability to monitor the states’ implementation policies. According to a March 1995 draft of an advance notice of proposed rulemaking, EPA plans to solicit comments on, among other things, the kind of information on implementation policies that the states should be required to submit for EPA’s approval. In the case of mixing zones, for example, EPA is seeking comments on whether the states should be required to describe their methods for determining the location, size, shape, and other characteristics of the mixing zones that they will allow. The Chief of EPA’s Water Quality Standards Branch told us that although other priorities could postpone the rulemaking, EPA has not revised the applicable regulations since 1983, and some changes are therefore needed. While potential regulatory changes are as yet undefined, the Office of Water has embarked on a strategy for watershed management that could, by itself, achieve greater consistency among the states’ NPDES programs, including the standards and policies the states use to derive the discharge limits for the facilities within the same watershed. Watershed management means identifying all sources of pollution and integrating controls on pollution within hydrologically defined drainage basins, known as watersheds. 
Under this approach, all of the stakeholders in a watershed’s area—including federal, state, and local regulatory authorities; municipal and industrial dischargers; other potential sources of pollution; and interested citizens—agree on how best to restore and maintain water quality within the watershed. In March 1994, the Permits Division of EPA’s Office of Water published its NPDES Watershed Strategy to describe the division’s plans for incorporating the NPDES program’s functions into the broader watershed management approach. Although the strategy does not specifically discuss interstate watersheds, EPA officials believe that the states will identify such areas and, where reasonable, coordinate the issuance of NPDES permits. EPA officials believe that as a practical matter, the watershed management approach will cause the states to resolve differences in their standards and implementation policies as they attempt to issue NPDES permits consistently in shared water bodies and watersheds. We provided copies of a draft of this report to EPA for its review and comment, and on December 15, 1995, EPA provided us with comments from its Acting Director, Permits Division, Office of Water. In addition to some technical and editorial suggestions, which we incorporated as appropriate, EPA had the following two comments. According to EPA, the results-in-brief section of the draft drew too stark a picture of the limitations of EPA’s reviews of the states’ programs. EPA said that its regional offices do review the states’ standards and implementation policies and that they do consider the impact of variations among the states in their reviews. Nevertheless, EPA said that its reviews of the states’ implementation policies could be more exhaustive and that more could be done to help ensure appropriate levels of consistency among the states, assuming adequate resources. 
We revised that section of the report to better recognize the extent of EPA’s reviews of the states’ standards and implementation policies, and to better pinpoint the limitations of these reviews. EPA also said that the results-in-brief section of the draft could leave the impression that the only reason for differences among the states is that the Clean Water Act provides for flexibility, when inherent differences in surface waters across the country could themselves result in different standards and water-quality-based permitting requirements among the states. We revised that section of the report to recognize this reason for differences. We performed most of our work at the Permits Division and the Water Quality Standards Branch, Office of Wastewater Management, EPA headquarters; EPA Region I in Boston, Massachusetts, Region VI in Dallas, Texas, and Region VIII in Denver, Colorado; and state NPDES program offices in Arkansas, Colorado, Connecticut, Massachusetts, Texas, and Utah. We conducted our review from July 1994 through December 1995 in accordance with generally accepted government auditing standards. For a more detailed description of our scope and methodology, see appendix IV. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 10 days after the date of this letter. At that time, we will send copies to the Administrator, EPA; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others on request. Please call me on (202) 512-6112 if you or your staff have any questions. Major contributors to this report are listed in appendix V. 
Figure I.1 illustrates the roles and responsibilities of the Environmental Protection Agency (EPA) and the state agencies in developing water quality standards and implementing them in the permits issued to municipal and industrial wastewater treatment facilities under the National Pollutant Discharge Elimination System (NPDES) program.

EPA headquarters (national guidance and standards): Within the Office of Science and Technology, the Engineering and Analysis Division issues national technology standards for municipal and industrial facilities; the Health and Ecological Criteria Division issues chemical-specific numeric water quality criteria as guidance for states to use in adopting water quality standards; and the Standards and Applied Sciences Division issues regulations and guidance for states to use in implementing their water quality standards. The Permits Division issues national guidance and regulations for states and EPA regions to use in issuing NPDES permits.

EPA regional offices (permit policies and implementation): The regional offices approve states’ water quality standards; oversee states’ NPDES programs, including reviewing state-issued NPDES permits for compliance with the appropriate standards; and issue permits for states that do not have authorized NPDES programs.

State agencies: The states adopt water quality standards, including designated uses and numeric criteria for specific pollutants, and issue permits to municipal and industrial dischargers.

EPA issues guidance on water quality criteria for specific pollutants that the states may use in developing numeric criteria for their water quality standards. States may also use other data to develop their numeric criteria as long as these criteria are scientifically defensible. The states’ water quality standards—and any policies that affect the implementation of these standards—are subject to EPA’s approval.
In determining whether water-quality-based controls are warranted, the states’ and EPA’s permitting authorities (1) analyze a facility’s wastewater to identify the type and amount of pollutants being discharged and (2) determine whether these levels of pollutants will cause, have a “reasonable potential” to cause, or will contribute to causing the facility’s discharge to exceed the state’s water quality criteria. This assessment has one of three possible effects on a facility’s permit: It may result in (1) a discharge limit, if the amount of pollutants being discharged violates, is likely to violate, or will contribute to violating the criteria that protect the receiving waters; (2) a requirement for monitoring to gather additional data in order to determine whether a limit is warranted; or (3) neither a limit nor a monitoring requirement, if the amount of pollutants being discharged will not violate, is unlikely to violate, or will not contribute to violating the criteria that protect the receiving waters. For each of the five toxic metal pollutants included in our analysis, figure II.1 shows the number of permits that contained discharge limits, the number that contained monitoring requirements, and the number that contained neither type of control. Municipal wastewater treatment facilities receive wastewater from several sources, including industry, commercial businesses, and households. This wastewater is likely to include toxic pollutants, primarily from industrial sources whose waste must be pretreated to reduce or eliminate such pollutants before it enters the municipal treatment facilities. According to officials in EPA’s Permits Division, a major reason for the lack of discharge limits and monitoring requirements is the existence of effective pretreatment programs. 
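The three possible permit outcomes described above (a discharge limit, a monitoring requirement, or neither) amount to a simple decision procedure. The sketch below illustrates only that three-way logic; the function name, inputs, and the 80-percent monitoring margin are assumptions made for the example, not EPA's actual "reasonable potential" methodology.

```python
# Illustrative sketch only: the names, inputs, and the 0.8 monitoring
# margin are assumptions, not EPA's actual "reasonable potential" analysis.

def permit_control(projected_level, criterion, reasonable_potential,
                   monitoring_margin=0.8):
    """Return the control, if any, a permit would contain for one pollutant.

    projected_level      -- projected pollutant level in the receiving water
    criterion            -- the state's numeric water quality criterion
    reasonable_potential -- True if the discharge may cause or contribute
                            to exceeding the criterion
    """
    # Outcome (1): a discharge limit when the criterion is, or may be, exceeded.
    if projected_level > criterion or reasonable_potential:
        return "discharge limit"
    # Outcome (2): monitoring to gather more data when levels approach the criterion.
    if projected_level > monitoring_margin * criterion:
        return "monitoring requirement"
    # Outcome (3): neither a limit nor a monitoring requirement.
    return "no control"
```

For instance, under these assumptions a facility whose projected copper level exceeds the state criterion would receive a discharge limit, while one well below the criterion would receive no control.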
These officials believe that because such programs play an important role in reducing the level of toxic pollutants entering municipal treatment facilities, the lack of controls disclosed in our analysis is not surprising. They consider our findings to be an indication that the pretreatment programs are working as intended. However, other factors suggest that additional controls may be warranted. First, officials from EPA’s Permits Division acknowledge that in some cases, the permitting authorities have been slow to impose controls in municipal permits on the discharges of toxic metals or to adopt numeric criteria for such metals in their water quality standards. In addition, the pretreatment programs primarily focus on industrial customers and, as we reported in 1991, nonindustrial wastes from both commercial and residential sources can be a significant source of toxic pollutants entering municipal wastewater treatment facilities. According to the most comprehensive study cited in the report (a 1979 EPA survey of municipal treatment facilities in four major cities), nonindustrial sources contribute nearly 70 percent of the copper and over 30 percent of the lead, mercury, and zinc entering the municipal facilities. Our report also cited other, more recent studies that identified significant contributions of toxic metals from nonindustrial sources. The following table, based on data extracted from EPA’s Permits Compliance System database, shows the types of controls, if any, imposed by the states’ and EPA’s permitting authorities for the five toxic metals in all of the permits issued to major municipal wastewater treatment facilities from February 5, 1993 through March 21, 1995. 
For each of the five pollutants, the table lists (1) “Limits”—the number of permits that contained discharge limits for the selected pollutants, (2) “Monitor”—the number of permits that required facilities to monitor the level of pollutants in their discharge, and (3) “None”—the number of permits that contained no controls. To obtain nationwide information on variations in whether and how pollutants are controlled in discharge permits, we extracted information from EPA’s Permits Compliance System database on the 1,407 permits issued to major municipal wastewater treatment facilities from February 5, 1993, through March 21, 1995. We analyzed these data to determine the type of controls, if any, on five toxic metal pollutants typically discharged by municipal facilities (cadmium, copper, lead, mercury, and zinc). For the permits that contained discharge limits for the five selected pollutants, we obtained those limits to determine the range for each pollutant at facilities of similar capacity. For the permits that contained the highest and lowest limits, we verified the information in EPA’s database with the applicable EPA regional office. As agreed with the requester’s office, we did not attempt to determine the appropriateness of the differences in discharge limits because such an assessment would have been too complex and time-consuming. We confined our analysis of variations in the discharge limits to municipal facilities because EPA’s Permits Compliance System database does not contain information that distinguishes between technology-based and water-quality-based discharge limits for industrial facilities. However, because EPA has not issued any technology-based standards for toxic pollutants that are applicable to municipal facilities, the discharge limits for such pollutants were derived from water-quality-based standards. 
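The tally summarized in the table ("Limits," "Monitor," "None" for each metal) can be sketched as follows. This is a minimal illustration assuming a simplified record layout (one dictionary per permit); it is not the actual schema of EPA's Permits Compliance System database.

```python
# Minimal sketch of the permit tally described above. The record layout
# (one dict per permit, mapping metal -> "limit", "monitor", or absent)
# is an assumption, not the Permits Compliance System schema.

METALS = ["cadmium", "copper", "lead", "mercury", "zinc"]

def tally_controls(permits):
    """Count, for each metal, permits with a limit, monitoring only, or neither."""
    counts = {metal: {"Limits": 0, "Monitor": 0, "None": 0} for metal in METALS}
    for permit in permits:
        for metal in METALS:
            control = permit.get(metal)
            if control == "limit":
                counts[metal]["Limits"] += 1
            elif control == "monitor":
                counts[metal]["Monitor"] += 1
            else:
                counts[metal]["None"] += 1
    return counts

# Example: three hypothetical permits.
example = [
    {"copper": "limit", "lead": "monitor"},
    {"copper": "limit"},
    {"zinc": "monitor"},
]
```

Running the tally over the three hypothetical permits would, for example, count two copper limits and one copper permit with no control.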
According to EPA, water-quality-based controls are considered for virtually all major facilities, and an estimated 30 percent of the permits for these facilities nationwide actually contain limits based on water quality. To obtain information on the causes of the variations in the states’ NPDES permits, we interviewed the EPA officials responsible for the NPDES and Water Quality Standards programs at the agency’s headquarters in Washington, D.C., and regional offices in Boston, Massachusetts (Region I); Philadelphia, Pennsylvania (Region III); Chicago, Illinois (Region V); Dallas, Texas (Region VI); and Denver, Colorado (Region VIII). We reviewed the applicable provisions of the Clean Water Act, EPA’s regulations, and guidance on NPDES permits and water quality standards. We also interviewed state officials in Arkansas, Colorado, Connecticut, Massachusetts, Pennsylvania, Texas, and Utah and reviewed documents on these states’ water quality standards, implementation policies, and NPDES permitting activities. To obtain information on EPA’s role in monitoring the policies and procedures that the states use in deriving discharge limits, we reviewed applicable regulations and guidance, including EPA’s preliminary draft of an advance notice of proposed rulemaking on potential revisions to the agency’s regulation on water quality standards and EPA’s NPDES Watershed Strategy. We also discussed EPA’s oversight authority with officials from EPA’s Permits Division, Water Quality Standards Branch, Office of General Counsel, and selected regional offices. In addition, we discussed oversight issues with selected states, environmental groups, and municipal and industrial associations. We also obtained limited information from EPA’s Office of Wetlands, Oceans, and Watersheds and Office of General Counsel; additional states and EPA’s regional offices; the U.S. Geological Survey; the U.S. 
Fish and Wildlife Service; environmental groups, including the National Wildlife Federation and the Environmental Defense Fund; and various associations representing state regulators and municipal and industrial dischargers.

Major contributors to this report:
Ellen Crocker, Core Group Manager
Maureen Driscoll, Evaluator-in-Charge
Les Mahagan, Senior Evaluator
Linda Choy, Senior Program Analyst

GAO reviewed the Environmental Protection Agency's (EPA) authority to permit municipal wastewater treatment facilities to discharge pollutants into surface waters, focusing on: (1) differences in how EPA and the states control discharges of specific pollutants; and (2) EPA oversight of state water quality standards and policies.
GAO found that: (1) controls over the discharge of pollutants in surface waters vary by state; (2) differences in state water pollutant controls are a concern to neighboring states that share water bodies; (3) differences in state controls exist because surface waters differ greatly throughout the country and EPA regulations allow for flexibility in the way states assess and control water pollution; (4) EPA has limited oversight of state water quality standards, since it does not maintain sufficient information on state implementation policies or assess the impact of variations among states and it reviews relatively few permits; and (5) EPA plans to enhance its reviews of state implementation policies and increase its emphasis on controlling pollution within watersheds.
The safe travel of U.S. airline passengers is a joint responsibility of FAA and the airlines in accordance with the Federal Aviation Act of 1958, as amended, and the Department of Transportation Act, as amended. To carry out its responsibilities under these acts, FAA supports research and development; certifies that new technologies and procedures are safe; undertakes rule-makings, which, when finalized, form the basis of federal aviation regulations; issues other guidance, such as Advisory Circulars; and oversees the industry’s compliance with standards that aircraft manufacturers and airlines must meet to build and operate commercial aircraft. Aircraft manufacturers are responsible for designing aircraft that meet FAA’s safety standards, and air carriers are responsible for operating and maintaining their aircraft in accordance with the standards for safety and maintenance established in FAA’s regulations. FAA, in turn, certifies aircraft designs and monitors the industry’s compliance with the regulations. FAA’s general process for issuing a regulation, or rule, includes several steps. When the regulation would require the implementation of a technology or operation, FAA first certifies that the technology or operation is safe. Then, FAA publishes a notice of proposed rule-making in the Federal Register, which sets forth the terms of the rule and establishes a period for the public to comment on it. Next, FAA reviews the comments and incorporates into the rule any changes it believes are warranted; in some instances, it repeats these steps one or more times. Finally, FAA publishes a final rule in the Federal Register. The final rule includes the date when it will go into effect and a time line for compliance.
Within FAA, the Aircraft Certification Service is responsible for certifying that technologies are safe, including improvements to cabin occupant safety and health, generally through the issuance of new regulations, a finding certifying an equivalent level of safety, or a special condition when no rule covers the new technology. The Certification Service is also responsible for taking enforcement action to ensure the continued safety of aircraft by prescribing standards for aircraft manufacturers governing the design, production, and airworthiness of aeronautical products, such as cabin interiors. The Flight Standards Service is primarily responsible for certifying an airline’s operations (assessing the airline’s ability to carry out its operations and maintain the airworthiness of the aircraft) and for monitoring the operations and maintenance of the airline’s fleet. FAA conducts research on cabin occupant safety and health issues in two research facilities, the Mike Monroney Aeronautical Center/Civil Aerospace Medical Institute in Oklahoma City, Oklahoma, and the William J. Hughes Technical Center in Atlantic City, New Jersey. The institute focuses on the impact of flight operations on human health, while the technical center focuses on improvements in aircraft design, operation, and maintenance and inspection to prevent accidents and improve survivability. For the institute or the technical center to conduct research on a project, an internal FAA requester must sponsor the project. For example, FAA’s Office of Regulation and Certification sponsors much of the two facilities’ work in support of FAA’s rule-making activities. FAA also cooperates on cabin safety research with the National Aeronautics and Space Administration (NASA), academic institutions, and private research organizations. Until recently, NASA conducted research on airplane crashworthiness at its Langley Research Center in Hampton, Virginia. 
However, because of internal budget reallocations and a decision to devote more of its funds to aviation security, NASA terminated the Langley Center’s research on the crashworthiness of commercial aircraft in 2002. NASA continues to conduct fire-related research on cabin safety issues at its Glenn Research Center in Cleveland, Ohio. NTSB has the authority to investigate civil aviation accidents and collects data on the causes of injuries and death for the victims of commercial airliner accidents. According to NTSB, the majority of fatalities in commercial airliner accidents are attributable to crash impact forces and the effects of fire and smoke. Specifically, of the 465 fatalities in partially survivable U.S. aviation accidents from 1983 through 2000, 306 (66 percent) resulted from impact forces, 131 (28 percent) from fire and smoke, and 28 (6 percent) from other causes. Surviving an airplane crash depends on a number of factors. The space surrounding a passenger must remain large enough to prevent the passenger from being crushed. The force of impact must also be reduced to levels that the passenger can withstand, either by spreading the impact over a larger part of the body or by increasing the duration of the impact through an energy-absorbing seat or fuselage. The passenger must be restrained in a seat to avoid striking the interior of the airplane, and the seat must not become detached from the floor. Objects within the airplane, such as debris, overhead luggage bins, luggage, and galley equipment, must not strike the passenger. A fire in the cabin must be prevented, or, if one does start, it must burn slowly enough and produce low enough levels of toxic gases to allow the passenger to escape from the airplane. If there is a fire, the passenger must not have sustained injuries that prevent him or her from escaping quickly.
Finally, if the passenger escapes serious injury from impact and fire, he or she must have access to exit doors and slides or other means of evacuation. Over the past several decades, FAA has taken a number of regulatory actions designed to improve the safety and health of airline passengers and flight attendants by (1) minimizing injuries from the impact of a crash, (2) preventing fire or mitigating its effects, (3) improving the chances and speed of evacuation, or (4) improving the safety and health of cabin occupants. (See app. III for more information on the regulatory actions FAA has taken to improve cabin occupant safety and health.) Specifically, we identified 18 completed regulatory actions that FAA has taken since 1984. In addition to these past actions, FAA and others in the aviation community are pursuing advancements in these four areas to improve cabin occupant safety and health in the future. We identified and reviewed 28 such advancements—5 to reduce the impact of a crash on occupants, 8 to prevent or mitigate fire and its effects, 10 to facilitate evacuation from aircraft, and 5 to address general cabin occupant safety and health issues.

Minimizing Injuries from the Impact of a Crash

The primary cause of injury and death for cabin occupants in an airliner accident is the impact of the crash itself. We identified two key regulatory actions that FAA has taken to better protect passengers from impact forces. For example, in 1988, FAA required stronger passenger seats for newly manufactured commercial airplanes to improve protection in survivable crashes. These new seats are capable, for example, of withstanding an impact force that is approximately 16 times a passenger’s body weight (16g), rather than 9 times (9g), and must be tested dynamically (in multiple directions to simulate crash conditions), rather than statically (e.g., drop testing to assess the damage from the force of the weight alone without motion).
In addition, in 1992, FAA issued a requirement for corrective action (airworthiness directive) for designs found not to meet the existing rules for overhead storage bins on certain Boeing aircraft, to improve their crashworthiness after bin failures were observed in the 1989 crash of an airliner in Kegworth, England, and a 1991 crash near Stockholm, Sweden. We also identified five key advancements that are being pursued to provide cabin occupants with greater impact protection in the future. These advancements are either under development or currently available. Examples include the following: Lap seat belts with inflatable air bags: Lap seat belts that contain inflatable air bags have been developed by private companies and are currently available to provide passengers with added protection during a crash. About 1,000 of these lap seat belts have been installed on commercial airplanes, primarily in the seats facing wall dividers (bulkheads) to prevent passengers from sustaining head injuries during a crash. (See fig. 1.) Improved seating systems: Seat safety depends on several interrelated systems operating properly, and, therefore, an airline seat is most accurately discussed as a system. New seating system designs are being developed by manufacturers to incorporate new safety and aesthetic designs as well as meet FAA’s 16g seat regulations to better protect passengers from impact forces. These seating systems would help to ensure that the seats themselves perform as expected (i.e., they stay attached to the floor tracks); the space between the seats remains adequate in a crash; and the equipment in the seating area, such as phones and video screens, does not increase the impact hazard. Child safety seats: Child safety seats could provide small children with additional protection in the event of an airliner crash. NTSB and others have recommended their use, and FAA has been involved in this issue for at least 15 years. 
Although FAA has used its rule-making process to consider requiring child safety restraints, it decided against such a requirement because its analysis found that if passengers were required to pay full fare for children under the age of 2, some parents would choose to travel by automobile instead, a statistically more dangerous mode of travel, increasing the chances that both the children and the adults would be killed. FAA is continuing to consider a child safety seat requirement. Appendix IV contains additional information on the impact advancements we have identified. Fire prevention and mitigation efforts have given passengers additional time to evacuate an airliner following a crash or cabin fire. FAA has taken seven key regulatory actions to improve fire detection, eliminate potential fire hazards, prevent the spread of fires, and better extinguish them. For example, to help prevent the spread of fire and give passengers more time to escape, FAA upgraded fire safety standards to require that seat cushions have fire-blocking layers, which resulted in airlines retrofitting 650,000 seats over a 3-year period. The agency also set new low heat/smoke standards for materials used for large interior surfaces (e.g., sidewalls, ceilings, and overhead bins), which FAA officials told us resulted in a significant improvement in postcrash fire survivability. FAA also required smoke detectors to be placed in lavatories and automatic fire extinguishers in lavatory waste receptacles in 1986 and 1987, respectively. In addition, the agency required airlines to retrofit their fleets with fire detection and suppression systems in cargo compartments, which, according to FAA, applied to over 3,700 aircraft at a cost to airlines of $300 million. To better extinguish fires when they do start, FAA also required, in 1985, that commercial airliners carry two Halon fire extinguishers in addition to other required extinguishers because of Halon’s superior fire suppression capabilities.
We also identified 8 key advancements that are currently available and awaiting implementation or are under development to provide additional fire protection for cabin occupants in the future. Examples include the following:

Reduced flammability of insulation materials: To eliminate a potential fire hazard, in May 2000, FAA required that air carriers replace insulation blankets covered with a type of insulation known as metalized Mylar® on specific aircraft by 2005, after it was found that the material had ignited and contributed to the crash of Swiss Air Flight 111. Over 700 aircraft were affected by this requirement. In addition, FAA issued a rule in July 2003 requiring that large commercial airplanes manufactured after September 2, 2005, be equipped with thermal acoustic insulation designed to an upgraded fire test standard that will reduce the incidence and intensity of in-flight fires. In addition, after September 2, 2007, newly manufactured aircraft must be equipped with thermal acoustic materials designed to meet a new standard for burn-through resistance, providing passengers more time to escape during a postcrash fire.

Reduced fuel tank flammability: Flammable vapors in aircraft fuel tanks can ignite. However, currently available technology can greatly reduce this hazard by “blanketing” the fuel tank with nonexplosive nitrogen-enriched air to suppress (“inert”) the potential for explosion of the tank. The U.S. military has used this technology on selected aircraft for 20 years, but U.S. commercial airlines have not adopted the technology because of its cost and weight. FAA officials told us that the military’s technology was also unreliable and designed to meet military rather than civilian airplane design requirements. FAA fire safety experts have developed a lighter-weight inerting system for center fuel tanks, which is simpler than the military system and potentially more reliable.
Reliability of this technology is a major concern for the aviation industry. According to FAA officials, Boeing and Airbus began flight testing this technology in July 2003 and August 2003, respectively. In addition, the Air Transport Association (ATA) noted that inerting is only one prospective component of an ongoing major program for fuel tank safety, and that it has yet to be justified as feasible and cost-effective. Sensor technology: Sensors are currently being developed to better detect overheated or burning materials. According to FAA and the National Institute of Standards and Technology, many current smoke and fire detectors are not reliable. For example, a recent FAA study reported at least one false alarm per week in cargo compartment fire detection systems. The new detectors are being developed by Airbus and others in private industry to reduce the number of false alarms. In addition, FAA is developing standards that would be used to approve new, reduced false alarm sensors. NASA is also developing new sensors and detectors. Water mist for extinguishing fires: Technology has been under development for over two decades to dispense water mist during a fire to protect passengers from heat and smoke and prevent the spread of fire in the cabin. The most significant development effort has been made by a European public-private consortium, FIREDETEX, with over 5 million euros of European Community funding and a total project cost of over 10 million euros (over 10 million U.S. dollars). The development of this system was prompted, in part, by the need to replace Halon, when it was determined that this main firefighting agent used in fire extinguishers aboard commercial airliners depletes ozone in the atmosphere. Appendix V contains additional information on advancements that address fire prevention and mitigation. Enabling passengers to evacuate more quickly during an emergency has saved lives. 
Over the past two decades, FAA has completed regulatory action on the following six key requirements to help speed evacuations:
- Improve access to certain emergency exits, such as those generally smaller exits above the wing, by providing an unobstructed passageway to the exit.
- Install public address systems that are independently powered and can be used for at least 10 minutes.
- Help to ensure that passengers in the seats next to emergency exits are physically and mentally able to operate the exit doors and assist other passengers in emergency evacuations.
- Limit the distance between emergency exits to 60 feet.
- Install emergency lighting systems that visually identify the emergency escape path and each exit.
- Install fire-resistant emergency evacuation slides.

We also identified 10 advancements that are either currently available but awaiting implementation or require additional research that could lead to improved aircraft evacuation, including the following:

Improved passenger safety briefings: Information is available to the airlines on how to develop more appealing safety briefings and safety briefing cards so that passengers would be more likely to pay attention to the briefings and be better prepared to evacuate successfully during an emergency. Research has found that passengers often ignore the oral briefings and do not familiarize themselves with the safety briefing cards. FAA has requested that air carriers explore different ways to present safety information to passengers, but FAA regulates only the content of briefings. The presentation style of safety briefings is left up to air carriers.

Over-wing exit doors: Exit doors located over the wings of some commercial airliners have been redesigned to “swing out” and away from the aircraft so that cabin occupants can exit more easily during an emergency. Currently, the over-wing exit doors on most U.S.
commercial airliners are “self help” doors and must be lifted and stowed by a passenger, which can impede evacuation. (See fig. 2.) The redesigned doors are now used on new-generation B-737 aircraft operated by one U.S. and most European airlines. FAA does not currently require the use of over-wing exit doors that swing out because the exit doors that are removed manually meet the agency’s safety standards. However, FAA is working with the Europeans to develop common requirements for the use of this type of exit door. Audio attraction signals: The United Kingdom’s Civil Aviation Authority and the manufacturer are testing audio attraction signals to determine their usefulness to passengers in locating exit doors during an evacuation. These signals would be mounted near exits and activated during an emergency. The signals would help the passengers find the nearest exit even if lighting and exit signs were obscured by smoke. Appendix VI contains additional information on advancements to improve aircraft emergency evacuations. Passengers and flight attendants can face a range of safety and health effects while aboard commercial airliners. We identified three key actions taken by FAA to help maintain the safety and health of passengers and the cabin crew during normal flight operations. For example, to prevent passengers from being injured during turbulent conditions, FAA initiated the Turbulence Happens campaign in 2000 to increase public awareness of the importance of wearing seatbelts. The agency has advised the airlines to warn passengers to fasten their seatbelts when turbulence is expected, and the airlines generally advise or require passengers to keep their seat belts fastened while seated to help avoid injuries from unexpected turbulence. FAA has also required the airlines to equip their fleets with emergency medical kits since 1986. In addition, Congress banned smoking on most domestic flights in 1990. 
We also identified five advancements that are either currently available but awaiting implementation or require additional research that could lead to an improvement in the health of passengers and flight attendants in the future. Automatic external defibrillators: Automatic external defibrillators are currently available for use on some commercial airliners if a passenger or crew member requires resuscitation. In 1998, the Congress directed FAA to assess the need for the defibrillators on commercial airliners. On the basis of its findings, the agency issued a rule requiring that U.S. airlines equip their aircraft with automatic external defibrillators by 2004. According to ATA, most airlines have already done so. Enhanced emergency medical kits: In 1998, the Congress directed FAA to collect data for 1 year on the types of in-flight medical emergencies that occurred to determine if existing medical kits should be upgraded. On the basis of the data collected, FAA issued a rule that required the contents of existing emergency medical kits to be expanded to deal with a broader range of emergencies. U.S. commercial airliners are required to carry these enhanced emergency medical kits by 2004. Most U.S. airlines have already completed this upgrade, according to ATA. Advance warning of turbulence: New airborne weather radar and other technologies are currently being developed and evaluated to improve the detection of turbulence and increase the time available to cabin occupants to avert potential injuries. FAA’s July 2003 draft strategic plan established a performance target of reducing injuries to cabin occupants caused by turbulence. To achieve this objective, FAA plans to continue evaluating new airborne weather radars and other technologies that broadly address weather issues, including turbulence. 
In addition, the draft strategic plan set a performance target of reducing serious injuries caused by turbulence by 33 percent by fiscal year 2008, using the fiscal year 2000 through 2002 average of 15 injuries per year as the baseline and reducing this average to no more than 10 per year. Improved awareness of radiation exposure: Flight attendants and passengers who fly frequently can be exposed to higher levels of radiation on a cumulative basis than the general public. High levels of radiation have been linked to an increased risk of cancer and potential harm to fetuses. To help passengers and crew members estimate their past and future radiation exposure levels, FAA developed a computer model, which is publicly available on its Web site at http://www.jag.cami.jccbi.gov/cariprofile.asp. However, the extent to which flight attendants and frequent flyers are aware of cosmic radiation’s risks and make use of FAA’s computer model is unclear. Agency officials told us that they plan to install a counter capability on the Civil Aerospace Medical Institute’s Web site to track the number of visits to its aircrew and passenger health and safety pages. FAA also plans to issue an Advisory Circular by early next year, which incorporates the findings of a just-completed FAA report, “What Aircrews Should Know About Their Occupational Exposure to Ionizing Radiation.” This Advisory Circular will include recommended actions for aircrews and information on notifying aircrews of solar flare events. In contrast, airlines in Europe abide by more stringent requirements for helping to ensure that cabin and flight crew members do not receive excessive doses of radiation from performing their flight duties during a given year. For example, in May 1996, the European Union issued a directive for workers, including air carrier crew members (cabin and flight crews) and the general public, on basic safety and health protections against dangers arising from ionizing radiation.
This directive set dose limits and required air carriers to (1) assess and monitor the exposure of all crew members to avoid exceeding exposure limits, (2) work with those individuals at risk of high exposure levels to adjust their work or flight schedules to reduce those levels, and (3) inform crew members of the health risks that their work involves from exposure to radiation. It also required airlines to work with female crew members, when they announce a pregnancy, to avoid exposing the fetus to harmful levels of radiation. This directive was binding for all European Union member states and became effective in May 2000. Improved awareness of potential health effects related to flying: Air travel may exacerbate some medical conditions. Of particular concern is a condition known as Deep Vein Thrombosis (DVT), or travelers' thrombosis, in which blood clots can develop in the deep veins of the legs from extended periods of inactivity. In a small percentage of cases, the clots can break free and travel to the lungs, with potentially fatal results. Although steps can be taken to avoid or mitigate some travel-related health effects, no formal awareness campaigns have been initiated by FAA to help ensure that this information reaches physicians and the traveling public. The Aerospace Medical Association's Web site http://www.asma.org/publication.html includes guidance for physicians to use in advising passengers with preexisting medical conditions on the potential risks of flying, as well as information for passengers with such conditions to use in assessing their own potential risks. See appendix VII for additional information on health-related advances. The advancements being pursued to improve the safety and health of cabin occupants vary in their readiness for deployment. For example, of the 28 advancements we reviewed, 14 are mature and currently available.
Two of these, preparation for in-flight medical emergencies and the use of new insulation, were addressed through regulations. These regulations require airlines to install additional emergency medical equipment (automatic external defibrillators and enhanced emergency medical kits) by 2004, replace flammable insulation covering (metalized Mylar®) on specific aircraft by 2005, and manufacture new large commercial airliners that use a new type of insulation meeting more stringent flammability test standards after September 2, 2005. Another advancement is currently in the rule-making process—retrofitting the existing fleet with stronger 16g seats. The remaining 11 advancements are available, but are not required by FAA. For example, some airlines have elected to use inflatable lap seat belts and exit doors over the wings that swing out instead of requiring manual removal, and others are using photo-luminescent floor lighting in lieu of or in combination with traditional electrical lighting. Some of these advancements are commercially available to the flying public, including smoke hoods and child safety seats certified for use on commercial airliners. The remaining 14 advancements are in various stages of research, engineering, and development in the United States, Canada, or Europe. Several factors have slowed the implementation of airliner cabin occupant safety and health advancements in the United States. When advancements are available for commercial use but not yet implemented or installed, their use may be slowed by the time it takes (1) for FAA to complete the rule-making process, which may be required for an advancement to be approved for use but may take many years; (2) for U.S.
and foreign aviation authorities to resolve differences between their respective cabin occupant safety and health requirements; and (3) for the airlines to adopt or install advancements after FAA has approved their use, including the time required to schedule an advancement’s installation to coincide with major maintenance cycles and thereby minimize the costs associated with taking an airplane out of service. When advancements are not ready for commercial use because they need further research to develop their technologies or reduce their costs, their implementation may be slowed by FAA’s multistep process for identifying advancements and allocating its limited resources to research on potential advancements. FAA’s multistep process is hampered by a lack of autopsy and survivor information from past accidents and by not having cost and effectiveness data as part of the decision process. As a result, FAA may not be identifying and funding the most critical or cost-effective research projects. Once an advancement has been developed, FAA may require its use, but significant time may be required before the rule-making process is complete. One factor that contributes to the length of this process is a requirement for cost-benefit analyses to be completed. Time is particularly important when safety is at stake or when the pace of technological development exceeds the pace of rule-making. As a result, some rules may need to be developed quickly to address safety issues or to guide the use of new technologies. However, rules must also be carefully considered before being finalized because they can have a significant impact on individuals, industries, the economy, and the environment. External pressures—such as political pressure generated by highly publicized accidents, recommendations by NTSB, and congressional mandates—as well as internal pressures, such as changes in management’s emphasis, continue to add to and shift the agency’s priorities. 
The rule-making process can be long and complicated and has delayed the implementation of some technological and operational safety improvements, as we reported in July 2001. In that report, we reviewed 76 significant rules in FAA's workload for fiscal years 1995 through 2000—10 of the 76 were directly related to improving the safety and health of cabin occupants. Table 3 details the status or disposition of these 10 rules. The shortest rule-making action took 1 year, 11 months (for child restraint systems), and the longest took 10 years, 1 month (for the type and number of emergency exits). However, one proposed rule was still pending after 15 years, while three others were terminated or withdrawn after 9 years or more. Of the 76 significant rules we reviewed, FAA completed the rule-making process for 29 of them between fiscal year 1995 and fiscal year 2000, in a median time of about 2 ½ years to proceed from formal initiation of the rule-making process through publication of the final rule; however, FAA took 10 years or more to move from formal initiation of the rule-making process through publication of the final rule for 6 of these 29 rules. FAA and its international counterparts, such as the European Joint Aviation Authorities (JAA), impose a number of requirements to improve safety. At times, these requirements differ, and efforts are needed to reach agreement on procedures and equipment across country borders. In the absence of such agreements, the airlines generally must adopt measures to implement whichever requirement is more stringent. In 1992, FAA and JAA began harmonizing their requirements for (1) the design, manufacture, operation, and maintenance of civil aircraft and related product parts; (2) noise and emissions from aircraft; and (3) flight crew licensing. Harmonizing the U.S.
Federal Aviation Regulations with the European Joint Aviation Regulations is viewed by FAA as its most comprehensive long-term rule-making effort and is considered critical to ensuring common safety standards and minimizing the economic burden on the aviation industry that can result from redundant inspection, evaluation, and testing requirements. According to both FAA and JAA, the process they have used to date to harmonize their requirements for commercial aircraft has not effectively prioritized their joint recommendations for harmonizing U.S. and European aviation requirements and has led to many recommendations going unpublished for years. This includes a backlog of over 130 new rule-making efforts. The slowness of this process led the United States and Europe to develop a new rule-making process to prioritize safety initiatives, focus the aviation industry's and their own limited resources, and establish limitations on rule-making capabilities. Accordingly, in March 2003, FAA and JAA developed a draft joint "priority" rule-making list; collected and considered industry input; and coordinated with FAA's, JAA's, and Transport Canada Civil Aviation's management. This effort has resulted in a rule-making list of 26 priority projects. In June 2003, at the 20th Annual JAA/FAA International Conference, FAA, JAA, and Transport Canada Civil Aviation discussed the need to, among other things, support the joint priority rule-making list and to establish a cycle for updating it—to keep it current and to provide for "pop-up," or unexpected, rule-making needs. FAA and JAA also agreed on the need to prioritize rule-making efforts to achieve aviation safety goals efficiently, agreed that they would work from a limited agreed-upon list for future rule-making activities, and agreed that FAA and the European Aviation Safety Agency, which is gradually replacing JAA, should continue with this approach.
In the area of cabin occupant safety and health, some requirements have been harmonized, while others have not. For example, in 1996, JAA changed its rule on floor lighting to allow reflective, glow-in-the-dark material to be used rather than mandating the electrically powered lighting that FAA required. The agency subsequently permitted the use of this material for floor lighting. In addition, FAA finalized a rule in July 2003 to require a new type of insulation designed to delay fire burning through the fuselage into the cabin during an accident. JAA favors a performance-based standard that would specify a minimum delay in burn-through time, but allow the use of different technologies to achieve the standard. FAA officials said that the agency would consider other technologies besides insulation to achieve burn-through protection but that it would be the responsibility of the applicant to demonstrate that the technology provided performance equivalent to that stipulated in the insulation rule. JAA officials told us that these are examples of the types of issues that must be resolved when they work to harmonize their requirements with FAA's. These officials added that this process is typically very time consuming and has allowed for harmonizing about five rules per year. After an advancement has been developed, shown to be beneficial, certified, and required by FAA, the airlines or manufacturers need time to implement or install the advancement. FAA generally gives the airlines or manufacturers a window of time to comply with its rules. For example, FAA gave air carriers 5 years to replace metalized Mylar® insulation on specific aircraft with a less flammable insulation type, and FAA's proposed rule-making on 16g seats would give the airlines 14 years to install these seats in all existing commercial airliners. ATA officials told us that this would require replacement of 496,000 seats.
The airline industry’s recent financial hardships may also delay the adoption of advancements. Recently, two major U.S. carriers filed for bankruptcy, and events such as the war in Iraq have reduced passenger demand and airline revenues below levels already diminished by the events of September 11, 2001, and the economic downturn. Current U.S. demand for air travel remains below fiscal year 2000 levels. As a result, airlines may ask for exemptions from some requirements or extensions of time to install advancements. While implementing new safety and health advancements can be costly for the airlines, making these changes could improve the public’s confidence in the overall safety of air travel. In addition, some aviation experts in Europe told us that health-related cabin improvements, particularly improvements in air quality, are of high interest to Europeans and would likely be used in the near future by some European air carriers to set themselves apart from their competitors. For fiscal year 2003, FAA and NASA allocated about $16.2 million to cabin occupant safety and health research. FAA’s share of this research represented $13.1 million, or about 9 percent of the agency’s Research, Engineering, and Development budget of $148 million for fiscal year 2003. Given the level of funding allocated to this research effort, it is important to ensure that the best research projects are selected. However, FAA’s processes for setting research priorities and selecting projects for further research are hampered by data limitations. In particular, FAA lacks certain autopsy and survivor information from aircraft crashes that could help it identify and target research to the most important causes of death and injury in an airliner crash. In addition, for the proposed research projects, the agency does not (1) develop comparable cost data for potential advancements or (2) assess their potential effectiveness in minimizing injuries or saving lives. 
Such cost and effectiveness data would provide a valuable supplement to FAA's current process for setting research priorities and selecting projects for funding. Both FAA and NASA conduct research on aircraft cabin occupant safety and health issues. The Civil Aeromedical Institute (CAMI) and the Hughes Technical Center are FAA's primary facilities for conducting research in this area. In addition, two facilities at NASA, the Langley and Glenn research centers, have also conducted research in this area. As figure 3 shows, federal funding for this research since fiscal year 2000 reached a high in fiscal year 2002, at about $17 million, and fell to about $16.2 million in fiscal year 2003. The administration's proposal for fiscal year 2004 calls for a further reduction to $15.9 million. This funding covers the expenses of researchers at these facilities and of the contracts they may have with others to conduct research. In addition, NASA recently decided to end its crash research at Langley and to close a drop test facility that it operates in Hampton, Virginia. In fiscal year 2003, FAA and NASA both supported research projects in areas including aircraft impact, fire, evacuation, and health. As figure 4 shows, most of the funding for cabin occupant safety and health research has gone to fire-related projects. To establish research priorities and select projects to fund, FAA uses a multistep process. First, within each budget cycle, a number of Technical Community Representative Group subcommittees from within FAA generate research ideas. Various subcommittees have responsibility for identifying potential safety and health projects, including subcommittees on crash dynamics, fire safety, structural integrity, passenger evacuation, aeromedical, and fuel safety. Each subcommittee proposes research projects to review committees, which prioritize the projects.
The projects are considered and weighted according to the extent to which they address (1) accident prevention, (2) accident survival, (3) external requests for research, (4) internal requests for research, and (5) technology research needs. In addition, the cost of the proposed research is considered before arriving at a final list of projects. The prioritized list is then considered by the Program Planning Team, which reviews the projects from a policy perspective. Although the primary causes of death and injury in commercial airliner crashes are known to be impact, fire, and impediments to evacuation, FAA does not have as detailed an understanding as it would like of the critical factors affecting survival in a crash. According to FAA officials, obtaining a more detailed understanding of these factors would assist them in setting research priorities and in evaluating the relative importance of competing research proposals. To obtain a more detailed understanding of the critical factors affecting survival, FAA believes that it needs additional information from passenger autopsies and from passengers who survived. With this information, FAA could then regulate safety more effectively, airplane and equipment designers could build safer aircraft, including cabin interiors, and more passengers could survive future accidents as equipment became safer. While FAA has independent authority to investigate commercial airliner crashes, NTSB generally controls access to the accident investigation site in pursuit of its primary mission of determining the cause of the crash. When NTSB concludes its investigation, it returns the airplane to its owner and keeps the records of the investigation, including the autopsy reports and the information from survivors that NTSB obtains from medical authorities and through interviews or questionnaires. 
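The weighting step described earlier, in which review committees rate proposed projects against five criteria and then factor in cost before arriving at a final list, can be sketched as a simple weighted-scoring computation. This is a minimal sketch assuming hypothetical weights, ratings, and project names; the five criteria come from the report, but none of the numeric values are FAA's.

```python
# Illustrative sketch of a weighted prioritization scheme like the one the
# review committees apply. The criteria follow the report; the weights,
# ratings, and project names are hypothetical, not FAA's actual values.

CRITERIA_WEIGHTS = {
    "accident_prevention": 0.30,
    "accident_survival": 0.30,
    "external_requests": 0.15,
    "internal_requests": 0.10,
    "technology_needs": 0.15,
}

def priority_score(ratings):
    """Combine per-criterion ratings (0-10) into one weighted score."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

# Hypothetical proposals rated against each criterion; in FAA's process,
# research cost would then be considered before the final list is set.
proposals = {
    "fire-resistant insulation": {"accident_prevention": 2, "accident_survival": 9,
                                  "external_requests": 7, "internal_requests": 5,
                                  "technology_needs": 6},
    "turbulence radar":          {"accident_prevention": 8, "accident_survival": 3,
                                  "external_requests": 6, "internal_requests": 4,
                                  "technology_needs": 8},
}

ranked = sorted(proposals, key=lambda p: priority_score(proposals[p]), reverse=True)
print(ranked)
```

A real prioritization would also fold in the cost of each proposal, as the report notes, before the Program Planning Team's policy review.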
NTSB makes summary information on the crashes publicly available on its Web site, but according to the FAA researchers, this information is not detailed enough for their needs. For example, the researchers would like to develop a complete autopsy database that would allow them to look for common trends in accidents, among other things. In addition, the researchers would like to know where survivors sat on the airplane, what routes they took to exit, what problems they encountered, and what injuries they sustained. This information would help the researchers analyze factors that might have an impact on survival. According to the Chief of NTSB's Survival Factors Division in the Office of Aviation Safety, NTSB includes the causes of death and a description of injuries in the information it makes publicly available. In addition, although medical records and autopsy reports are not made public, interviews with and questionnaires from survivors are available from the public docket. NTSB's Medical Officer was unaware of any formal requests from FAA for NTSB to provide it with copies of this type of information, although FAA had previously been invited to review such information at NTSB headquarters. He added that the Board would likely consider a formal request from FAA for copies of autopsy reports and certain survivor records, but that FAA would likely have to assure NTSB that the information would be appropriately safeguarded. According to FAA officials, close cooperation between NTSB and FAA is needed for continued progress in aviation safety. Besides lacking detailed information on the causes of death and injury, FAA does not develop data on the cost to implement advancements that are comparable for each, nor does it assess the potential effectiveness of each advancement in reducing injuries and saving lives.
Specifically, FAA does not conduct cost-benefit analyses as part of its multistep process for setting research priorities. Making cost estimates of competing advancements would allow direct comparisons across alternatives, which, when combined with comparable estimates of effectiveness, would provide valuable supplemental information to decision makers when setting research priorities. FAA considers its current process to be appropriate and sufficient. In commenting on a draft of this report, FAA noted that it is very difficult to develop realistic cost data for advancements during the earliest stages of research. The agency cautioned that if too much emphasis is placed on cost/benefit analyses, potentially valuable research may not be undertaken. Recognizing that it is less difficult to develop cost and effectiveness information as research progresses, we are recommending that FAA develop and use cost and effectiveness analyses to supplement its current process. We found that, at later stages in the development process, this information can be developed fairly easily through cost and effectiveness analyses using currently available data. For example, we performed an analysis of the cost to implement inflatable lap seat belts using a cost analysis methodology we developed (see app. VIII). This analysis allowed us to estimate how much this advancement would cost per airplane and per passenger trip. Such cost analyses could be combined with similar analyses of effectiveness to identify the most cost-effective projects, based on their potential to minimize injuries and reduce fatalities. Potential sources of effectiveness data include FAA, academia, industry, and other aviation authorities. Although FAA and the aviation community are pursuing a number of advancements to enhance commercial airliners' cabin occupant safety and health, several factors have slowed their implementation.
For example, for advancements that are currently available but are not yet implemented or installed, progress is slowed by the length of time it takes for FAA to complete its rule-making process, for the U.S. and foreign countries to agree on the same requirements, and for the airlines to actually install the advancements after FAA has required them. In addition, FAA's multistep process for identifying potential cabin occupant safety and health research projects and allocating its limited research funding is hampered by the lack of autopsy and survivor information from airliner crashes and by the lack of cost and effectiveness analysis. Given the level of funding allocated to cabin occupant safety and health research, it is important for FAA to ensure that this funding is targeting the advancements that address the most critical needs and show the most promise for improving the safety and health of cabin occupants. However, because FAA lacks detailed autopsy and survivor information, it is hampered in its ability to identify the principal causes of death and survival in commercial airliner crashes. Without an agreement with the National Transportation Safety Board to receive detailed autopsy and survivor information, FAA lacks information that could be helpful in understanding the factors that contribute to surviving a crash. Furthermore, because FAA does not develop comparable estimates of cost and effectiveness of competing research projects, it cannot ensure that it is funding those technologies with the most promise of saving lives and reducing injuries. Such cost and effectiveness data would provide a valuable supplement to FAA's current process for setting research priorities and selecting projects for funding. To facilitate FAA's development of comparable cost data across advancements, we developed a cost analysis methodology that could be combined with a similar analysis of effectiveness to identify the most cost-effective projects.
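Combining a cost estimate with an effectiveness estimate, as described above, could take the following shape: divide each advancement's annualized fleet-wide cost by the injuries it is expected to avert, and rank by the resulting ratio. This is a hedged sketch; the advancement names appear in the report, but every dollar figure and effectiveness estimate below is a hypothetical placeholder, not a value from GAO's or FAA's analyses.

```python
# Hedged sketch of a cost-effectiveness comparison: annualized fleet-wide
# cost divided by the expected number of injuries averted per year.
# All figures below are hypothetical placeholders for illustration only.

def cost_per_injury_averted(annual_fleet_cost, injuries_averted_per_year):
    """Return the cost, in dollars, of averting one injury per year."""
    return annual_fleet_cost / injuries_averted_per_year

advancements = {
    # name: (hypothetical annualized fleet cost in $, injuries averted/year)
    "inflatable lap seat belts": (50_000_000, 25),
    "photo-luminescent floor lighting": (8_000_000, 10),
}

# Lower cost per injury averted ranks first.
ranked = sorted(advancements,
                key=lambda a: cost_per_injury_averted(*advancements[a]))
for name in ranked:
    cost, averted = advancements[name]
    print(f"{name}: ${cost_per_injury_averted(cost, averted):,.0f} per injury averted")
```

In practice the effectiveness inputs would come from the sources the report names (FAA, academia, industry, and other aviation authorities), and fatality reduction could be folded in alongside injuries.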
Using comparable cost and effectiveness data across the range of advancements would position the agency to choose more effectively between competing advancements, taking into account estimates of the number of injuries and fatalities that each advancement might prevent for the dollars invested. In turn, FAA would have more assurance that the level of funding allocated to this effort maximizes the safety and health of the traveling public and the cabin crew members who serve them. To provide FAA decision makers with additional data for use in setting priorities for research on cabin occupant safety and health and in selecting competing research projects for funding, we recommend that the Secretary of Transportation direct the FAA Administrator to initiate discussions with the National Transportation Safety Board in an effort to obtain the autopsy and survivor information needed to more fully understand the factors affecting survival in a commercial airliner crash and to supplement its current process by developing and using comparable estimates of cost and effectiveness for each cabin occupant safety and health advancement under consideration for research funding.

Agency Comments and Our Evaluation

We provided copies of a draft of this report to the Department of Transportation for its review and comment. FAA generally agreed with the report's contents and its recommendations. The agency provided us with oral comments, primarily technical clarifications, which we have incorporated as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days after the date of this letter. At that time, we will send copies to the appropriate congressional committees; the Secretary of Transportation; the Administrator, FAA; and the Chairman, NTSB. We will also make copies available to others upon request.
This report is also available at no charge on GAO's Web site at http://www.gao.gov. As requested by the Ranking Democratic Member, House Committee on Transportation and Infrastructure, we addressed the following questions: (1) What regulatory actions has the Federal Aviation Administration (FAA) taken, and what key advancements are available or being developed by FAA and others to address safety and health issues faced by passengers and flight attendants in large commercial airliner cabins? (2) What factors, if any, slow the implementation of advancements in cabin occupant safety and health? In addition, as requested, we identified some factors affecting efforts by Canada and Europe to improve cabin occupant safety and health. The scope of our report includes the cabins of large commercial aircraft (those that carry 30 or more passengers) operated by U.S. domestic commercial airlines and addresses the safety and health of passengers and flight attendants from the time they board the airliner until they disembark under normal operational conditions or emergency situations. This report identifies cabin occupant safety and health advancements (technological or operational improvements) that could be implemented, primarily through FAA's rule-making process. Such improvements include technological changes designed to increase the overall safety of commercial aviation as well as changes to enhance operational safety. The report does not include information on the flight decks of large commercial airliners or safety and health issues affecting flight deck crews (pilots and flight engineers), because they face some issues not faced by cabin occupants. It also does not address general aviation and corporate aircraft or aviation security issues, such as hijackings, sabotage, or terrorist activities.
To identify regulatory actions that FAA has taken to address safety and health issues faced by passengers and flight attendants in large commercial airliner cabins, we interviewed and collected documentation from U.S. federal agency officials on major safety and health efforts completed by FAA. The information we obtained included key dates and efforts related to cabin occupant safety and health, such as rule-makings, airworthiness directives, and Advisory Circulars. To identify key advancements that are available or are being developed by FAA and others to address safety and health issues faced by passengers and flight attendants in large commercial airliner cabins, we consulted experts (1) to help ensure that we had included the advancements holding the most promise for improving safety and health; and (2) to help us structure an evaluation of selected advancements (i.e., confirm that we had included the critical benefits and drawbacks of the potential advancements) and develop a descriptive analysis for them, where appropriate, including their benefits, costs, technology readiness levels, and regulatory status. In addition, we interviewed and obtained documentation from federal agency officials and other aviation safety experts at the Federal Aviation Administration (including its headquarters in Washington, D.C.; Transport Airplane Directorate in Renton, Washington; William J. 
Hughes Technical Center in Atlantic City, New Jersey; and Mike Monroney Aeronautical Center/Civil Aerospace Medical Institute in Oklahoma City, Oklahoma); National Transportation Safety Board; National Aeronautics and Space Administration (NASA); Air Transport Association; Regional Airline Association; International Air Transport Association; Aerospace Industries Association; Aerospace Medical Association; Flight Safety Foundation; Association of Flight Attendants; Boeing Commercial Airplane Group; Airbus; Cranfield University, United Kingdom; University of Greenwich, United Kingdom; National Aerospace Laboratory, Netherlands; Joint Aviation Authorities, Netherlands; Civil Aviation, Netherlands; Civil Aviation Authority, United Kingdom; RGW Cherry and Associates; Air Accidents Investigations Branch, United Kingdom; Syndicat National du Personnel Navigant Commercial (French cabin crew union) and ITF Cabin Crew Committee, France; BEA (comparable to the U.S. NTSB), France; and the Direction Générale de l'Aviation Civile (DGAC), FAA's French counterpart. To describe the status of key advancements that are available or under development, we used NASA's technology readiness levels (TRL).
These levels form a system for ranking the maturity of particular technologies and are as follows:

TRL 1: Basic principles observed and reported
TRL 2: Technology concept and/or application formulated
TRL 3: Analytical and experimental critical function and/or characteristic proof of concept
TRL 4: Component and/or breadboard validation in laboratory environment
TRL 5: Component and/or breadboard validation in relevant environment
TRL 6: System or subsystem model or prototype demonstrated in a relevant environment
TRL 7: System prototype demonstrated in a space environment
TRL 8: Actual system completed and "flight qualified" through test and demonstration
TRL 9: Actual system "flight proven" through successful mission operations

To determine what factors, if any, slow the implementation of advancements in cabin occupant safety and health, we reviewed the relevant literature and interviewed and analyzed documentation from the U.S. federal officials cited above for the 18 key regulatory actions FAA has taken since 1984 to improve the safety and health of cabin occupants. We used this same approach to assess the regulatory status of the 28 advancements we reviewed that are either currently available, but not yet implemented or installed, or require further research to demonstrate their effectiveness or lower their costs. In identifying 28 advancements, GAO is not suggesting that these are the only advancements being pursued; rather, these advancements have been recognized by aviation safety experts we contacted as offering promise for improving the safety and health of cabin occupants. To determine how long it generally takes for FAA to issue new rules, in addition to speaking with FAA officials, we relied on past GAO work and updated it, as necessary. In order to examine the effect of FAA and European efforts to harmonize their aviation safety requirements, we interviewed and analyzed documentation from aviation safety officials and other experts in the United States, Canada, and Europe.
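As an illustration of how a TRL scale like the one above can be applied, the following sketch assigns TRL values to a few advancements of the kinds discussed in this report and separates those mature enough for deployment (TRL 8 or 9) from those still in research. The TRL assignments are hypothetical assumptions for illustration, not ratings from our review.

```python
# Illustrative use of the TRL scale: classify advancements as mature
# (TRL 8-9: systems qualified or proven in service) or still in research.
# The TRL values assigned here are hypothetical, not ratings from the report.

MATURE_TRL = 8

advancements = {
    "automatic external defibrillators": 9,
    "enhanced emergency medical kits": 9,
    "inflatable lap seat belts": 8,
    "advanced turbulence detection radar": 5,
    "improved fuselage burn-through protection": 6,
}

mature = sorted(name for name, trl in advancements.items() if trl >= MATURE_TRL)
in_research = sorted(name for name, trl in advancements.items() if trl < MATURE_TRL)

print("mature:", mature)
print("in research:", in_research)
```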
Furthermore, to examine the factors affecting airlines' ability to implement or install advancements after FAA requires them, we interviewed and analyzed documentation from aircraft manufacturers, ATA, and FAA officials. In addition, to determine what factors slow implementation, we examined FAA's processes for selecting research projects to improve cabin occupant safety and health. In examining whether FAA has sufficient data upon which to base its research priorities, we interviewed FAA and National Transportation Safety Board (NTSB) officials about autopsy and survivor information from commercial airliner accidents. We also examined the use of cost and effectiveness data in FAA's research selection process for cabin occupant safety and health projects. To facilitate FAA's development of such cost estimates, we developed a cost analysis methodology to illustrate how the agency could do this. Specifically, we developed a cost analysis for inflatable lap belts to show how data on key cost variables could be obtained from a variety of sources. We selected lap belts because they were being used in limited situations and appeared to offer some measure of improved safety. Information on installation price, annual maintenance and refurbishment costs, and added weight of these belts was obtained from belt manufacturers. We obtained information from FAA and the Department of Transportation's (DOT) Bureau of Transportation Statistics on a number of cost variables, including historical jet fuel prices, the impact on jet fuel consumption of carrying additional weight, the average number of hours flown per year, the average number of seats per airplane, the number of airplanes in the U.S. fleet, and the number of passenger tickets issued per year. To account for variation in the values of these cost variables, we performed a Monte Carlo simulation.
In this simulation, values were randomly drawn 10,000 times from probability distributions characterizing possible values for the number of seat belts per airplane, seat installation price, jet fuel price, number of passenger tickets, number of airplanes, and hours flown. This simulation resulted in forecasts of the life-cycle cost per airplane, the annualized cost per airplane, and the cost per ticket. There is uncertainty in estimating the number of lives potentially saved and their value because accidents occur infrequently and unpredictably. Such estimates could be higher or lower, depending on the number and severity of accidents during a given analysis period and the value placed on a human life. To identify factors affecting efforts by Canada and Europe to improve cabin occupant safety and health, we interviewed and collected documentation from aviation safety experts in the United States, Canada, and Europe. We provided segments of a draft of this report to selected external experts to help ensure its accuracy and completeness. These included the Air Transport Association, National Transportation Safety Board, Boeing, Airbus, and aviation authorities in the United Kingdom, France, Canada, and the European Union. We incorporated their comments, as appropriate. The European Union did not provide comments. We conducted our review from January 2002 through September 2003 in accordance with generally accepted government auditing standards.

The United States, Canada, and members of the European Community are parties to the International Civil Aviation Organization (ICAO), established under the Chicago Convention of 1944, which sets minimum standards and recommended practices for civil aviation. In turn, individual nations implement aviation standards, including those for aviation safety.
While ICAO’s standards and practices are intended to keep aircraft, crews, and passengers safe, some also address environmental conditions in aircraft cabins that could affect the health of passengers and crews. For example, ICAO has standards for preventing the spread of disease and for spraying aircraft cabins with pesticides to remove disease-carrying insects.

In Canada, FAA’s counterpart for aviation regulations and oversight is Transport Canada Civil Aviation, which sets standards and regulations for the safe manufacture, operation, and maintenance of aircraft in Canada. In addition, Transport Canada Civil Aviation administers, enforces, and promotes the Aviation Occupational Health and Safety Program to help ensure the safety and health of crewmembers on board aircraft. The department also sets the training and licensing standards for aviation professionals in Canada, including air traffic controllers, pilots, and aircraft maintenance engineers. Transport Canada Civil Aviation has more than 800 inspectors working with Canadian airline operators, aircraft manufacturers, airport operators, and air navigation service providers to maintain the safety of Canada’s aviation system. These inspectors monitor, inspect, and audit Canadian aviation companies to verify their compliance with Transport Canada’s aviation regulations and standards for pilot licensing, aircraft certification, and aircraft operation. To assess and recommend potential changes to Canada’s aviation regulations and standards, the Canadian Aviation Regulation Advisory Council was established. This Council is a joint initiative between government and the aviation community. The Council supports regulatory meetings and technical working groups, which members of the aviation community can attend. A number of nongovernmental organizations—including airline operators, aviation labor organizations, manufacturers, industry associations, and groups representing the public—are members.
The Transportation Safety Board (TSB) of Canada is similar to NTSB in the United States. TSB is a federal agency that operates independently of Transport Canada Civil Aviation. Its mandate is to advance safety in the areas of marine, pipeline, rail, and aviation transportation by conducting independent investigations, including public inquiries when necessary, into selected transportation occurrences in order to make findings as to their causes and contributing factors; identifying safety deficiencies, as evidenced by transportation occurrences; making recommendations designed to reduce or eliminate any such deficiencies; and reporting publicly on its investigations and findings. Under its mandate to conduct investigations, TSB conducts safety-issue-related investigations and studies. It also maintains a mandatory incident-reporting system for all modes of transportation. TSB and Transport Canada Civil Aviation use the statistics derived from this information to track potential safety concerns in Canada’s transportation system. TSB investigates aircraft accidents that occur in Canada or involve aircraft built there. Like NTSB, the Transportation Safety Board can recommend air safety improvements to Transport Canada Civil Aviation.

Europe supplements the ICAO framework with the European Civil Aviation Conference, an informal forum through which 38 European countries formulate policy on civil aviation issues, including safety, but do not explicitly address passenger health issues. In addition, the European Union issues legislation concerning aviation safety, certification, and licensing requirements but has not adopted legislation specifically related to passenger health. One European directive requires that all member states assess and limit crewmembers’ exposure to radiation from their flight duties and provide them with information on the effects of such radiation exposure.
The European Commission is also providing flight crewmembers and other mobile workers with free health assessments prior to employment, with follow-up health assessments at regular intervals.

Another European supplement to the ICAO framework is the Joint Aviation Authorities (JAA), which represents the civil aviation regulatory authorities of a number of European states that have agreed to cooperate in developing and implementing common safety regulatory standards and procedures. JAA uses staff of these authorities to carry out its responsibilities for making, standardizing, and harmonizing aviation rules, including those for aviation safety, and for consolidating common standards among member countries. In addition, JAA is to cooperate with other regional organizations or national European state authorities to reach at least JAA’s safety level and to foster the worldwide implementation of harmonized safety standards and requirements through the conclusion of international arrangements. Membership in JAA is open to members of the European Civil Aviation Conference, which currently consists of 41 member countries. Currently, 37 countries are members or candidate members of JAA. JAA is funded by national contributions; income from the sale of publications and training; and income from other sources, such as user charges and European Union grants. National contributions are based on indexes related to the size of each country’s aviation industry. The “largest” countries (France, Germany, and the United Kingdom) each pay around 16 percent and the smallest around 0.6 percent of the total contribution income. For 2003, JAA’s total budget was about 6.6 million euros.

In early 1998, JAA launched the Safety Strategy Initiative to develop a focused safety agenda to support the “continuous improvement of its effective safety system” and further reduce the annual number of accidents and fatalities regardless of the growth of air traffic.
Two approaches are being used to develop the agenda:
The “historic approach” is based on analyses of past accidents and has led to the identification of seven initial focus areas—controlled flight into terrain, approach and landing, loss of control, design related, weather, occupant safety and survivability, and runway safety.
The “predictive approach” or “future hazards approach” is based on an identification of changes in the aviation system.
JAA is cooperating in this effort with FAA and other regulatory bodies to develop a worldwide safety agenda and avoid duplication of effort. FAA has taken the lead in the historic approach, and JAA has taken the lead in the future hazards approach.

JAA officials told us that they use a consensus-based process to develop rules for aviation safety, including cabin occupant safety and health-related issues. Reaching consensus among member states is time consuming, but the officials said the time invested was worthwhile. Besides making aviation-related decisions, JAA identifies and resolves differences in word meanings and subtleties across languages—an effort that is critical to reaching consensus. JAA does not have regulatory rule-making authority. Once the member states are in agreement, each member state’s legislative authority must adopt the new requirements. Harmonizing new requirements with U.S. and other international aviation authorities further adds to the time required to implement new requirements. According to JAA officials, they use expert judgment to identify and prioritize research and development efforts for aviation safety, including airliner cabin occupant safety and health issues, but JAA plans to move toward a more data-driven approach. While JAA has no funding of its own for research and development, it recommends research priorities to its member states.
However, JAA officials told us that member states’ research and development efforts are often driven by recent airliner accidents in the member states, rather than by JAA’s priorities. The planned shift from expert judgment to a more data-driven approach will require more coordination of aviation research and development across Europe. For example, in January 2001, a stakeholder group formed by the European Commissioner for Research issued a planning document entitled European Aeronautics: A Vision for 2020, which, among other things, characterized European aeronautics as a cross-border industry, whose research strategy is shaped within national borders, leading to fragmentation rather than coherence. The document called for better decision-making and more efficient and effective research by the European Union, its member states, and aeronautics stakeholders. JAA officials concurred with this characterization of European aviation research and development. Changes lie ahead for JAA and aviation safety in Europe. The European Union recently created a European Aviation Safety Agency, which will gradually assume responsibility for rule-making, certification, and standardization of the application of rules by the national aviation authorities. This organization will eventually absorb all of JAA’s functions and activities. The full transition from JAA to the safety agency will take several years; per the regulation, the European Aviation Safety Agency must begin operations by September 28, 2003, and transition to full operations by March 2007.

The key regulatory actions FAA has taken since 1984, their purposes, and their status are summarized below.

16g seat dynamic testing: FAA required that aircraft seats be subjected to more rigorous testing than was previously required. The tests subject seats to the forward, downward, and other directional movements that can occur in an accident, and likely injuries under various conditions are estimated by using instrumented crash test dummies. Purpose: To improve the crashworthiness of airplane seats and their ability to prevent or reduce the severity of head, back, and femur injuries. This rule was published on May 17, 1988, and became effective June 16, 1988. However, only the newest generation of airplanes is required to have fully tested and certificated 16g seats. FAA proposed a retrofit rule on October 4, 2002, to phase in 16g seats fleetwide within 14 years after adoption of the final rule.

Overhead bin corrective action: FAA issued an airworthiness directive requiring corrective action for overhead bin designs found not to meet the existing rules. Purpose: To improve the crashworthiness of some bins after failures were observed in a 1989 crash in Kegworth, England. The airworthiness directive to improve bin connectors became effective November 20, 1992, and applied to Boeing 737 and 757 aircraft.

Cabin interior materials: In 1986, FAA upgraded the fire safety standards for cabin interior materials in transport airplanes, establishing a new test method to determine the heat release from materials exposed to radiant heat and setting allowable criteria for heat release rates. Purpose: To give airliner cabin occupants more time to evacuate a burning airplane by limiting heat releases and smoke emissions to delay the onset of flashover. FAA required that all commercial aircraft produced after August 20, 1988, have panels that exhibit reduced heat releases and smoke emissions when cabin interior materials are exposed to fire. Although there was no retrofit of the existing fleet, FAA is requiring that these improved materials be used whenever the cabin is substantially refurbished.

Seat cushion flammability: In 1984, FAA issued a regulation that enhanced flammability requirements for seat cushions. Purpose: To retard burning of cabin materials to increase evacuation time. This rule required compliance by November 26, 1987.

Hand-held fire extinguishers: In March 1985, FAA also issued a rule requiring air carriers to carry Halon fire extinguishers for use in the cabin. Purpose: To extinguish in-flight fires. This rule became effective April 29, 1985, and required compliance by April 29, 1986.

Lavatory smoke detectors: In March 1985, FAA issued a rule requiring air carriers to install smoke detectors in lavatories within 18 months. Purpose: To identify in-flight fires. This rule became effective on April 29, 1985, and required compliance by October 29, 1986.

Lavatory waste bin fire extinguishers: In March 1985, FAA required air carriers to install automatic fire extinguishers in the waste paper bins in all aircraft lavatories. Purpose: To extinguish and prevent in-flight fires. This rule became effective on April 29, 1985, and required compliance by April 29, 1987.

Cargo compartment liners: In 1986, FAA upgraded the airworthiness standards for ceiling and sidewall liner panels used in cargo compartments of transport category airplanes. Purpose: To improve fire safety in the cargo and baggage compartment of certain transport airplanes. This rule required compliance on March 20, 1998.

Cargo compartment fire detection and suppression: In 1998, FAA required air carriers to retrofit fire detection and suppression systems in certain cargo compartments. This rule applied to over 3,400 airplanes in service and to all newly manufactured airplanes. Purpose: To improve fire safety in the cargo and baggage compartment of certain transport airplanes. This rule became effective March 19, 1998, requiring compliance on March 20, 2001.

Access to Type III exits: This rule requires improved access to the Type III emergency exits (typically smaller, overwing exits) by providing an unobstructed passageway to the exit. Transport aircraft with 60 or more passenger seats were required to comply with the new standards. Purpose: To help ensure that passengers have an unobstructed passageway to exits during an emergency. This rule became effective June 3, 1992, requiring changes to be made by December 3, 1992.

Public address system independent power source: This rule requires that the public address system be independently powered for at least 10 minutes and that at least 5 minutes of that time be available during announcements. Purpose: To eliminate reliance on engine or auxiliary-power-unit operation for emergency announcements. This rule became effective November 27, 1989, for air carrier and air taxi airplanes manufactured on or after November 27, 1990.

Exit row seating: This rule requires that persons seated next to emergency exits be physically and mentally capable of operating the exit and assisting other passengers in emergency evacuations. Purpose: To improve evacuation in an emergency. This rule became effective April 5, 1990, requiring compliance by October 5, 1990.

Emergency exit spacing: This rule limits the distance between adjacent emergency exits on transport airplanes to 60 feet. Purpose: To improve passenger evacuation in an emergency. This rule became effective July 24, 1989, imposing requirements on airplanes manufactured after October 16, 1987.

Floor proximity emergency escape path marking: Airplane emergency lighting systems must visually identify the emergency escape path and identify each exit from the escape path. Purpose: To improve passenger evacuation when smoke obscures overhead lighting. This rule became effective November 26, 1984, requiring implementation for large transport airplanes by November 26, 1986.

Fire-resistant evacuation slides: Emergency evacuation slides manufactured after December 3, 1984, must be fire resistant and comply with new radiant heat testing procedures. Purpose: To improve passenger evacuation. This technical standard became effective for all evacuation slides manufactured after December 3, 1984.

Emergency medical kits: In 1986, FAA issued a rule requiring commercial airlines to carry emergency medical kits. Purpose: To improve air carriers’ preparation for in-flight emergencies. This rule became effective August 1, 1986, requiring compliance as of that date.

Turbulence advisory: In June 1995, following two serious events involving turbulence, FAA issued a public advisory to airlines urging the use of seat belts at all times when passengers are seated but concluded that existing rules did not require strengthening. Purpose: To prevent passenger injuries from turbulence by increasing public awareness of the importance of wearing seat belts. Information is currently posted on FAA’s Web site, and in May 2000, FAA instituted the Turbulence Happens public awareness campaign.

Note: Class C category cargo compartments are required to have built-in extinguishing systems to control fire in lieu of crewmember accessibility. Class D category cargo compartments are required to completely contain a fire without endangering the safety of the airplane occupants.
This appendix presents information on the background and status of potential advancements in impact safety that we identified, including the following: retrofitting all commercial aircraft with more advanced seats, improving the ability of airplane floors to hold seats in an accident, preventing overhead luggage bins from becoming detached or opening, requiring child safety restraints for children under 40 pounds, and installing lap belts with self-contained inflatable air bags. In commercial transport airplanes, the ability of a seat to protect a passenger from the forces of impact in an accident depends on reducing those forces to levels that a person can withstand, either by spreading the impact over a larger part of the person’s body or by decreasing the duration of the impact through the use of energy-absorbing seats, an energy-absorbing fuselage and floors, or restraints such as seat belts or inflatable seat belt air bags adapted from automobile technology. In a 1996 study by R.G.W. Cherry & Associates, enhancing occupant restraint was ranked as the second most important of 33 potential ways to improve air crash survivability. Boeing officials noted that the industry generally agrees with this view but that FAA and the industry are at odds over the means of implementing these changes. According to an aviation safety expert, seats and restraints should be considered as a system that involves the seats themselves, seat restraints such as seat belts, seat connections to the floor, the spacing between seats, and furnishings in the cabin area that occupants could strike in an accident. To protect the occupant, a seat must not only absorb energy well but also stay attached to the floor of the aircraft. In other words, the “tie-down” chain must remain intact. Although aircraft seat systems are designed to withstand about 9 to 16 times the force of gravity, the limits of human tolerance to impact substantially exceed the aircraft and seat design limits.
A number of seat and restraint devices have been shown in testing to improve survivability in aviation accidents. Several options are to retrofit the entire current fleet with fully tested 16g seats, use rearward-facing seats, require three-point auto-style seat belts with shoulder harnesses, and install auto-style air bags. FAA regulations require seats for newly certified airplane designs to pass more extensive tests than were previously required to protect occupants from impact forces of up to 16 times the force of normal gravity in the forward direction; seat certification standards include specific requirements to protect against head, spine, and leg injuries (see fig. 5). FAA first required 16g seats and tests for newly designed, certificated airplanes in 1988; new versions of existing designs were not required to carry 16g seats. Since 1988, however, in anticipation of a fleetwide retrofit rule, manufacturers have increasingly equipped new airplanes with “16g-compatible” seats that have some of the characteristics of fully certified 16g seats. Certifying a narrow-body airplane type to full 16g seat certification standards can cost $250,000. In 1998, FAA estimated that 16g seats would avoid between about 210 and 410 fatalities and between 220 and 240 serious injuries over the 20-year period from 1999 through 2018. A 2000 study funded by FAA and the British Civil Aviation Authority estimated that if 16g seats had been installed in all airplanes that crashed from 1984 through 1998, between 23 and 51 fewer U.S. fatalities and between 18 and 54 fewer U.S. serious injuries would have occurred over the period. A number of accidents analyzed in that study showed no benefit from 16g seats because it was assumed that 16g seats would have detached from the floor, offering no additional benefits compared with older seats. Worldwide, the study estimated, about 333 fewer fatalities and 354 fewer serious injuries would have occurred during the period had the improved seats been installed.
Moreover, if fire risks had been reduced, the estimated benefits of 16g seats might have increased dramatically, as more occupants who were assumed to survive the impact but die in the ensuing fire would then have survived both the impact and fire. Seats that meet the 16g certification requirements are currently available and have been required on newly certificated aircraft designs since 1988. However, newly manufactured airplanes of older certification, such as Boeing 737s, 757s, or 767s, were not required to be equipped with 16g certified seats. Recently, FAA has negotiated with manufacturers to install full 16g seats on new versions of older designs, such as all newly produced 737s. In October 2002, FAA published a new proposal to create a timetable for all airplanes to carry fully certified 16g seats within 14 years. The comment period for the currently proposed rule ended in March 2003. Under this proposal, airframe manufacturers would have 4 years to begin installing 16g seats in newly manufactured aircraft only, and all airplanes would have to be equipped with full 16g seats within 14 years or when scheduled for normal seat replacement. FAA estimated that upgrading passenger and flight attendant seats to meet full 16g requirements would avert approximately 114 fatalities and 133 serious injuries over 20 years following the effective date of the rule. This includes 36 deaths that would be prevented by improvements to flight attendant seats that would permit attendants to survive the impact and to assist more passengers in an evacuation. FAA estimated the costs to avert 114 fatalities and 133 serious injuries at $245 million in present-value terms, or $519 million in overall costs, which, according to FAA’s analysis, would approximate the monetary benefits from the seats. FAA estimated that about 7.5 percent of airplane seats would have to be replaced before they would ordinarily be scheduled for replacement.
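The gap between the $519 million overall cost and the $245 million present-value cost reflects discounting of costs incurred in future years. A minimal sketch, assuming a flat annual cost stream and a 7 percent discount rate (both illustrative assumptions, not FAA’s actual inputs):

```python
def present_value(annual_costs, rate=0.07):
    # Discount a stream of year-end costs to today's dollars. The 7 percent
    # rate is an illustrative assumption, not FAA's actual discount rate.
    return sum(cost / (1 + rate) ** year
               for year, cost in enumerate(annual_costs, start=1))

# A flat $25.95 million per year for 20 years sums to $519 million
# undiscounted; the flat shape of the stream is assumed for illustration.
stream = [25.95e6] * 20
overall = sum(stream)
discounted = present_value(stream)
```

Under these assumptions the discounted figure comes to roughly half the undiscounted total, which is why present-value costs for a multiyear rule are always smaller than the overall costs.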
FAA’s October 2002 proposal divides seats into three classes according to their approximate performance level. Although FAA does not know how many seats of each type are in service, it estimates that about 44 percent of commercial-service aircraft are equipped with full 16g seats, 55 percent have 16g-compatible seats, and about 1 percent have 9g seats. The 16g-compatible or partial 16g seats span a wide range of capabilities; some are nearly identical to full 16g seats but have been labeled as 16g-compatible to avoid more costly certification, and other partial 16g seats offer only minor improvements over the older generation of 9g seats. To determine whether these seats have the same performance characteristics as full 16g seats, it may be sufficient, in some cases, to review the company’s certification paperwork; in other cases, however, full crash testing of actual 16g seats may be necessary to determine the level of protection provided. FAA is currently considering the comments it received on its October 2002 proposal. Industry comments raised concerns about general costs, the costs of retrofitting flight attendant seats, and the possibility that older airplanes designed for 9g seats might require structural changes to accommodate full 16g seats. One comment expressed the desire for FAA to give credit for, and “grandfather” in, at least some partial 16g seats.

In an accident, a passenger’s chances of survival depend on how well the passenger cabin maintains “living space” and the passenger is “tied down” within that space. Many experts and reports have noted floor retention—the ability of the aircraft cabin floor to remain intact and hold the passenger’s seat and restraint system during a crash—as critical to increasing the passenger’s chances of survival. Floor design concepts developed during the late 1940s and 1950s form the basis for the cabin floors found in today’s modern airplanes.
Accident investigations have documented failures of the floor system in crashes. New 16g seat requirements were developed in the 1980s; the seats were intended to be retrofitted on aircraft with traditional 9g floors and were designed to maximize the capabilities of existing floor strength. While 16g seats might be strong, they could also be inflexible and thus fail if the floor deformed in a crash. Under the current 16g requirement, the seats must remain attached to a deformed seat track and floor structure representative of that used in the airplane. To meet these requirements, the seat was expected to permanently deform to absorb and limit impact forces even if the 16g test conditions were exceeded during a crash. A major accident related to floor deformation occurred at Kegworth, England, in 1989. A Boeing 737-400 airplane flew into an embankment on approach to landing. In total, only 21 of the 52 triple seats—all “16g-compatible”—remained fully attached to the cabin floor; 14 of those that remained attached were in the area where the wing passes through the cabin, which is stronger than other areas because it supports the wing. In this section of the airplane, the occupants generally survived, even though they were exposed to an estimated peak level of 26gs. The front part of the airplane was destroyed, including the floor; most of these seats separated from the airplane, killing or seriously injuring the occupants. An FAA expert noted that the impact was too severe for the airplane to maintain its structural integrity and that 16g seats were not designed for an accident of that severity. The British Air Accidents Investigation Branch noted that fewer injuries occurred in the accident than would probably have been the case with earlier-generation seats. However, the Branch also noted that “relatively minor engineering changes could significantly improve the resilience and toughness of cabin floors . . .
and take fuller advantage of the improved passenger seats.” The Branch reported that where failures occurred, it was generally the seat track along the floor that failed, and not the seat, and that the rear attachments generally remained engaged with the floor, “at least partially due to the articulated joint built into the rear attachment, an innovation largely stemming from the FAA dynamic test requirements.” The Branch concluded that “seats designed to these dynamic requirements will certainly increase survivability” but “do not necessarily represent an optimum for the long term . . . if matched with cabin floors of improved strength and toughness.” Several reports have recommended structural improvements to floors. A case study of 11 major accidents for which detailed information was available found floor issues to be a major cause of injury or fatalities in 4 accidents and a minor cause in 1 accident. Another study estimated the past benefits of 16g seats in U.S. accidents between 1984 and 1998 and found no hypothetical benefit from 16g seats in a number of accidents because the floor was extensively disrupted during impact. In other words, unless the accidents had been less severe or the floor and seat tracks had been improved beyond the 9g standard on both new and old jets, newer 16g seats would not have offered additional benefits compared with the older seats that were actually on the airplane during the accidents under study. A research program on seat and floor strength was recently conducted by the French civil aviation authority, the Direction Générale de l’Aviation Civile. Initial findings of the research program on seat-floor attachments have not shown dramatic results: no rupture or plastic deformation of any cabin floor parts occurred during a 16g test. However, French officials noted that they plan to perform additional tests with more rigid seats.
Because many factors are involved, it is difficult to identify the interrelated issues and interactions between seats and floors. A possible area for future research, according to French officials, is to examine dynamic floor warping during a crash to improve impact performance. FAA officials said they have no plans to change floor strength requirements. FAA regulations require floors to meet impact forces likely to occur in “emergency landing conditions,” or generally about 9gs of longitudinal static force. According to several experts, stronger floors could improve the performance of 16g seats. In addition, further improvement in seats beyond the 16g standard would likely require improved floors.

In an airplane crash, overhead luggage bins in the cabin sometimes detach from their mountings along the ceiling and sidewalls and can fall completely or allow pieces of luggage to fall on passengers’ heads (see fig. 6). While only a few cases have been reported in which the impact from dislodged overhead bins was the direct cause of a crash fatality or injury, a study for the British Civil Aviation Authority that attempted to identify the specific characteristics of each fatality in 42 fatal accidents estimated that the integrity of overhead bin stowage was the 17th most important of 32 factors used to predict passenger survivability. Maintaining the integrity of bins may also help speed evacuation after a crash. Safer bins have been designed since bin problems were observed in a Boeing 737 accident in Kegworth, England, in 1989, when nearly all the bins failed and fell on passengers. FAA tested bins in response to that accident. The Kegworth bins were certified to the current FAA 9g longitudinal static loading standards, among others. When FAA subsequently conducted longitudinal dynamic loading tests on the types of Boeing bins involved, the bins failed. Several FAA experts said that the overhead bins on 737s had a design flaw.
FAA then issued an airworthiness directive that called for modifying all bins on Boeing 737 and 757 aircraft. The connectors for the bins were strengthened in accordance with the airworthiness directive, and the new bins passed FAA’s tests. The British Air Accidents Investigation Branch recommended in 1990 that the performance of both bins and latches be tested more rigorously, including the performance of bins “when subjected to dynamic crash pulses substantially beyond the static load factors currently required.” NTSB has made similar recommendations. Turbulence reportedly injures at least 15 U.S. cabin occupants a year, and possibly over 100. Most of these injuries are to flight attendants who are unrestrained. Some injuries are caused by luggage falling from bins that open in severe turbulence. Estimates of total U.S. airline injuries from bin-related falling luggage range from 1,200 to 4,500 annually, most of which occur during cruising rather than during boarding or disembarking. The study for the British Civil Aviation Authority noted above found that as many as 70 percent of impact-related accidents involve overhead bins that become detached. However, according to the report, bin detachment does not appear to be a major factor in occupants’ survival and data are insufficient to support a specific determination about the mechanism of failure. FAA has conducted several longitudinal and drop tests since the Kegworth accident, including drops of airplane fuselage sections with overhead storage bins installed. A 1993 dynamic vertical drop test showed some varying bin performance problems at about 36gs of downward force. An FAA longitudinal test in 1999 tested two types of bins at 6g, at the 9g FAA certification requirement, and at the 16g level; in the 16g longitudinal test, one of the two bins broke free from its support mountings.
In addition to the requirement that they withstand forward (longitudinal) loads of slightly more than 9gs, luggage bins must meet other directional loading requirements. Bin standards are part of the general certification requirements for all onboard objects of mass. FAA officials said that overhead bins no longer present a problem, appear to function as designed, and meet standards. An FAA official told us that problems such as those identified at Kegworth have not appeared in later crashes. Another FAA official said that while Boeing has had some record of bin problems, the problems are occasional and quickly rectified through design changes. Boeing officials told us that the evidence that bins currently have latch problems is anecdotal. Suggestions for making bins safer in an accident include adding features to absorb impact forces and keep bins attached and closed during structural deformation; using dynamic 16g longitudinal impact testing standards similar to those for seats; and storing baggage in alternative compartments in the main cabin, elsewhere in the aircraft, or under seats raised for that purpose. Using a correctly designed child safety seat that is strapped into an airplane seat offers protection to a child in an accident or turbulence (see fig. 6). By contrast, according to many experts, holding a child under 2 years old on an adult’s lap, which is permitted, is unsafe both for the child and for other occupants, who could be struck by the child in an accident. Requiring child safety seats for infants and small children on airplanes is one of NTSB’s “most wanted” transportation safety improvements. The British Air Accidents Investigation Branch made similar recommendations, as did a 1997 White House Commission report on aviation. An FAA analysis of survivable accidents from 1978 through 1994 found that 9 deaths, 4 major injuries, and 8 minor injuries to children occurred.
The analysis also found that the use of child safety seats would have prevented 5 deaths, all the major injuries, and 4 to 6 of the minor injuries. Child safety advocates have pointed to several survivable accidents in which children died—a 1994 Charlotte, North Carolina, crash; a 1990 Cove Neck, New York, accident; and a 1987 Denver, Colorado, accident—as evidence of the need for regulation. A 1992 FAA rule required airlines to allow child restraint systems, but FAA has opposed mandatory child safety seats on the basis of studies showing that requiring adults to pay for children’s seats would induce more car travel, which the studies concluded was more dangerous for children than airplane travel. One study published in 1995 by DOT estimated that if families were charged full fares for children’s seats, 20 percent would choose other modes of transportation, resulting in a net increase of 82 deaths among children and adults over 10 years. If child safety seats are required, airlines may require adults who wish to use them to purchase an extra seat to hold the child’s safety seat. FAA officials told us that they could not require that the seat next to a parent be kept open for a nonpaying child. However, NTSB has testified that the scenarios for passengers taking other modes of transportation are flawed because FAA assumed that airlines would charge full fares for infants currently traveling free. NTSB noted in 1996 that airlines would offer various discounts and free seats for infants in order to retain $6 billion in revenue that would otherwise be lost to auto travel. Airlines have already responded to parents who choose to use child restraint systems with scheduling flexibility, and many major airlines offer a 50 percent discount off any fare for a child under 2 to travel in an approved child safety seat.
The 1995 DOT study, however, estimated that even if a child’s seat on an airplane were discounted 75 percent, some families would still choose car travel and that the choice by those families to drive instead of fly would result in a net increase of 17 child and adult deaths over 10 years. In FAA tests, some but not all commercially available automobile child restraint systems have provided adequate protection in tests simulating airplane accidents. Prices range from less than $100 for a child safety seat marketed for use in both automobiles and airplanes to as much as $1,300 for a child safety seat developed specifically for use in airplanes. A drawback to having parents, rather than airlines, provide child safety seats for air travel is that some models may be difficult to fit into airplane seat belts. While the performance of standardized airline-provided seats may be better than that of the varied FAA-certified auto-airplane seats, one airline said that providing seats could present logistical problems. However, Virgin Atlantic Airlines supplies its own specially developed seats and prohibits parents from using their own child seats. Because turbulence can be a more frequent danger to unrestrained children than accidents, one expert told us that a compromise solution might include allowing some type of alternative in-flight restraint. Child safety seats are currently available for use on aircraft. The technical issues involved in designing and manufacturing safe seats for children to use in both cars and airplanes have largely been solved, according to FAA policy officials and FAA researchers. Federal regulations establish requirements for child safety seats designed for use in both highway vehicles and aircraft by children weighing up to 50 pounds.
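The modal-shift reasoning behind the DOT estimates above can be sketched as a simple expected-fatalities comparison. All of the rates and trip counts in this sketch are hypothetical placeholders chosen for illustration; they are not figures from the DOT study.

```python
# Illustrative sketch of the DOT-style modal-shift tradeoff; every rate and
# count below is a hypothetical placeholder, not a figure from the study.

def net_fatality_change(diverted_trips_per_year, car_deaths_per_trip,
                        air_deaths_per_trip, years):
    """Net change in deaths when some families drive instead of fly."""
    extra_car_deaths = diverted_trips_per_year * car_deaths_per_trip * years
    avoided_air_deaths = diverted_trips_per_year * air_deaths_per_trip * years
    return extra_car_deaths - avoided_air_deaths

# Hypothetical inputs: 100,000 diverted family trips a year over a decade,
# with driving assumed roughly 10 times deadlier per trip than flying.
change = net_fatality_change(100_000, 1e-6, 1e-7, 10)
print(round(change, 1))  # a positive value means requiring paid seats costs lives
```

The sign of the result is what the policy debate turns on: NTSB’s criticism amounts to arguing that discounted fares shrink the diverted-trips term enough to change the outcome.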
FAA officials explained that regulations requiring child safety seats have been delayed, in part, because of public policy concerns that parents would drive rather than fly if they were required to buy seats for their children. On February 18, 1998, FAA asked for comments on an advance notice of proposed rulemaking to require the use of child safety seats for children under the age of 2. FAA sponsored a conference in December 1999 to examine child restraint systems. At that conference, the FAA Administrator said the agency would mandate child safety seats in aircraft and provide children with the same level of safety as adults. FAA officials told us that they are still considering requiring the use of child safety seats but have not made a final decision to do so. If FAA does decide to provide “one level of safety” for adults and children, as NTSB advocates, parents may opt to drive to their destinations to avoid higher travel costs, thereby statistically exposing themselves and their children to more danger. In addition, FAA will have to decide whether the parents or the airlines will provide the seats. If FAA decides to require child safety seats, it will need to harmonize its requirements with those of other countries, where regulations on child restraint systems vary. In Canada, as in the United States, child safety seats are not mandatory on registered aircraft. In Europe, the regulations vary from country to country, but no country requires their use. Australia’s policy permits belly belts but discourages their use. An Australian official said in 1999 that Australia was waiting for the United States to develop a policy in this area and would probably follow that policy. Lap belts with inflatable air bags are designed to reduce the injuries or deaths that may result when a passenger’s head strikes the airplane interior.
These inflatable seat belts adapt advanced automobile air bag technology to airplane seats in the form of seat belts with embedded air bags. If a passenger loses consciousness because of a head injury in an accident, even a minor, nonfatal concussion can cause death if the airplane is burning and the passenger cannot evacuate quickly. Lengthening the duration of the impact with an air bag lessens its lethality. According to a manufacturer’s tests using airplane seats on crash sleds, lap belts with air bags can likely reduce some impact injuries to survivable levels. FAA does not require seats to be tested in sled tests for head impact protection when there would be “no impact” with another seat row or bulkhead wall, such as when spacing is increased to 42 inches from the more typical 35 inches. While more closely spaced economy class seat rows can provide head impact protection through energy-absorbing seat backs, seats in no impact positions have tested poorly in head injury experiments, resulting in severe head strikes against the occupants’ legs or the floor, according to the manufacturer. This no impact exemption from FAA’s head injury criteria can include exit rows, business class seats, and seats behind bulkhead walls and could permit as many as 30 percent of the seats in some airplanes to be exempt from the head impact safety criteria that row-to-row seats must meet. According to the manufacturer, 13 airlines have installed about 1,000 of the devices in commercial airliners, mainly at bulkhead seats; about 200 of these are installed in the U.S. fleet. All of the orders and installations so far have been made to meet FAA’s seat safety regulations rather than for marketing reasons, according to the manufacturer. The airlines would appear to benefit from using the devices in bulkhead seats if that would allow them to install additional rows of seats.
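The revenue argument above can be illustrated with a back-of-envelope breakeven calculation. Every figure in this sketch (seat revenue, number of equipped seats, device price) is an assumption chosen for illustration, not data from the manufacturer or FAA.

```python
# Back-of-envelope comparison: revenue from extra seats enabled by air bag
# lap belts vs. the cost of equipping an aircraft with the devices. All
# figures are assumptions for illustration only.

def airbag_belt_breakeven(extra_seats, revenue_per_seat_year,
                          seats_equipped, cost_per_device):
    """Years of extra-seat revenue needed to recover the device cost."""
    annual_revenue = extra_seats * revenue_per_seat_year
    total_cost = seats_equipped * cost_per_device
    return total_cost / annual_revenue

# Assumed: 2 extra seats at $60,000/year each; 30 bulkhead/exit-row seats
# equipped at $1,000 per device.
years = airbag_belt_breakeven(2, 60_000, 30, 1_000)
print(round(years, 2))  # -> 0.25 (about three months of extra revenue)
```

Under these assumed numbers the device cost is recovered quickly, which is consistent with the report’s observation that two additional seats may outweigh the cost of equipping an aircraft; different fares or fleet mixes would change the result.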
While the amount of additional revenue would depend on the airplane design and class of seating, two additional seats may produce more net revenue per year than the cost of installing the devices throughout an aircraft. Economic constraints include acquisition costs, maintenance costs, and increased fuel costs due to added weight. The units currently weigh about 3 pounds per seat, or 2 pounds more than current seat belts. According to the manufacturer, the air bag lap belts currently cost $950 to $1,100, including maintenance. The manufacturer estimated that if 5 percent of all U.S. seat positions were equipped with the devices (about 50,000 seats per year), the cost would drop to about $300 to $600 per seat, including installation. Lap belt air bags have been commercially available for only a few years. FAA’s Civil Aerospace Medical Institute assisted the developers of the devices; manufacturers for both passenger and military use (primarily helicopter) are conducting ongoing research. FAA and other regulatory bodies have no plans to require their installation, but airlines are allowed to use them. The extent to which these devices are installed will depend on each airline’s analysis of the costs and benefits. This appendix presents information on the background and status of potential advancements in fire safety that we identified, including the following: preventing fuel tank explosions with fuel tank inerting; preventing in-flight fires with arc fault circuit breakers; identifying in-flight fires with multisensor fire and smoke detectors; suppressing in-flight and postcrash fires by using water mist fire suppression systems; mitigating postcrash damage and injury by using less flammable fuels; mitigating in-flight and postcrash fires by using fire-resistant thermal insulation; mitigating fire-related deaths and injuries by using ultra-fire-resistant materials; and mitigating fire deaths and injuries with sufficient airport rescue and fire fighting.
Fuel tank inerting involves pumping nitrogen-enriched air into an airliner’s fuel tanks to reduce the concentration of oxygen to a level that will not support combustion. Nitrogen gas makes a fuel tank safer by serving as a fire suppressant. The process can be performed with both ground-based and onboard systems, and it significantly reduces the flammability of the center wing tanks, thereby lowering the likelihood of a fuel tank explosion. Following the crash of TWA Flight 800 in 1996, in which 230 people died, NTSB determined that the probable cause of the accident was an explosion in the center wing fuel tank. The explosion resulted from the ignition of flammable fuel vapors in this tank, which is located in the fuselage in the space between the wing junctions. NTSB subsequently placed the improvement of fuel tank design on its list of “Most Wanted Safety Improvements” and recommended that fuel tank inerting be considered an option to eliminate the likelihood of fuel tank explosions. FAA issued Special Federal Aviation Regulation 88 to eliminate or minimize the likelihood of ignition sources by revisiting the fuel tank’s design. Issued in 2001, the regulation consists of a series of FAA regulatory actions aimed at preventing the failure of fuel pumps and pump motors, fuel gauges, and electrical power wires inside these fuel tanks. In late 2002, FAA amended the regulation to allow for an “equivalent level of safety” and the use of inerting as part of an alternate means of compliance. In a 2001 report, an Aviation Rule-making Advisory Committee tasked with evaluating the benefits of inerting the center wing fuel tank estimated these benefits in terms of lives saved. After projecting possible in-flight and ground fuel tank explosions and postcrash fires from 2005 through 2020, the committee estimated that 132 lives might be saved from a ground-based system and 253 lives might be saved from an onboard system. 
Neither of the two major types of fuel tank inerting—ground-based and onboard—is currently available for use on commercial airliners because additional development is needed. Both types offer benefits and drawbacks. A ground-based system sends a small amount of nitrogen into the center wing tank before departure. Its benefits include that (1) it requires no new technology development for installation, (2) the tank can be inerted in 20 minutes, and (3) it carries a lesser weight penalty. Its drawbacks include that it is unable to inert during descent, landing, and taxiing to the destination gate, and that nitrogen supply systems are needed at each terminal gate and remote parking area at every airport. An onboard system generates nitrogen by transferring some of the engine bleed air—air extracted from the jet engines to supply the cabin pressurization system in normal flight—through a module that separates air into oxygen and nitrogen and discharges the nitrogen-enriched air into the fuel tank. Its benefits include that (1) it is self-reliant and (2) it significantly reduces an airplane’s vulnerability to lightning, static electricity, and incendiary projectiles throughout the flight’s duration. Its drawbacks include that it (1) weighs more, (2) increases the aircraft’s operating costs, and (3) may decrease the aircraft’s reliability. According to FAA, its fire safety experts’ efforts to develop a lighter-weight system for center wing tank inerting have significantly increased the industry’s involvement. Boeing and Airbus are working on programs to test inerting systems in flight. For example, Boeing has recently completed a flight test program with a prototype system on a 747. None of the U.S. commercial fleet is equipped with either ground-based or onboard inerting systems, though onboard systems are in use in U.S. and European military aircraft. Companies working in this field are focused on developing new inerting technologies or modifying existing ones.
A European consortium is developing a system that combines onboard center wing fuel tank inerting with sensors and a water-mist-plus-nitrogen fire suppression system for commercial airplanes. In late 2002, FAA researchers successfully ground-tested a prototype onboard inerting system using current technology on a Boeing 747SP. New research also enabled the agency to ease a design requirement, making the inerting technology more cost-effective. This new research showed that reducing the oxygen level in the fuel tank to 12 percent—rather than 9 percent, as was previously thought—is sufficient to prevent fuel tank explosions in civilian aircraft. FAA also developed a system that did not need the compressors that some had considered necessary. Together, these findings allowed for reductions in the size and power demands of the system. FAA plans to focus further development on the more practical and cost-effective onboard fuel tank inerting systems. For example, to further improve their cost-effectiveness, the systems could be designed both to suppress in-flight cargo fires, thereby allowing them to replace Halon extinguishing agents, and to generate oxygen for emergency depressurizations, thereby allowing them to replace stored oxygen or chemical oxygen generators. NASA is also conducting longer-term research on advanced technology onboard inert gas-generating systems and onboard oxygen-generating systems. Its research is intended (1) to develop the technology to improve its efficiency, weight, and reliability and (2) to make the technology practical for commercial air transport. NASA will fund the development of emerging technologies for ground-based technology demonstration in fiscal year 2004. NASA is also considering the extension of civilian transport inerting technology to all fuel tanks to help protect airplanes against terrorist acts during approaches and departures.
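Why the relaxed 12 percent oxygen target makes inerting systems smaller can be shown with a simple well-mixed purge model of the tank ullage. The model and the 5 percent nitrogen-enriched-air (NEA) purity are simplifying assumptions for illustration; only the 12 percent and 9 percent targets come from the research described above.

```python
import math

# Well-mixed purge model for fuel tank ullage inerting: continuously feeding
# nitrogen-enriched air (NEA) with oxygen fraction f into the ullage gives
#   O2(n) = f + (0.21 - f) * exp(-n)
# after n ullage-volume exchanges. The model and the 5% NEA purity are
# assumptions for illustration; only the 12% and 9% targets reflect the
# research findings described in the text.

def volume_exchanges_needed(target_o2, nea_o2=0.05, ambient_o2=0.21):
    """Ullage-volume exchanges of NEA needed to dilute oxygen to the target."""
    return math.log((ambient_o2 - nea_o2) / (target_o2 - nea_o2))

print(round(volume_exchanges_needed(0.12), 2))  # ~0.83 exchanges for 12%
print(round(volume_exchanges_needed(0.09), 2))  # ~1.39 exchanges for the old 9%
```

Under these assumptions, the 12 percent target needs roughly 40 percent less nitrogen-enriched air than the 9 percent target, which is why easing the requirement reduced the size and power demands of the system.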
The cost of the system, its corresponding weight, and its unknown reliability are the most significant factors affecting the potential use of center wing fuel tank inerting. New cost and weight estimates are anticipated in 2003. In 2001, FAA estimated the total costs to equip the worldwide fleet at $9.9 billion for ground-based, and $20.8 billion for onboard, inerting systems. In 2002, FAA officials developed an onboard system for B-747 flight-testing. The estimated cost was $460,000. The officials estimated that each system after that would cost about $200,000. The weight of the FAA prototype system is 160 pounds. A year earlier, NASA estimated the weight for a B-777 system with technology in use in military aircraft at about 550 pounds. Arcing faults in wiring may provide an ignition source that can start fires. Electrical wiring that is sufficiently damaged can cause arcing or direct shorting, resulting in smoking, overheating, or ignition of neighboring materials. A review of data produced by FAA, the Airline Pilots Association, and Boeing showed that electrical systems have been a factor in approximately 50 percent of all aircraft occurrences involving smoke or fire and that wiring has been implicated in about 10 percent of those occurrences. In addition, faulty or malfunctioning wiring has been a factor in at least 15 accidents or incidents investigated by NTSB since 1983. Proper selection, routing, clamping, tying, replacement, marking, and separation of wiring, along with cleaning around wiring areas and proper maintenance, all help mitigate the potential for wire system failures, such as arcing, that could lead to smoke, fire, and loss of function. Chemical degradation, age-induced cracking, and damage due to maintenance can all create conditions that lead to arcing. Arcing can occur between a wire and structure or between different wire types.
Wire chafing is a sign of degradation; chafing happens when the insulation around one wire rubs against a component tougher than itself (such as structure or a control cable), exposing the wire conductor. This condition can lead to arcing. When arcing wires are too close to flammable materials or their insulation is itself flammable, fires can occur. In general, wiring and wiring insulation degrade for a variety of reasons, including age, inadequate maintenance, chemical contamination, improper installation or repair, and mechanical damage. Vibration, moisture, and heat can contribute to and accelerate degradation. Consequences of wire system failures include loss of function, smoke, and fire. Since most wiring is bundled and located in hidden or inaccessible areas, it is difficult to monitor the health of an aircraft’s wiring system during scheduled maintenance using existing equipment and procedures. Failure occurrences have been documented in wiring running to the fuel tank, in the electronics equipment compartment, in the cockpit, in the ceiling of the cabin, and in other locations. To address the concerns with arcing, arc fault circuit breakers for aircraft use are being developed. The arc fault circuit breaker cuts off power as it senses a wire beginning to arc. It is intended to prevent significant damage before a failure develops into a full-blown arc, which can produce extremely localized heat, char insulation, and generally create problems in the wire bundles. Arc fault circuit protection devices would mitigate arcing events but will not identify the wire breaches and degradation that typically lead up to these events. FAA, the Navy, and the Air Force are jointly developing arc fault circuit breaker technology. Boeing is also developing a monitoring system that detects the status of and changes in wiring and shuts down power when arcing is detected.
This system may be able to protect wiring against both electrical overheating and arcing and is considered more advanced than the government’s circuit breaker technology. FAA developed a plan called the Enhanced Airworthiness Program for Airplane Systems to address wiring problems; the plan includes the development of arc fault circuit breaker technology and installation guidance, along with proposals for new regulations. The plan provides means for enhancing safety in the areas of wire system design, certification, maintenance, research and development, reporting, and information sharing and outreach. FAA also tasked an Aging Transport Systems Rule-making Advisory Committee to provide data, recommendations, and evaluation specifically on aging wiring systems. The new regulations being considered are entitled the Enhanced Airworthiness Program for Airplane Systems Rule and are expected by late 2005. Under this rule-making package, inspections would evaluate the health of wiring and all of its operational components, such as connectors and clamps. Part of the system includes visual inspections of all wiring within arm’s reach, enhanced by the use of hand-held mirrors. This improvement is expected to catch more wiring flaws than current visual inspection practices. Where visual inspections cannot be assumed to detect damage, detailed inspections will be required. The logic process for establishing proper inspections, called the Enhanced Zonal Analysis Procedure, will be issued as an Advisory Circular. This procedure is specifically directed toward enhancing the maintenance programs of aircraft whose current programs do not include tasks derived from a process that specifically considers wiring in all zones as a potential source of ignition of a fire. Additional development and testing will be required before advanced arc fault circuit breakers will be available for use on aircraft.
FAA is currently conducting a prototype program in which arc fault circuit breakers are installed in an anticollision light system on a major air carrier’s Boeing 737. FAA and the Navy are currently analyzing tests of the circuit breakers to assess their reliability. The Society of Automotive Engineers is in the final stages of developing a Minimum Operating Performance Specification for the arc fault circuit breaker. Multisensor detectors, or “electronic noses,” could combine one or more standard smoke detector technologies; a variety of sensors for detecting such gases as carbon monoxide, carbon dioxide, or hydrocarbons; and a thermal sensor to more accurately detect and locate overheated or burning materials. The sensors could improve existing fire detection by discovering and locating potential or actual fires sooner and reducing the incidence of false alarms. These “smart” sensors would ignore “nuisance sources” such as dirt, dust, and condensation that are often responsible for triggering false alarms in existing systems. According to studies by FAA and the National Institute of Standards and Technology, many current smoke and fire detection systems are not reliable. A 2000 FAA study indicated that cargo compartment detection systems, for example, resulted in at least one false alarm per week from 1988 through 1990 and a 200:1 ratio of false alarms to actual fires in the cargo compartment from 1995 through 1999. FAA has since estimated a 100:1 cargo compartment false alarm ratio, partly because reported actual incidents have increased. According to FAA’s Service Difficulty Report database, about 990 actual smoke and fire events were reported for 2001. Multisensor detectors could be wired or wireless and linked to a suppression system.
One or several sensor signals or indicators could cause the crew to activate fire extinguishers in a small area or zone, a larger area, or an entire compartment, resulting in a more appropriate and accurate use of the fire suppressant. For example, in areas such as the avionics compartment, materials that can burn are relatively well-defined. Multisensor detectors the size of a postage stamp could be designed to detect smoldering fires in cables or insulation or in overheated equipment in that area. Placing the detectors elsewhere in the airplane could improve the crew’s ability to respond to smoke or fire, including occurrences in hidden or inaccessible areas. Improved sensor detection technologies would both enhance safety by increasing crews’ confidence in the reliability of alarms and reduce costs by avoiding the need to divert aircraft in response to false alarms. One study estimated the average cost of a diversion at $50,000 for a wide-body airplane and $30,000 for a narrow-body airplane. A diversion can also present safety concerns because of the possible increased risk of an accident and injuries to passengers and crew if there is (1) an emergency evacuation, (2) a landing at an unfamiliar airport, (3) a change to air traffic patterns, (4) a shorter runway, (5) inferior fire-fighting capability, (6) a loss of cargo load, or (7) inferior navigation aids. In 2002, 258 unscheduled landings due to smoke, fire, or fumes occurred. In addition, 342 flights were interrupted; some of these flights had to return to the gate or abort a takeoff. FAA established basic detector performance requirements in 1965 and 1980. Detectors were to be made and installed in a manner that ensured their ability to resist, without failure, all vibration, inertia, and other loads to which they might normally be subjected; they also had to be unaffected by exposure to fumes, oil, water, or other fluids. 
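The cost case for more reliable detectors can be roughed out from the diversion figures above. The per-diversion costs ($50,000 wide-body, $30,000 narrow-body) and the 258 unscheduled landings for 2002 come from the report; the 30/70 wide-body/narrow-body split is an assumed mix for illustration.

```python
# Rough annual cost of smoke/fire/fume diversions, using the report's
# per-diversion cost estimates and the 258 unscheduled landings reported
# for 2002. The 30% wide-body share is an assumption for illustration.

def diversion_cost(total_diversions, wide_body_share,
                   wide_cost=50_000, narrow_cost=30_000):
    """Estimated total cost of diversions for an assumed fleet mix."""
    wide = total_diversions * wide_body_share
    narrow = total_diversions * (1 - wide_body_share)
    return wide * wide_cost + narrow * narrow_cost

print(f"${diversion_cost(258, 0.30):,.0f}")  # total under the assumed 30% wide-body mix
```

Under these assumptions the 2002 diversions alone cost on the order of $9 million, before counting interrupted flights or the safety risks of unplanned landings; reducing false alarms would recover part of that cost.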
Regulations in 1986 and 1998 further defined basic location and performance requirements for detectors in different areas of the cargo compartment. In 1998, FAA issued a requirement for detection and extinguishment systems for one class of cargo compartments, which relied on oxygen starvation to control fires. This requirement significantly increased the number of detectors in use. Multisensor detectors are not currently available because additional research is needed. Although they have been demonstrated in the laboratory and on the ground, they have not been flight-tested. FAA and NASA have multisensor detector research and development efforts under way and are working to develop “smart” sensors and criteria for their approval. FAA will also finish revising an Advisory Circular that establishes test criteria for detection systems, designed to ensure that they respond to fires but not to nonfire sources. In addition, several companies currently market “smart” detectors, mostly for nonaviation applications. For example, thermal detection systems sense and count certain particles that initially boil off the surface of smoldering or burning material. A European consortium has been developing a system, FIREDETEX, that combines the use of multisensor detectors, onboard fuel tank inerting, and water-mist-plus-nitrogen fire suppression systems for commercial airplanes. This program and associated studies are still ongoing, and flight testing is planned for the last quarter of calendar year 2003. The results of tests on this system are expected to be made public in early 2004 and will help to clarify the possible costs, benefits, and drawbacks of the combined system. Additional research, development, and testing will be required before multisensor technology is ready for use in commercial aviation. NASA, FAA, and private companies are pursuing various approaches. Some experts believe that some forms of multisensor technology could be in use in 5 years.
When these units become available, questions may arise about where their use will be required. For example, the Canadian Transportation Safety Board has recommended that some areas in addition to those currently designated as fire zones may need to be equipped with detectors. These include the electronics and equipment bay (typically below the floor beneath the cockpit and in front of the passenger cabin), areas behind interior wall panels in the cockpit and cabin areas, and areas behind circuit breaker and other electronic panels. For over two decades, the aviation industry has evaluated the use of systems that spray water mist to suppress fires in airliner cabins, cargo compartments, and engine casings (see fig. 7). This effort was prompted, in part, by a need to identify an alternative to Halon, the primary chemical used to extinguish fires aboard airliners. With few exceptions, Halon is the sole fire suppressant installed in today’s aircraft fire suppression systems. However, the production of Halon was banned under the 1987 Montreal Protocol on Substances that Deplete the Ozone Layer, and its use in many noncritical sectors has been phased out. Significant reserves of Halon remain, and its use is still allowed in certain “critical use” applications, such as aerospace, because no immediate viable replacement agent exists. To enable the testing and further development of suitable alternatives to and substitutes for Halon, FAA has drafted detailed standards for replacements in the cargo and engine compartments. These standards typically require replacement systems to provide the same level of safety as the currently used Halon extinguishing system. 
According to FAA and others in the aviation industry, successful water mist systems could provide benefits against an in-flight or postcrash fire, including cooling the passengers, cabin surfaces, furnishings, and overall cabin environment; decreasing toxic smoke and irritant gases; and delaying or preventing “flashover” fires from occurring. In addition, a 1996 study prepared for the British Civil Aviation Authority examined 42 accidents and 32 survivability factors and found that cabin water spray was the factor that showed the greatest potential for reducing fatality and injury rates. In the early 1990s, a joint FAA and Civil Aviation Authority study found that cabin water mist systems would be highly effective in improving survivability during a postcrash fire. However, the cost of using these systems outweighed the benefits, largely because of the weight of the water that airliners would be required to carry to operate them. In the mid- and late 1990s, FAA and others began examining water mist systems in airliner cargo compartments to help offset the cost of a cabin water mist system because the water could be used or shared by both the cargo compartment and the cabin. European and U.S. researchers also designed systems that required much less water because they targeted specific zones within an aircraft to suppress fires rather than spraying water throughout the cabin or the cargo compartment. In 2000, Navy researchers found a twin-fluid system to be highly reliable and maintenance-free. Moreover, this system’s delivery nozzles could be installed without otherwise changing cabin interiors. The Navy researchers’ report recommended that FAA and NTSB perform follow-up testing leading to the final design and certification of an interior water mist fire suppression system for all passenger and cargo transport aircraft.
Also in 2000, a European consortium began a collaborative research project called FIREDETEX, which combines multisensor fire detectors, water mist, and onboard fuel tank inerting into one fire detection and suppression system. In 2001 and 2002, FAA tested experimental mist systems to determine what could meet its preliminary minimum performance standards for cargo compartment suppression systems. A system that combines water mist with nitrogen met these minimum standards. In this system, water and nitrogen “knock down” the initial fire, and nitrogen suppresses any deep-seated residual fire by inerting the entire compartment. In cargo compartment testing, this system maintained cooler temperatures than had either a plain water mist system or a Halon-based system. Additional research and testing are needed before water mist technology can be considered for commercial aircraft. For example, the weight and relative effectiveness of any water mist system would need to be considered and evaluated. In addition, the consequences of using water will need to be further evaluated before it could be used in aircraft. For example, Boeing officials noted that using a water mist fire suppression system in the cabin in a postcrash fire might actually reduce passenger safety if the mist or fog creates confusion among the passengers, leading to longer evacuation times. A further concern is the possible shorting of electrical wiring and equipment, as well as damage to aircraft interiors (e.g., seats, entertainment equipment, and insulation). Water cleanup could also be difficult and require special drying equipment. Burning fuel typically dominates and often overwhelms postcrash fire scenarios and causes even the most fire-resistant materials to burn. Fuel spilled from tanks ruptured upon crash impact often forms an easily ignitable fuel-air mixture.
A more frequent fuel-related problem is the fuel tank explosion, in which a volatile fuel-air mixture inside the fuel tank is ignited, often by an unknown source. For example, it is believed that fuel tank explosions destroyed a Philippines Air 737 in 1990, TWA Flight 800 in 1996, and a Thai Air 737 in 2001. Therefore, reducing the flammability of fuel could improve survivability in postcrash fires as well as reduce the occurrence of fuel tank explosions. Reducing fuel flammability involves limiting the volatility of fuel and the rate at which it vaporizes. Liquid fuel can burn only when enough fuel vapor is mixed with air. If the fuel cannot vaporize, a fire cannot occur. This principle is behind the development of higher-flashpoint fuel, whose use can decrease the likelihood of a fuel tank explosion. The flash point is the lowest temperature at which a liquid fuel produces enough vapor to ignite in the presence of a source of ignition—the lower the flash point, the greater the risk of fire. If the fuel is volatile enough, however, and air is sucked into the fuel tank area upon crash impact, limiting the fuel’s vaporization can prevent a burnable mixture from forming. This principle supports the use of additives that modify the viscosity of fuel to limit postcrash fires; for example, antimisting kerosene contains such additives. According to FAA and NASA, however, these additives do nothing to prevent fuel tank explosions. From the early 1960s to the mid-1980s, FAA conducted research on fuel safety. The Aviation Safety Act of 1988 required that FAA undertake research on low-flammability aircraft fuels, and, in 1993, FAA developed plans for fuel safety research. In 1996, a National Research Council experts’ workshop on aviation fuel summarized existing fuel safety research efforts. The participants concluded that although postcrash fuel-fed aircraft fires had been researched, limited progress had been achieved and little work had been published.
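The flash-point principle described above can be illustrated with a short sketch: a liquid fuel can form an ignitable vapor-air mixture only when its temperature is at or above its flash point. The fuel names and flash-point values below are approximate, commonly cited figures used here only as illustrative assumptions; they are not data from this report.

```python
# Illustrative sketch of the flash-point principle. The flash-point
# values are approximate, commonly cited figures (assumptions for
# illustration only, not measurements from this report).

FLASH_POINTS_C = {
    "Jet A (commercial)": 38,   # roughly 38 deg C flash point
    "JP-5 (Navy)": 60,          # higher flash point, harder to ignite
}

def can_form_ignitable_vapor(fuel: str, fuel_temp_c: float) -> bool:
    """Return True if the fuel is warm enough to produce an
    ignitable vapor-air mixture (at or above its flash point)."""
    return fuel_temp_c >= FLASH_POINTS_C[fuel]

# At a 45 deg C fuel temperature, the lower-flash-point commercial
# fuel can form an ignitable vapor while the Navy fuel cannot:
for fuel in FLASH_POINTS_C:
    print(fuel, can_form_ignitable_vapor(fuel, 45.0))
```

This is why, as the report notes, a higher-flashpoint fuel is harder to ignite in storage even though it burns normally once sprayed into an engine combustor.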
As part of FAA’s research, fuels have been modified with thickening polymer additives to slow down vaporization in crashes. Participants in the 1996 National Research Council workshop identified several long-term research goals for consideration in developing modified fuels and fuel additives to improve fire safety. They also agreed that a combination of effective fire-safe fuel additives could probably be either selected or designed, provided that fuel performance requirements were identified in advance. In addition, they agreed that existing aircraft designs that reduce the chance of fuel igniting do not present major barriers to the implementation of a fire-safe fuel. A 1996 European Transport Safety Council report suggested that antimisting kerosene be at least partially tested on regular military transport flights (e.g., in one tank, feeding one engine) to demonstrate its operational compatibility. The report also recommended the consideration of a study comparing the costs of the current principal commercial fuel and the special, higher-flashpoint fuel used by the Navy. According to NASA and FAA fire-safe fuels experts, military fuel is much harder to burn in storage or to ignite in a pan because of its lower volatility; however, it is just as flammable as aviation fuel when it is sprayed into an engine combustor. Fire-safe fuels are not currently available and are in the early stages of research and development. In January 2002, NASA opened a fire-safe fuels research branch at its Glenn Research Center in Ohio. NASA-Glenn is conducting aviation fuel research that evaluates fuel vapor flammability in conjunction with FAA’s fuel tank inerting program, including the measurement of fuel “flash points.” NASA is examining the effects of surfactants, gelling agents, and chemical composition changes on the vaporization and pressure characteristics of jet fuel. 
In addition to FAA’s and NASA’s research, some university and industry researchers have made progress in developing fire-safe fuels. Many use advanced analytical, computational modeling technologies to inform their research. A council of producers and users of fuels is also coordinating research on ways to use such fuels. NASA fuel experts remain optimistic that small changes in fuel technologies can have a big impact on fuel safety. Developing fire-safe fuels will require much more research and testing. There are significant technical difficulties associated with creating a fuel that meets aviation requirements while meaningfully decreasing the flammability of the fuel.

To keep an airplane quieter and warmer, a layer of thermal acoustic insulation material is connected to paneling and walls throughout the aircraft. This insulation, if properly designed, can also prevent or limit the spread of an in-flight fire. In addition, thermal acoustic insulation provides a barrier against a fire burning through the cabin from outside the airplane’s fuselage. (See fig. 8.) Such a fire, often called a postcrash fire, may occur when fuel is spilled on the ground after a crash or an impact. While this thermal acoustic insulation material could help prevent the spread of fire, some of the insulation materials that have been used in the past have contributed to fires. For example, FAA indicated that an insulation material, called metallized Mylar®, contributed to at least six in-flight fires. Airlines have stopped using this material and are removing it from existing aircraft. FAA’s two main efforts in this area are directed toward preventing fatal in-flight fires and improving postcrash fire survivability. Since 1998, FAA has been developing test standards for preventing in-flight fires in response to findings that fire spread on some thermal acoustic insulation blanket materials.
In 2000, FAA issued a notice of proposed rule-making that outlined new flammability test criteria for in-flight fires. FAA’s in-flight test standards are intended to protect passengers by requiring that thermal acoustic insulation materials installed in airplanes not propagate a fire if ignition occurs. FAA is also developing more stringent burnthrough test standards for postcrash fires. FAA has been studying the penetration of the fuselage by an external fire—known as fuselage burnthrough—since the late 1980s and believes that improving the fire resistance of thermal acoustic insulation could delay fuselage burnthrough. In laboratory tests conducted from 1999 through 2002, an FAA-led working group determined that insulation is the most potentially effective and practical means of delaying the spread of fire or creating a barrier to burnthrough. In 2002, FAA completed draft burnthrough standards outlining a proposed methodology for testing thermal acoustic insulation. The burnthrough standards would protect passengers and crews by extending the time available for evacuation in a postcrash fire by at least 4 minutes. Various studies have estimated the potential benefits from both test standards: A 1999 study of worldwide aviation accidents from 1966 through 1993 estimated that about 10 lives per year would have been saved if protection had provided an additional 4 minutes for occupants to exit the airplane. A 2000 FAA study estimated that about 37 U.S. fatalities would be avoided between 2000 and 2019 through the implementation of both proposed standards. A 2002 study by the British Civil Aviation Authority of worldwide aviation accidents from 1991 through 2000 estimated that at least 34 lives per year would have been saved if insulation had met both proposed standards. Insulation designed to replace metallized Mylar® is currently available.
A 2000 FAA airworthiness directive gave the airlines 5 years to remove and replace metallized Mylar® insulation in 719 affected airplanes. Replacement insulation is required to meet the in-flight standard and will be installed in these airplanes by mid-2005. In that airworthiness directive, FAA indicated that it did not consider other currently installed insulation to constitute an unsafe condition. Thermal acoustic insulation is currently available for installation on commercial airliners. This insulation has been demonstrated to reduce the chance of fatal in-flight fires and to improve postcrash fire survivability. On July 31, 2003, FAA issued a final rule requiring that after September 2, 2005, all newly manufactured airplanes having a seating capacity of more than 20 passengers or a weight of over 6,000 pounds must use thermal acoustic insulation that meets more stringent standards for how quickly flames can spread. In addition, for aircraft of this size manufactured before September 2, 2003, replacement insulation in the fuselage must also meet the new, higher standard. Research is continuing to develop thermal acoustic insulation that provides better in-flight and burnthrough protection. Even when this material is available, the high cost of retrofitting airplanes may limit its use to newly manufactured aircraft. For example, FAA estimates that the metallized Mylar® retrofit alone will cost a total of $368.4 million, discounted to present value terms, for the 719 affected airplanes. Because thermal acoustic insulation is installed throughout the pressurized section of the airplane for the life of its service, retrofitting the entire fleet would cost several billion dollars.

Polymers are used in aircraft in the form of lightweight plastics and composites and are selected on the basis of their estimated installed cost, weight, strength, and durability. Most of the aircraft cabin is made of polymeric material.
In the event of an in-flight or a postcrash fire, the use of polymeric materials with reduced flammability could give passengers and crew more time to evacuate by delaying the rate at which the fire spreads through the cabin. FAA researchers are developing better techniques to measure the flammability of polymers and to make polymers that are ultra fire resistant. Developing these materials is the long-term goal of FAA’s Fire Research Program, which, if successful, will “eliminate burning cabin materials as a cause of death in aircraft accidents.” Materials being improved include composite and adhesive resins, textile fibers, rubber for seat cushions, and plastics for molded parts used in seats and passenger electronics. (See fig. 9.) Adding flame-retardant substances to existing materials is one way to decrease their flammability. For example, some manufacturers add substances that release water when they reach a high temperature. When a material, such as wiring insulation, is heated or burns, the water acts to absorb the heat and cools down the fire. Other materials are designed to become surface-scorched on exposure to fire, causing a layer of char to protect the rest of the material from burning. Lastly, adding a type of clay can have a flame-retardant effect. In general, these fire-retardant polymers are formulated to pass an ignition test but do not meet FAA’s criterion for ultra fire resistance, which is a 90 percent reduction in the rate at which the untreated material would burn. To meet this strict requirement, FAA is developing new “smart” polymers that are typical plastics under normal conditions but convert to ultra-fire-resistant materials when exposed to an ignition source or fire.

FAA has adopted a number of flammability standards over the last 30 years. In 1984, FAA issued a retrofit rule requiring the replacement of 650,000 seat cushions with flame-retardant cushions, at a total cost of about $75 million.
The replacement seat cushions were found to delay cabin flashover by 40 to 60 seconds. Fire-retardant seat cushions can also prevent ramp and in-flight fires that originate at a seat and would otherwise burn out of control if left unattended. In 1986 and 1988, FAA set maximum allowable levels of heat and smoke from burning interior materials to decrease the amount of smoke that they would release in a postcrash fire. These standards affected paneling in all newly manufactured aircraft. Airlines and airframe manufacturers invested several hundred million dollars to develop these new panels.

Ultra-fire-resistant polymers are not currently available for use on commercial airliners. These polymers are still in the early stages of research and development. To reduce the cost and simplify the testing of new materials, FAA is employing a new technique to characterize the flammability and thermal decomposition of new products; this technique requires only a milligram of sample material. The result has been the discovery of several new compositions of matter (including “smart” polymers). The test identifies key thermal and combustion properties that allow rapid screening of new materials. From these materials, FAA plans to select the most promising and work with industry to make enough of the new polymers to fabricate full-scale cabin components like sidewalls and stowage bins for fire testing. FAA’s phased research program includes the selection in 2003 of a small number of resins, plastics, rubbers, and fibers on the basis of their functionality, cost, and potential to meet fire performance guidelines. In 2005, FAA plans to fabricate decorative panels, molded parts, seat cushions, and textiles for testing from 2007 through 2010. Full-scale testing is scheduled for 2011 but is contingent upon the availability of program funds and commercial interest from the private sector.
Research continues on ultra-fire-resistant polymers that will increase protection against in-flight fires and cabin burnthrough. According to an FAA fire research expert, issues facing this research include (1) the current high cost of ultra-fire-resistant polymers; (2) difficulties in producing ultra-fire-resistant polymers with low to moderate processing temperatures, good strength and toughness, and colorability and colorfastness; and (3) gaps in understanding the relationship between material properties and fire performance and between chemical composition and fire performance, scaling relationships, and fundamental fire-resistance mechanisms. In addition, once the materials are developed and tested, getting them produced economically and installed in aircraft will become an issue. It is expected that such new materials with ultra fire resistance would be more expensive to produce and that the market for such materials would be uncertain.

Because of the fire danger following a commercial airplane crash, having airport rescue and fire-fighting operations available can improve the chances of survival for the people involved. Most airplane accidents occur during takeoff or landing at the airport or in the surrounding community. A fire outside the airplane, with its tremendous heat, may take only a few minutes to burn through the airplane’s outside shell. According to FAA, firefighters are responsible for creating an escape path by spraying water and chemicals on the fire to allow the passengers and crew to evacuate the airplane. Firefighters use one or more trucks to extinguish external fires, often at great personal risk, and use hand-held attack lines when attempting to put out fires within the airplane fuselage. (See fig. 10.) Fires within the fuselage are considered difficult to control with existing equipment and procedures because they involve complex conditions, such as smoke-laden toxic gases and high temperatures in the passenger cabin.
FAA has taken actions to control both internal and external postcrash fires, including requiring major airports to have airport rescue and fire-fighting operations. In 1972, FAA first proposed regulations to ensure that major airports have a minimal level of airport rescue and fire-fighting operations. Some changes to these regulations were made in 1988. The regulations establish, among other things, equipment standards, annual testing requirements for response times, and operating procedures. The requirements depend on both the size of the airport and the resources the locality has agreed to make available as needed. In 1997, FAA compared airport rescue and fire-fighting missions and standards for civilian airports with DOD’s for defense installations and reported that DOD’s requirements were not applicable to civilian airports. In 1988, and again in 1998, Transport Canada Civil Aviation also studied its rescue and fire-fighting operations and concluded that the expenditure of resources for such unlikely occurrences was difficult to justify from a benefit-cost perspective. This conclusion highlighted the conflict between safety and cost in attempting to define rescue and fire-fighting requirements. A coalition of union organizations and others concerned about aviation safety released a report critical of FAA’s standards and operational regulations in 1999. According to the report, FAA’s airport rescue and fire-fighting regulations were outdated and inadequate. In 2002, FAA incorporated measures recommended by NTSB into FAA’s Aeronautical Information Manual, Official Guide to Basic Flight Information and Air Traffic Control Procedures. These measures (1) designate a radio frequency at most major airports to allow direct communication between airport rescue and fire-fighting personnel and flight crewmembers in the event of an emergency and (2) specify a universal set of hand signals for use when radio communication is lost.
In March 2001, FAA responded to the reports criticizing its airport rescue and fire-fighting standards by tasking its Aviation Regulatory Advisory Committee to review the agency’s rescue and fire-fighting requirements to identify measures that could be added, modified, or deleted. In 2003, the committee is expected to propose requirements for the number of trucks, the amount of fire extinguishing agent, vehicle response times, and staffing at airports and to publish its findings in a notice of proposed rule-making. Depending on the results of this FAA review, additional resources may be needed at some airports. The overall cost of improving airport rescue and fire-fighting response capabilities could be a significant barrier to the further development of regulations. For example, some in the aviation industry are concerned about the costs of extending requirements to smaller airports and of appropriately equipping all airports with resources. According to FAA, extending federal safety requirements to some smaller airports would cost at least $2 million at each airport initially and $1 million annually thereafter.

This appendix presents information on the background and status of potential advancements in evacuation safety that we identified, including the following: improved passenger safety briefings; exit seat briefings; photo-luminescent floor track marking; crewmember safety and evacuation training; acoustic attraction signals; exit slide testing; overwing exit doors; evacuation procedures for very large transport aircraft; and personal flotation devices.

Federal regulations require that passengers receive an oral briefing prior to takeoff on safety aspects of the upcoming flight. FAA also requires that oral briefings be supplemented with printed safety briefing cards that pertain only to that make and model of airplane and are consistent with the air carrier’s procedures.
These two safety measures must include information on smoking, the location and operation of emergency exits, seat belts, compliance with signs, and the location and use of flotation devices. In addition, if the flight operates above 25,000 feet mean sea level, the briefing and cards must include information on the emergency use of oxygen. FAA published an Advisory Circular in March 1991 to guide air carriers’ development of oral safety briefings and cards. Primarily, the circular defines the material that must be covered and suggests material that FAA believes should be covered. The circular also discusses the difficulty in motivating passengers to attend to the safety information and suggests making the oral briefing and safety cards as attractive and interesting as possible to increase passengers’ attention. The Advisory Circular suggests, for example, that flight attendants be animated, speak clearly and slowly, and maintain eye contact with the passengers. It also suggests that multicolored safety cards with pictures and drawings be used instead of black-and-white cards. Finally, the circular suggests the use of a recorded videotape briefing because it ensures a complete briefing with good diction and allows for additional visual information to be presented to the passengers. (See fig. 11.)

Despite efforts to improve passengers’ attention to safety information, a large percentage of passengers continue to ignore preflight safety briefings and safety cards, according to a study NTSB conducted in 1999. Of 457 passengers polled, 54 percent (247) reported that they had not watched the entire briefing because they had seen it before. An additional 70 passengers indicated that the briefing provided common knowledge and therefore there was no need to watch it. Of 431 passengers who answered a question about whether they had read the safety card, 68 percent (293) indicated that they had not, many of them stating that they had read safety cards on previous flights.
Safety briefings and cards serve an important safety purpose for both passengers and crew. They are intended to prepare passengers for an emergency by providing them with information about the location and operation of exits and emergency equipment that they may have to operate—and whose location and operation may differ from one airplane to the next. Well-briefed passengers will be better prepared in an emergency, thereby increasing their chances of surviving and lessening their dependence on the crew for assistance. In its emergency evacuation study, NTSB recommended that FAA instruct airlines to “conduct research and explore creative and effective methods that use state-of-the-art technology to convey safety information to passengers.” NTSB further recommended “the presented information include a demonstration of all emergency evacuation procedures, such as how to open the emergency exits and exit the aircraft, including how to use the slides.” NTSB’s research found that passengers often view safety briefings and cards as uninteresting and the information as intuitive. FAA has requested that commercial carriers explore different ways to present the materials to their passengers, adding that more should be done to educate passengers about what to do after an accident has occurred.

Passengers seated in an exit row may be called on to assist in an evacuation. Upon a crewmember’s command or a personal assessment of danger, these passengers must decide if their exit is safe to use and then open their exit hatch or door for use during an evacuation. In October 1990, FAA required airlines to actively screen passengers occupying exit seats for “suitability” and to administer one-on-one briefings on their responsibilities. This rule does not require specific training for exit seat occupants, but it does require that the occupants be duly informed of their distinct obligations.
According to NTSB, preflight briefings of passengers in exit rows could contribute positively to a passenger evacuation. In a 1999 study, NTSB found that the individual briefings given to passengers who occupy exit seats have a positive effect on the outcome of an aircraft evacuation. The study also found that as a result of the individualized briefings, flight attendants were better able to assess the suitability of the passengers seated in the exit seats. According to FAA’s Flight Standards Handbook Bulletin for Air Transportation, several U.S. airlines have identified specific cabin crewmembers to perform “structured personal conversations or briefings,” designed to equip and prepare passengers in exit seats beyond the general passenger briefing. Also, the majority of air carriers have procedures to assist crewmembers with screening passengers seated in exit rows. FAA’s 1990 rule requires that passengers seated in exit rows be provided with information cards that detail the actions to be taken in the case of an emergency. However, individual exit row briefings, such as those recommended by NTSB, are not required. Also included on the information cards are provisions for a passenger who does not wish to be seated in the exit row to be reseated. Additionally, carriers are required to assess the exit row passenger’s ability to carry out the required functions. The extent of discussion with exit row passengers depends on each airline’s policy.

In June 1983, an Air Canada DC-9 flight from Dallas to Toronto was cruising at 33,000 feet when the crew reported a lavatory fire. An emergency was declared, and the aircraft made a successful emergency landing at the Cincinnati Northern Kentucky International Airport. The crew initiated an evacuation, but only half of the 46 persons aboard were able to escape before being overcome by smoke and fire.
In its investigation of this accident, NTSB learned that many of the 23 passengers who died might have benefited from floor track lighting. As a result, NTSB recommended that airplanes be equipped with floor-level escape markings. FAA determined that floor lighting could improve the evacuation rate by 20 percent under certain conditions, and FAA now requires all airliners to have a row of lights along the floor to guide passengers to the exit should visibility be reduced by smoke. On transport category aircraft, these escape markings, called floor proximity marking systems, typically consist of small electric lights spaced at intervals on the floor or mounted on the seat assemblies, along the aisle. The requirement for electricity to power these systems has made them vulnerable to a variety of problems, including battery and wiring failures, burned-out light bulbs, and physical disruption caused by vibration, passenger traffic, galley cart strikes, and hull breakage in accidents. Attempts to overcome these problems have led to the proposal that nonelectric, photo-luminescent (glow-in-the-dark) materials be used in the construction of floor proximity marking systems. The elements of these new systems are “charged” by the normal airplane passenger cabin lighting, including the sunlight that enters the cabin when the window shades are open during daylight hours. (See fig. 12.) Floor track marking using photo-luminescent materials is currently available but not required for U.S. commercial airliners. Performance demonstrations of photo-luminescent technology have found that strontium aluminate photo-luminescent marking systems can be effective in providing the guidance for egress that floor proximity marking systems are intended to achieve. According to industry and government officials, such photo-luminescent marking systems are also cheaper to install than electric light systems and require little to no maintenance.
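Because photo-luminescent elements are “charged” by cabin light and then gradually dim in darkness, their brightness over the course of a flight can be sketched with a simple exponential-decay model. The half-life and brightness figures below are hypothetical assumptions chosen only to illustrate the behavior; they are not measured properties of any marking product.

```python
# Hypothetical exponential-decay model of photo-luminescent brightness
# after the cabin lights go out. All parameter values are illustrative
# assumptions, not measurements of any real product.

INITIAL_LUMINANCE = 100.0   # relative brightness just after "charging"
HALF_LIFE_HOURS = 1.5       # assumed: brightness halves every 1.5 h in the dark

def luminance_after(hours_in_dark: float) -> float:
    """Relative brightness after a period without cabin light."""
    return INITIAL_LUMINANCE * 0.5 ** (hours_in_dark / HALF_LIFE_HOURS)

# Under this model, markings remain fairly bright after a short dark
# period but are far dimmer after a long overseas nighttime flight:
print(round(luminance_after(1.0), 1))   # roughly 63% of initial brightness
print(round(luminance_after(8.0), 1))   # only a few percent after 8 dark hours
```

A model like this illustrates why a marking that performs well in a demonstration shortly after charging may be less effective at the end of a long nighttime flight.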
Moreover, photo-luminescent technology weighs about 15 to 20 pounds less than electric light systems and, unlike the electric systems, illuminates both sides of the aisle, creating a pathway to the exits. Photo-luminescent floor track marking technology is mature and is currently being used by a small number of operators, mostly in Europe. In the United States, Southwest Airlines has equipped its entire fleet with the photo-luminescent system. However, the light emitted from photo-luminescent materials is not as bright as the light from electrically operated systems. Additionally, the photo-luminescent materials are not as effective when they have not been exposed to light for an extended period of time, as after a long overseas nighttime flight. The estimated retail price of an entire system, not including the installation costs, is $5,000 per airplane.

FAA requires crewmembers to attend annual training to demonstrate their competency in emergency procedures. They have to be knowledgeable and efficient while exercising good judgment. Crewmembers must know their own duties and responsibilities during an evacuation and be familiar with those of their fellow workers so that they can take over for others if necessary. The requirements for emergency evacuation training and demonstrations were first established in 1965. Operators were required to conduct full-scale evacuation demonstrations, include crewmembers in the demonstrations, and complete the demonstrations in 2 minutes using 50 percent of the exits. The purpose of the demonstrations was to test the crewmembers’ ability to execute established emergency evacuation procedures and to ensure the realistic assignment of functions to the crew.
A full-scale demonstration was required for each type and model of airplane when it first started passenger-carrying operations, increased its passenger seating capacity by 5 percent or more, or underwent a major change in the cabin interior that would affect an emergency evacuation. Subsequently, the time allowed to evacuate the cabin during these tests was reduced to 90 seconds. The aviation community took steps in the 1990s to develop a program called Crew Resource Management that focuses on overall improvements in crewmembers’ performance and flight safety strategies, including those for evacuation. FAA officials told us that they plan to emphasize the importance of effective communication between crewmembers and are considering updating a related Advisory Circular. Effective communication between cockpit and cabin crew is particularly important with the added security precautions being taken after September 11, 2001, including locking the cockpit door during flight. The traditional training initiative now has an advanced curriculum, Advanced Crew Resource Management. According to FAA, this comprehensive implementation package includes crew resource management procedures, training for instructors and evaluators, training for crewmembers, a standardized assessment of the crew’s performance, and an ongoing implementation process. This advanced training was designed and developed through a collaborative effort between the airline and research communities. FAA considers training to be an ongoing development process that provides airlines with unique crew resource management solutions tailored to their operational demands. The design of crew resource management procedures is based on principles that require an emphasis on the airline’s specific operational environment.
The procedures were developed to emphasize these crew resource management elements by incorporating them into standard operating procedures for normal as well as abnormal and emergency flight situations. Because commercial airliner accidents are rare, crewmembers must rely on their initial and recurrent training to guide their actions during an emergency. Even in light of advances and initiatives in evacuation technology, such as slides and slide life rafts, crewmembers must still assume a critical role in ensuring the safe evacuation of their passengers. Airline operators have indicated that it is very costly for them to pull large numbers of crewmembers off-line to participate in training sessions. FAA officials told us that improving flight and cabin crew communication holds promise for ensuring the evacuation of passengers during an emergency. To improve this communication and coordination between flight and cabin crew, FAA plans to update the related Advisory Circular, oversee training, and charge FAA inspectors with monitoring air carriers during flights to see that improvements are being implemented. In addition, FAA is enhancing its guidance to air carriers on preflight briefings for flight crews to sharpen their responses to emergency situations and mitigate passengers’ confusion. FAA expects this guidance to bolster the use and quality of preflight briefings between pilots and flight attendants on security, communication, and emergency procedures. According to FAA, these briefings have been shown to greatly improve the flight crew’s safety mind-set and to enhance communication.

Acoustic attraction signals make sounds to help people locate the doors in smoke or darkness, or when lights and exit signs are obscured. When activated, the devices are intended to help people determine the direction and approximate distance of the sound—and of the door.
Examples of audio attraction signals include recorded speech sounds, broadband multifrequency sounds (“white noise”), or alarm bells. Research to determine if acoustic attraction signals can be useful in aircraft evacuation has included, for example, FAA’s Civil Aeromedical Institute testing of recorded speech sounds in varying pitches, using phrases such as “This way out,” “This way,” and “Exit here.” Researchers at the University of Leeds developed Localizer Directional Sound beacons, which combine broadband, multifrequency “white noise” of between 40 Hz and 20 kHz with an alerting sound of at least one other frequency, according to the inventor (see fig. 13). The FAA study of acoustic attraction signals noted above found that in the absence of recorded speech signals, the majority of participants evacuating a low-light-level, vision-obscured cabin will head for the front exit or will follow their neighbors. In contrast, participants exposed to recorded speech sounds will select additional exits, even those in the rear of the airplane. During aircraft trials conducted by Cranfield University and University of Greenwich researchers, tests of directional sound beacons found that under cabin smoke conditions, exits were used most efficiently when the cabin crew gave directions and the directional sound beacons were activated. With this combination, the distribution of passengers to the available exits was better than with cabin crew directions alone, sound beacons alone, no cabin crew directions, or no sound beacons. Researchers found that passengers were able to identify and move toward the closest sound source inside the airplane cabin and to distinguish between two closely spaced loudspeakers. However, in 2001, Airbus conducted several evacuation test trials of audio attraction signals using an A340 aircraft. According to Airbus, the acoustic attraction signals did not enhance passengers’ orientation, and, overall, did not contribute to passengers’ safety.
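The beacon signal described above is a mix of broadband noise and at least one distinct alerting tone. A minimal, illustrative synthesis of such a mix is sketched below; this is not the Localizer beacon's actual implementation, the 3,000 Hz alerting frequency and the 70/30 mixing ratio are assumptions, and bandlimiting the noise to the 40 Hz to 20 kHz range is omitted for brevity.

```python
import math
import random

SAMPLE_RATE = 44_100  # samples per second
DURATION_S = 0.5      # length of the generated signal, in seconds

def beacon_signal(alert_freq_hz=3000.0, seed=0):
    """Illustrative directional-beacon waveform: broadband noise
    plus a single alerting tone, normalized to the range [-1, 1]."""
    rng = random.Random(seed)
    n = int(SAMPLE_RATE * DURATION_S)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        noise = rng.uniform(-1.0, 1.0)                    # broadband component
        tone = math.sin(2 * math.pi * alert_freq_hz * t)  # alerting tone
        samples.append(0.7 * noise + 0.3 * tone)          # assumed mix ratio
    peak = max(abs(s) for s in samples)
    return [s / peak for s in samples]

sig = beacon_signal()
print(len(sig))  # number of samples generated
```

In a real device the noise band and tone would be tuned so the signal remains localizable by human hearing; the sketch only shows the two-component structure the inventor describes.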
While acoustic attraction signals are currently available, further research is needed to determine if their use is warranted on commercial airliners. FAA, Transport Canada Civil Aviation, and the British Civil Aviation Authority do not currently mandate the use of acoustic attraction signals. The United Kingdom’s Air Accidents Investigation Branch made a recommendation after the fatal Boeing 737 accident at Manchester International Airport in 1985 that research be undertaken to assess the viability of audio attraction signals and other evacuation techniques to assist passengers impaired by smoke and toxic or irritant gases. The Civil Aviation Authority accepted the recommendation and sponsored research at Cranfield University; however, it concluded from the research results that the likely benefit of the technology would be so small that no further action should be taken, and the recommendation was closed in 1992. The French Direction Generale de l’Aviation Civile funded aircraft evacuation trials using directional sound beacons in November 2002, with oversight by the European Joint Aviation Authorities. The trials were conducted at Cranfield University’s evacuation simulator with British Airways cabin crew and examined eight trial evacuations by two groups of ‘passengers.’ The study surveyed the participants’ views on various aspects of their evacuation experience and measured the overall time to evacuate. The speed of evacuation was found to be biased by the knowledge passengers gained in the four successive trials, and by variations in the number of passengers participating on the 2 days (155 and 181). The four trials by each of the two groups of passengers also involved a different combination of crew directions and sound in each trial. The study concluded that the insufficient number of test sessions further contributed to bias in the results, and that further research would be needed to determine whether the devices help to speed overall evacuation.
Further research and testing are needed before acoustic attraction signals can be considered for widespread airline use. The signals may have drawbacks that would need to be addressed. For example, the Civil Aviation Authority found that placing an audio signal in the bulkhead might disorient or confuse the first few passengers who have to pass and then move away from the sound source to reach the exit. Such hesitation slowed passengers’ evacuation during testing. Researchers at the 1990 Cranfield University trials concluded that an acoustic sound signal did not improve evacuation times by a statistically significant amount, suggesting that the device might not be cost-effective. Smoke hoods are designed to provide the user with breathable, filtered air in an environment of smoke and toxic gases that would otherwise be incapacitating. More people die from smoke and toxic gases than from fire after an air crash. Because only a few breaths of the dense, toxic smoke typically found in aircraft fires can render passengers unconscious and prevent their evacuation, the wider use of smoke hoods has been investigated as a means of preventing passengers from being overcome by smoke and of giving them enough breathable air to evacuate. However, some studies have found that smoke hoods are only effective in certain types of fires and in some cases may slow the evacuation of cabin occupants. As shown in figure 14, a filter smoke hood can be a transparent bag worn over the head that fits snugly at the neck and is coated with fire-retardant material; it has a filter but no independent oxygen source and can provide breathable air by removing some toxic contaminants from the air for a period ranging from several minutes to 15 minutes, depending on the severity and type of air contamination.
The hood has a filter to remove carbon monoxide, a main direct cause of death in fire-related commercial airplane accidents, as well as hydrogen cyanide, another common cause of death, sometimes through incapacitation that can prevent evacuation. Hoods also filter carbon dioxide, chlorine, ammonia, acid gases such as hydrogen chloride and hydrogen sulfide, and various hydrocarbons, alcohols, and other solvents. Some hoods also include a filter to block particulate matter. One challenge is finding a highly accessible location for the hoods near each seat. Certain smoke hoods have been shown to filter out many contaminants typically found in smoke from an airplane cabin fire and to provide some temporary head protection from the heat of fire. In a full-scale FAA test of cabin burnthrough, toxic gases became the driving factor determining survivability in the forward cabin, reaching lethal levels minutes before the smoke and temperature rose to unsurvivable levels. A collaborative effort to estimate the potential benefits of smoke hoods was undertaken in 1986 by the British Civil Aviation Authority (CAA), the Federal Aviation Administration, the French Direction Générale de l’Aviation Civile (DGAC), and Transport Canada Civil Aviation. The resulting 1987 study examined the 20 accidents where sufficient data were available out of 74 fire-related accidents worldwide from 1966 to 1985. The results were sensitive to assumptions regarding extent of use and delays due to putting on smoke hoods. The study concluded that smoke hoods could significantly extend the time available to evacuate an aircraft and would have saved approximately 179 lives in the 20 accidents studied, assuming no delay in donning smoke hoods. Assuming a 10 percent reduction in the evacuation rate due to smoke hood use, an estimated 145 lives would have been saved in the 20 accidents with adequate data. A 15-second delay in donning the hoods would have saved an estimated 97 lives in the 20 accidents.
When the likelihood of use of smoke hoods was included in the analysis for each accident, the total net benefit was estimated at 134 lives saved in the 20 accidents. The study also estimated that an additional 228 lives would have been saved in the 54 accidents where less data was available, assuming no delay in evacuation. The U.S. Air Force and a major manufacturer are developing a drop-down smoke hood with oxygen. Because current oxygen masks in airplanes are not airtight around the mouth, they provide little protection from toxic gases and smoke in an in-flight fire. To provide protection from these hazards, as well as from decompression and postcrash fire and smoke, the Air Force’s drop-down smoke hood with oxygen uses the airplane’s existing oxygen system and can fit into the overhead bin of a commercial airliner where the oxygen mask is normally stowed. This smoke hood is intended to replace current oxygen masks but also be potentially separated from the oxygen source in a crash to provide time to evacuate. Smoke hoods are currently available and produced by several manufacturers; however, not all smoke hoods filter carbon monoxide. They are in use on many military and private aircraft, as well as in buildings. An individually purchased filter smoke hood costs about $70 or more, but according to one manufacturer, bulk-order costs have declined to about $40 per hood. In addition, the manufacturer estimated that hoods cost about $2 a year to install and $5 a year to maintain. They weigh about a pound or less and have to be replaced about every 5 years. Furthermore, airlines could incur additional replacement costs due to theft if smoke hoods were placed near passenger seats in commercial aircraft. Neither the British CAA, the FAA, the DGAC, nor Transport Canada Civil Aviation has chosen to require smoke hoods.
The British Air Accident Investigations Branch recommended that smoke hoods be considered for aircraft after the 1985 Manchester accident, in which 48 of 55 passengers died on a runway from an engine fire before takeoff, mainly from smoke inhalation and the effects of hydrogen cyanide. Additionally, a U.K. parliamentary committee recommended research into smoke hoods in 1999, and the European Transport Safety Council, an international nongovernmental organization whose mission is to provide impartial advice on transportation safety to the European Commission and Parliaments, recommended in 1997 that smoke hoods be provided in all commercial aircraft. Canada’s Transportation Safety Board has taken no official position on smoke hoods, but has noted a deficiency in cabin safety in this area and recommended further evaluation of voluntary passenger use. Although smoke hoods are currently available, they remain controversial. Passengers are allowed to bring filter type smoke hoods on an airplane, but FAA is not considering requiring airlines to provide smoke hoods for passengers. The debate over whether smoke hoods should be installed in aircraft revolves mainly around regulatory concerns that passengers will not be able to put smoke hoods on quickly in an emergency, that hoods might hinder visibility, and that any delay in putting on smoke hoods would slow down an evacuation. FAA’s and CAA’s evacuation experiments—to determine how long it takes for passengers to unpack and don smoke hoods and whether an evacuation would be slowed by their use—have reached opposite conclusions about the effects of smoke hoods on evacuation rates. The CAA has noted that delays in putting on smoke hoods by only one or two people could jeopardize the whole evacuation. An opposing view held by some experts is that the gas- and smoke-induced incapacitation of one or two passengers could also delay an evacuation.
FAA believes that an evacuation might be hampered by passengers’ inability to quickly and effectively access and don smoke hoods, by competitive passenger behavior, and by a lack of passenger attentiveness during preflight safety briefings. FAA noted that smoke hoods can be difficult to access and use even by trained individuals. However, other experts have noted that smoke hoods might reduce panic and help make evacuations more orderly, that competitive behavior already occurs in seeking access to exits in a fire, and that passengers could learn smoke hood safety procedures in the preflight safety briefings in the same way they learn to use drop-down oxygen masks or flotation devices. The usefulness of smoke hoods varies across fire scenarios depending on assumptions about how fast hoods could be put on and how much time would be available to evacuate. One expert told us that the time needed to put on a smoke hood might not be important in several fire scenarios, such as an in-flight fire in which passengers are seeking temporary protection from smoke until the airplane lands and an evacuation can begin. In other scenarios—a ground evacuation or postcrash evacuation—some experts argue that passengers in back rows or far from an exit may have their exit path temporarily blocked as other passengers exit and, because of the delay in their evacuation, may have a greater need and more time available to don smoke hoods than passengers seated near usable exits. Exit slide systems are rarely used during their operational life span. However, when such a system is used, it may be under adverse crash conditions that make it important for the system to work as designed. To prevent injury to passengers and crew escaping through floor-level exits located more than 6 feet above the ground, assist devices (i.e., slides or slide-raft systems) are used. (See fig. 15.)
The rapid deployment, inflation, and stability of evacuation slides are important to the effectiveness of an aircraft’s evacuation system, as was illustrated in the fatal ground collision of a Northwest Airlines DC-9 and a Northwest Airlines 727 in Romulus, Michigan, in December 1990. As a result of the collision, the DC-9 caught fire, but there were several slide problems that slowed the evacuation. For example, NTSB later found that the internal tailcone exit release handle was broken, thereby preventing the tailcone from releasing and the slide from deploying. Because of concerns about the operability of exit slides, NTSB recommended in 1974 that FAA improve its maintenance checks of exit slide operations. In 1983, FAA revised its exit slide requirements to specify criteria for resistance to water penetration and absorption, puncture strength, radiant heat resistance, and deployment as flotation platforms after ditching. All U.S. air carriers have an FAA-approved maintenance program for each type of airplane that they operate. These programs require that the components of an airplane’s emergency evacuation system, which includes the exit slides, be periodically inspected and serviced. An FAA principal maintenance inspector approves the air carrier’s maintenance program. According to NTSB, although most air carriers’ maintenance programs require that a percentage of emergency evacuation slides or slide rafts be tested for deployment, the percentage of required on-airplane deployments is generally very small. For example, NTSB found that American Airlines’ FAA-approved maintenance program for the A300 requires an on-airplane operational check of four slides or slide rafts per year. Delta Air Lines’ FAA-approved maintenance program for the L-1011 requires that Delta activate a full set of emergency exits and evacuation slides or slide rafts every 24 months.
Under an FAA-approved waiver for its maintenance program, United is not required to deploy any slide on its 737 airplanes. NTSB also found that FAA allows American Airlines to include inadvertent and emergency evacuation deployments toward the accomplishment of its maintenance program; therefore, it is possible that American would not purposely deploy any slides or slide rafts on an A300 to comply with the deployment requirement during any given year. In addition, NTSB found that FAA also allows Delta Air Lines to include inadvertent and emergency evacuation deployments toward the accomplishment of its maintenance program. NTSB holds that because inadvertent and emergency deployments do not occur in a controlled environment, problems with, or failures in, the system may be more difficult to identify and record, and personnel qualified to detect such failures may not be present. For example, in an inadvertent or emergency slide or slide raft deployment, observations on the amount of time it takes to inflate the slide or slide raft and on its pressure level are not likely to be documented. For these reasons, a 1999 NTSB report said that FAA’s allowing these practices could potentially leave out significant details about the interaction of the slide or slide raft with the door or how well the crew follows its training mock-up procedures. Accordingly, in 1999, NTSB recommended that FAA stop allowing air carriers to count inadvertent and emergency deployments toward meeting their maintenance program requirement because conditions are not controlled and important information (on, for example, the interface between the airplane and the evacuation slide system, timing, durability, and stability) is not collected. The recommendation continues to be open at the NTSB. NTSB officials said they would be meeting to discuss this recommendation with FAA in the near future.
Additionally, NTSB recommended that FAA, for a 12-month period, require that all operators of transport-category aircraft demonstrate the on-airplane operation of all emergency evacuation systems (including the door-opening assist mechanisms and slide or slide raft deployment) on 10 percent of each type of airplane (at least one airplane per type) in their fleets. NTSB said that these demonstrations should be conducted on an airplane in a controlled environment so that qualified personnel can properly evaluate the entire evacuation system. NTSB indicated that the results of the demonstrations (including an explanation of the reasons for any failures) should be documented for each component of the system and should be reported to FAA. Prompted by a tragedy in which 57 of the 137 people on board a British Airtours B-737 were killed because passengers found exit doors difficult to access and operate, the British Civil Aviation Authority initiated a research program to explore changes to the design of the overwing exit (Type III) door. Trained crewmembers are expected to operate most of the emergency equipment on an airplane, including most floor-level exit doors. But overwing exit doors, termed “self-help exits,” are expected to be and will primarily be opened by passengers without formal training. NTSB reported that even when flight attendants are responsible for opening the overwing exit doors, passengers are likely to make the first attempt to open the overwing exit hatches because the flight attendants are not physically located near the overwing exits. There are now two basic types of overwing exit doors—the “self-help” doors that are manually removed inward and then stowed and the newer “swing out” doors that open outward on a hinge. According to NTSB, passengers continue to have problems removing the inward-opening exit door and stowing it properly.
The manner in which the overwing exit is opened and how and where the hatch should be stowed are not intuitively obvious to passengers, nor are they easily or consistently depicted graphically. NTSB recently recommended to FAA that Type III overwing exits on newly manufactured aircraft be easy and intuitive to open and have automatic stowage out of the egress path. NTSB has indicated that the semiautomatic, fast-opening, Type III overwing exit hatch could give passengers additional evacuation time. Overwing exit doors that “swing out” on hinges rather than requiring manual removal are currently available. The European Joint Aviation Authorities (JAA) has approved the installation of these outward-opening hinged doors on new-production aircraft in Europe. In addition, Boeing has redesigned the overwing exit door for its next-generation 737 series. This redesigned, hinged door has pressurized springs so that it essentially pops up and outward, out of the way, once its lever is pulled. The exit door handle was also redesigned and tested to ensure that anyone could operate the door using either single or double handgrips. Approximately 200 people who were unfamiliar with the new design and had never operated an overwing exit tested the outward-opening exit door. These tests found that the average adult could operate the door in an emergency. The design eliminates the problem of where to stow the exit hatch because the door moves up and out of the egress route. While the new swing-out doors are available, it will take some time for them to be widely used. Because of structural difficulties and cost, the new doors are not being considered for the existing fleet. For new-production airplanes, their use is mixed because JAA requires them in Europe for some newer Boeing 737s, but FAA does not require them in the United States. However, FAA will allow their use. As a result, some airlines are including the new doors on their new aircraft, while others are not.
For example, Southwest Airlines has the new doors on its Boeing 737s. The extent to which other airlines and aircraft models will have the new doors installed remains to be seen and will likely depend on the cost of installation, the European market for the aircraft, and any additional costs to train flight attendants in their use. Airbus, a leading aircraft manufacturer, has begun building a family of A380 aircraft, also called Large Transport Aircraft (see fig. 16). Early versions of the A380, which is scheduled to begin flight tests in 2005 and enter commercial service in 2006, will have 482 to 524 seats. The standard layout of the A380-800 seats 555 passengers. Later larger configurations could accommodate up to 850 passengers. The A380 is designed to have 16 emergency doors and require 16 escape slides, compared with the 747, which requires 12. Later models of the A380 could have 18 emergency exits and escape slides. The advent of this type of Large Transport Aircraft is raising questions about how passengers will exit the aircraft in an emergency. The upper deck doorsill of the A380 will be approximately 30 feet above the ground, depending on the position (attitude) of the aircraft. According to an Airbus official responsible for exit slide design and operations, evacuation slides have to reach the ground at a safe angle even if the aircraft is tipped up; however, extra slide length is undesirable if the sill height is normal. Previously, regulations would have required slides only to touch the ground in the tip-up case, even if that meant introduction of relatively steep sliding surfaces. However, because of the sill height, passengers may hesitate before jumping and their hesitation may extend the total evacuation time.
Because some passengers may be reluctant to leap onto the slide when they can see how far it is to the ground, the design concept of the A380 evacuation slides includes blinder walls at the exit and a curve in the slide to mask the distance to the ground. A next-generation evacuation system developed by Airbus and Goodrich called the “intelligent slide” is a possible solution to the problem of the Large Transport Aircraft’s slide length. The technology is not a part of the slide, but is connected to the slide through what is called a door management system composed of sensors. The “brains” of the technology will be located inside the forward exit door of the cabin, and the technology is designed to adjust the length of the slide according to the fuselage’s tipping angle to the ground. The longest upper-deck slide for an A380 could exceed 50 feet. The A380 slides are made of a nylon-based fabric that is coated with urethane or neoprene, and they are 10 percent lighter than most other slides on the market. They have to be packed tightly into small bundles at the foot of emergency exit doors and are required to be fully inflated in 6 seconds. Officials at Airbus noted that the slides are designed to withstand the radiant heat of a postimpact fire for 180 seconds, compared with the 90 seconds required by regulators. According to a Goodrich official, FAA will require Goodrich to conduct between 2,000 and 2,500 tests on the A380 slides to make sure they can accommodate a large number of passengers quickly and withstand wind, rain, and other weather conditions. The upper-level slides, which are wide enough for two people, have to enable the evacuation of 140 people per minute, according to Airbus officials. 
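The geometry behind adjusting slide length to the aircraft's attitude is straightforward trigonometry: a straight slide must be long enough that its foot reaches the ground at a safe angle, so the required length is the sill height divided by the sine of the slide angle. The sketch below illustrates the calculation; the 35-degree angle is an illustrative assumption, not the certified value, and real slides also curve and flatten near the ground.

```python
import math

def required_slide_length(sill_height_ft, slide_angle_deg):
    """Length of a straight slide from an exit sill at
    sill_height_ft above the ground, inclined at
    slide_angle_deg from horizontal: length = height / sin(angle)."""
    return sill_height_ft / math.sin(math.radians(slide_angle_deg))

# Upper-deck sill roughly 30 ft; 35 degrees is an assumed,
# illustrative sliding angle (the actual angle is set by regulation).
print(round(required_slide_length(30.0, 35.0), 1))  # ~52.3 ft
```

At an assumed 35-degree angle, a 30-foot sill already implies a slide over 50 feet long, consistent with the slide lengths Airbus describes; tipping the fuselage raises the effective sill height, which is why the "intelligent slide" senses the tipping angle and adjusts its length.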
An issue to be resolved is whether a full-scale demonstration test will be required or whether a partial test using a certain number of passengers, supplemented by a computer simulation of an evacuation of 555 passengers, can effectively demonstrate an evacuation from this type of aircraft. Airbus officials told us that a full-scale demonstration could result in injuries to the participants and is therefore not the preferred choice. Officials at the Association of Flight Attendants have expressed concern that there has not been a full-scale evacuation demonstration involving the A380. They are concerned that computer modeling might not accurately reflect the human experience of jumping onto a slide from that height. In addition, they are concerned that other systems involved in emergency exiting, such as the communication systems, need to be tested under controlled conditions. As a result, they believe a full-scale demonstration under the current 90-second standard is necessary. All commercial aircraft that fly over water more than 50 nautical miles from the nearest shore are required to be equipped with flotation devices for each occupant of the airplane. According to FAA, 44 of the 50 busiest U.S. airports are located within 5 miles of a significant body of water. In addition, life vests, seat cushions, life rafts, and exit slides may be used as flotation devices for water emergencies. FAA policies dictate that if personal flotation devices are installed beneath the passenger seats of an aircraft, the devices must be easily retrievable. Determinations of compliance with this requirement are based on the judgment of FAA as the certifying authority. FAA is conducting research and testing on the location and types of flotation devices used in aircraft. When it has completed this work, it is likely to provide additional guidance to ensure that the devices are easily retrievable and usable.
FAA’s research is designed to analyze human performance factors, such as how much time passengers need to retrieve their vests, whether and how the cabin environment physically interferes with their efforts, and how physically capable passengers are of reaching their vests while seated and belted. FAA is reviewing four different life vest installation methods and has conducted tests on 137 human subjects. According to an early analysis of the data, certain physical installation features significantly affect both the ability of a typical passenger to retrieve an underseat life vest and the ease of retrieval. This work may lead to additional guidance on the location of personal flotation devices. FAA’s research may also indicate a need for additional guidance on the use of personal flotation devices. In a 1998 report on ditching aircraft and water survival, FAA found that airlines differed in their instructions to passengers on how to use personal flotation devices. For example, some airlines advise that passengers hold the cushions in front of their bodies, rest their chins on the cushions, wrap their arms around the cushions with their hands grasping the outside loops, and float vertically in the water. Other airlines suggest that passengers lie forward on the cushions, grasp and hold the loops beneath them, and float horizontally. FAA also reported that airlines’ flight attendant training programs differed in their instructions on how to don life vests and when to inflate them. This appendix presents information on the background and status of potential advancements in general cabin occupant safety and health that we identified, including the following: advanced warnings of turbulence; preparations for in-flight medical emergencies; reductions in health risks to passengers with certain medical conditions, including deep vein thrombosis; and improved awareness of radiation exposure.
This appendix also discusses occupational safety and health standards for the flight attendant workforce. According to FAA, the leading cause of in-flight injuries for cabin occupants is turbulence. In June 1995, following two serious events involving turbulence, FAA issued a public advisory to airlines urging the use of seat belts at all times when passengers are seated, but concluded that the existing rules did not require strengthening. In May 2000, FAA instituted a public awareness campaign, called Turbulence Happens, to stress the importance of wearing safety belts to the flying public. Because of the potential for injury from unexpected turbulence, ongoing research is attempting to find ways to better identify areas of turbulence so that pilots can take corrective action to avoid it. In addition, FAA’s July 2003 draft strategic plan targets a 33 percent reduction in the number of turbulence injuries to cabin occupants by 2008—from an average of 15 injuries per year for fiscal years 2000 through 2002 to no more than 10 injuries per year. FAA is currently evaluating new airborne weather radar and other technologies to improve the timeliness of warnings to passengers and flight attendants about impending turbulence. For example, the Turbulence Product Development Team, within FAA’s Aviation Weather Research Program, has developed a system to measure turbulence and downlink the information in real time from commercial air carriers. The International Civil Aviation Organization has approved this system as an international standard. Ongoing research includes (1) detecting turbulence in flight and reporting its intensity to augment pilots’ reports, (2) detecting turbulence remotely from the ground or in the air using radar, (3) detecting turbulence remotely using LIDAR or the Global Positioning System’s constellation of satellites, and (4) forecasting the likelihood of turbulence over the continental United States during the next 12 hours.
Prototypes of the in-flight detection system have been installed on 100 737-300s operated by United Airlines, and two other domestic air carriers have expressed an interest in using the prototype. FAA also plans to improve (1) training on standard operating procedures to reduce injuries from turbulence, (2) the dissemination of pilots’ reports of turbulence, and (3) the timeliness of weather forecasts to identify turbulent areas. Furthermore, FAA encourages and some airlines require passengers to keep their seatbelts fastened when seated to help avoid injuries from unexpected turbulence. Currently, pilots rely primarily on other pilots to report when and where (e.g., specific altitudes and routes) they have encountered turbulent conditions en route to their destinations; however, these reports do not accurately identify the location, time, and intensity of the turbulence. Further research and testing will be required to develop technology to accurately identify turbulence and to make the technology affordable to the airlines, which would ultimately bear the cost of upgrading their aircraft fleets. The Aviation Medical Assistance Act of 1998 directed FAA to determine whether the current minimum requirements for air carriers’ emergency medical equipment and crewmember emergency medical training should be modified. In accordance with the act, FAA collected data for a year on in-flight deaths and near deaths and concluded that enhancements to medical kits and a requirement for airlines to carry automatic external defibrillators were warranted. Specifically, the agency found that these improvements would allow cabin crewmembers to deal with a broader range of in-flight emergencies. On April 12, 2001, FAA issued a final rule requiring air carriers to equip their aircraft with enhanced emergency medical kits and automatic external defibrillators by May 12, 2004. Most U.S. airlines have installed this equipment in advance of the deadline.
In the future, new larger aircraft may require additional improvements to meet passengers’ medical needs. For example, new large transport aircraft, such as the Airbus A-380, will have the capacity to carry about 555 people on long-distance flights. Some aviation safety experts are concerned that with the large number of passengers on these aircraft, the number of in-flight medical emergencies will increase and additional precautions for in-flight medical emergencies (e.g., dedicating an area for passengers who experience medical emergencies in flight) should be considered. Airbus has proposed a medical room in the cabin of its A-380 as an option for its customers. Passengers with certain medical conditions (e.g., heart and lung diseases) can be at higher risk of health-related complications from air travel than the general population. For example, passengers who have limited heart or lung function or have recently had surgery or a leg injury can be at greater risk of developing a condition known as deep vein thrombosis (DVT) or travelers’ thrombosis, in which blood clots can develop in the deep veins of the legs from extended periods of inactivity. Air travel has not been linked definitively to the development of DVT, but remaining seated for extended periods of time, whether in one’s home or on a long-distance flight, can cause blood to pool in the legs and increase the chances of developing DVT. In a small percentage of cases, the clots can break free and travel to the lungs, with fatal results. In addition, the reduced levels of oxygen available to passengers in flight can have detrimental health effects on passengers with heart, circulatory, and respiratory disorders because lower levels of oxygen in the air produce lower levels of oxygen in the body—a condition known as hypoxia.
Furthermore, changes in cabin pressure (primarily when the aircraft ascends and descends) can negatively affect ear, nose, and throat conditions and pose problems for those flying after certain types of surgery (e.g., abdominal, cardiac, and eye surgery). Information on the potential effects of air travel on passengers with certain medical conditions is available; however, additional research, such as on the potential relationship between DVT and air travel, is ongoing. The National Research Council, in a 2001 report on airliner cabin air quality, recommended, among other things, that FAA increase efforts to provide information on health issues related to air travel to crewmembers, passengers, and health professionals. According to FAA’s Federal Air Surgeon, since this recommendation was received, the agency has redoubled its efforts to make information and recommendations on air travel and medical issues available through its Web site www.cami.jccbi.gov/aam-400/PassengerHandS.htm. This site also includes links to the Web sites of other organizations with safety and health information for air travelers, such as the Aerospace Medical Association, the American Family Physician (Medical Advice for Commercial Air Travelers), and the Sinus Care Center (Ears, Altitude, and Airplane Travel), and videos on safety and health issues for pilots and air travelers. The Aerospace Medical Association’s Web site, http://www.asma.org/publication.html, includes guidance for physicians to use in advising passengers about the potential risks of flying based on their medical conditions, as well as information for passengers to use in determining whether air travel is advisable given their medical conditions. 
Furthermore, some airlines currently encourage passengers to do exercises while seated, to get up and walk around during long flights, or both, to improve blood circulation; however, walking around the airplane can also put passengers at risk of injuries from unexpected turbulence. In addition, a prototype seat has been designed with embedded sensors that record a passenger’s movement and send this information to the cabin crew for monitoring. The crew would then be able to track passengers who have been seated for a long time and could suggest that these passengers exercise in their seats or walk in the cabin aisles to enhance circulation. While FAA’s Web site on passenger and pilot safety and health provides links to related Web sites and videos (e.g., on cabin occupant safety and health issues), the agency has not historically tracked who uses the site or how frequently it is visited, so it cannot monitor the traveling public’s awareness and use of this information. Agency officials told us that they plan to install a counter capability on the Civil Aerospace Medical Institute Web site by the end of August 2003 to track the number of visits to its aircrew and passenger health and safety Web site. The World Health Organization has initiated a study to help determine if a linkage exists between DVT and air travel. Further, FAA developed a brochure on DVT that has been distributed to aviation medical examiners and cited in the Federal Air Surgeon’s Bulletin. The brochure is aimed at passengers rather than airlines and suggests exercises that can be done to promote circulation. Pilots, flight attendants, and passengers who fly frequently are exposed to cosmic radiation at higher cumulative levels than the average airline passenger and the general public living at or near sea level. This is because they routinely fly at high altitudes, which places them closer to outer space, the primary source of this radiation.
High levels of radiation have been linked to an increased risk of cancer and potential harm to fetuses. The amount of radiation that flight attendants and frequent fliers are exposed to—referred to as the dose—depends on four primary factors: (1) the amount of time spent in flight; (2) the latitude of the flight— exposure increases at higher latitudes; for example, at the same altitude, radiation levels at the poles are about twice those at the equator; (3) the altitude of the flight—exposure is greater at high altitudes because the layer of protective atmosphere becomes thinner; and (4) solar activity— exposure is higher when solar activity increases, as it does every 11 years or so. Peak periods of solar activity, which can increase exposure to radiation by 10 to 20 times, are sometimes called solar storms or solar flares. FAA’s Web site currently makes available guidance on radiation exposure levels and risks for flight and cabin crewmembers, as well as a system for calculating radiation doses from flying specific routes and specific altitudes. To increase crewmembers’ awareness of in-flight radiation exposure, FAA issued two Advisory Circulars for crewmembers. The first Advisory Circular, issued in 1990, provided information on (1) cosmic radiation and air shipments of radioactive material as sources of radiation exposure during air travel; (2) guidelines for exposure to radiation; (3) estimates of the amounts of radiation received on air carriers’ flights on various routes to and from, or within, the contiguous United States; and (4) examples of calculations for estimating health risks from exposure to radiation. The second Advisory Circular, issued in 1994, recommended training for crewmembers to inform them about in-flight radiation exposure and known associated health risks and to assist them in making informed decisions about their work on commercial air carriers. 
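The four exposure factors above can be combined in a toy relative-dose model. The sketch below is not FAA's dose-calculation system: the function name and every coefficient are assumptions chosen only to illustrate how the factors interact, anchored loosely to the "about twice at the poles" relationship cited above.

```python
# Illustrative toy model (NOT FAA's dose calculator): combines the four
# exposure factors described above. All coefficients are assumptions.

def relative_dose(hours, altitude_ft, latitude_deg, solar_multiplier=1.0):
    """Toy relative cosmic-radiation dose in arbitrary units.

    hours            -- time spent in flight (factor 1)
    latitude_deg     -- |latitude|; poles roughly twice the equator (factor 2)
    altitude_ft      -- cruise altitude; thinner atmosphere above (factor 3)
    solar_multiplier -- > 1 during periods of high solar activity (factor 4)
    """
    # Assumption: dose doubles roughly every 6,000 ft above 25,000 ft.
    altitude_factor = 2 ** max(0.0, (altitude_ft - 25_000) / 6_000)
    # Scale linearly from 1.0 at the equator to 2.0 at the poles, reflecting
    # the "about twice" relationship cited above.
    latitude_factor = 1.0 + min(abs(latitude_deg), 90.0) / 90.0
    return hours * altitude_factor * latitude_factor * solar_multiplier

# A polar route accrues more dose than an equatorial route of equal length.
polar = relative_dose(hours=8, altitude_ft=37_000, latitude_deg=80)
equatorial = relative_dose(hours=8, altitude_ft=37_000, latitude_deg=5)
assert polar > equatorial
```

Under this toy model, a high-latitude flight at the same altitude and duration yields nearly twice the relative dose of an equatorial one, which is why crews on polar long-haul routes accumulate exposure fastest.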
The circular provided a possible outline of courses but left it to air carriers to gather the subject matter materials. To facilitate the monitoring of radiation exposure levels by airliner crewmembers and the public (e.g., frequent fliers), FAA has developed a computer model, which is publicly available via the agency’s Web site. This Web site also provides guidance and recommendations on limiting radiation exposure. However, it is unclear to what extent flight attendants, flight crews, and frequent fliers are aware of and use FAA’s Web site to track the radiation exposure levels they accrue from flying. Agency officials told us that they plan to install a counter capability on its Civil Aerospace Medical Institute Web site by the end of August 2003 to track the number of visits to its aircrew and passenger health and safety Web site. FAA also plans to issue an Advisory Circular by early next year that incorporates the findings of a just-completed FAA report, “What Aircrews Should Know About Their Occupational Exposure to Ionizing Radiation.” This Advisory Circular will include recommended actions for aircrew and information on notifying aircrew of solar flare events. While FAA provides guidance and recommendations on limiting the levels of cosmic radiation that flight attendants and pilots are exposed to, it has not developed any regulations. In contrast, the European Union issued a directive for workers in May 1996, including air carrier crewmembers (cabin and flight crews) and the general public, on basic safety and health protections against dangers arising from ionizing radiation. This directive set dose limits and required air carriers to (1) assess and monitor the exposure of all crewmembers to avoid exceeding exposure limits, (2) work with those individuals at risk of high exposure levels to adjust their work or flight schedules to reduce those levels, and (3) inform crewmembers of the health risks that their work involves from exposure to radiation.
It also required airlines to work with female crewmembers, when they announce a pregnancy, to avoid exposing the fetus to harmful levels of radiation. This directive was binding for all European Union member states and became effective in May 2000. According to European safety officials, pregnant crewmembers are often given the option of an alternative job with the airline on the ground to avoid radiation exposure to their fetuses. Furthermore, when flight attendants and pilots reach recommended exposure limits, European air carriers work with them to limit or change their subsequent flights and destinations to minimize exposure levels for the balance of the year; some air carriers ground crewmembers when they reach annual exposure limits. In 1975, FAA assumed responsibility from the Occupational Safety and Health Administration (OSHA) for establishing safety and health standards for flight attendants. However, FAA has only recently begun to take action to provide this workforce with OSHA-like protections. For example, in August 2000, FAA and OSHA entered into a memorandum of understanding and issued a joint report in December 2000 that identified safety and health concerns for the flight attendant workforce and the extent to which OSHA-type standards could be used without compromising aviation safety. On September 29, 2001, the DOT Office of the Inspector General (DOT IG) reported that FAA had made little progress toward providing flight attendants with workplace protections and urged FAA to address the recommendations in the December 2000 report and move forward with setting safety and health standards for the flight attendant workforce. In April 2002, the DOT IG reported that FAA and OSHA had made no progress since it issued its report in September 2001.
According to FAA officials, the joint FAA and OSHA effort was put on hold because of other priorities that arose in response to the events of September 11, 2001. FAA has not yet established occupational safety and health standards to protect the flight attendant workforce, but it is conducting research and collecting data on flight attendants’ injuries and illnesses. On March 4, 2003, FAA announced the creation of a voluntary program for air carriers, called the Aviation Safety and Health Partnership Program. Through this program, the agency intends to enter into partnership agreements with participating air carriers, which will, at a minimum, make data on their employees’ injuries and illnesses available to FAA for collection and analysis. FAA will then establish an Aviation Safety and Health Program Aviation Rule-Making Committee to provide advice and recommendations to (1) develop the scope and core elements of the partnership program; (2) review and analyze the data on employees’ injuries and illnesses; (3) identify the scope and extent of systematic trends in employees’ injuries and illnesses; (4) recommend remedies to FAA that use all current FAA protocols, including rule-making activities if warranted, to abate hazards to employees; and (5) create any other advisory and oversight functions that FAA deems necessary. FAA plans to select members to provide a balance of viewpoints, interests, and expertise. The program preserves FAA’s complete and exclusive responsibility for determining whether proposed abatements of safety and health hazards would compromise or negatively affect aviation safety. FAA is also funding research through the National Institute for Occupational Safety and Health (NIOSH) to, among other things, determine the effects of flying on the reproductive health of flight attendants; much of this research has been completed. FAA also plans to monitor cabin air quality on a selected number of flights, which will help it set standards for the flight attendant workforce.
The Association of Flight Attendants has collected a large body of data on flight attendants’ injuries and illnesses, which it considers sufficient for use in establishing safety and health standards for its workforce. Officials from the association do not believe that FAA needs to collect additional data before starting the standard-setting process. The European Union has occupational safety and health standards in place to protect flight attendants, including standards for monitoring their levels of radiation exposure. An official from an international association of flight attendants told us that while flight attendants in Europe have concerns similar to those of flight attendants in the United States (e.g., concerns about air quality in airliner cabins), the European Union places a heavier emphasis on worker safety and health, including safety and health protections for flight attendants. The following illustrates how a cost analysis might be conducted on each of the potential advancements discussed in this report. Costs estimated through this analysis could then be weighed against the potential lives saved and injuries avoided from implementing the advancements. This methodology would allow advancements to be compared using comparable cost data that, when combined with similar analyses of effectiveness, would help decisionmakers determine which advancements would be most effective in saving lives and avoiding injuries, taking into account their costs. The methodology provides for developing a cost estimate despite significant uncertainties by making use of historical data (e.g., historical variations in fuel prices) and best engineering judgments (e.g., how much weight an advancement will add and how much it will cost to install, operate, and maintain). The methodology formally takes into account the major sources of uncertainty and from that information develops a range of cost estimates, including a most likely cost estimate.
Through a common approach for analyzing costs, the methodology facilitates the development of comparable estimates. This methodology can be applied to advancements in various stages of development. Inflatable lap belts are designed to protect passengers from a fatal impact with the interior of the airplane, the most common cause of death in survivable accidents. Inflatable seat belts adapt advanced automobile technology to airplane seats in the form of seat belts with air bags embedded in them. Several hundred of these seatbelt airbags have been installed in commercial airliners in bulkhead rows. We calculated that requiring these belts on an average-sized airplane in the U.S. passenger fleet would be likely to cost from $98,000 to $198,000 and to average about $140,000 over the life of the airplane. On an annual basis, the cost would be likely to range from $8,000 to $17,000 and to average $12,000. We considered several factors to explain this range of possible costs. The installation price of these belts is subject to uncertainty because of their limited production to date. In addition, these belts add weight to an aircraft, resulting in additional fuel costs. Fuel costs depend on the price of jet fuel and on how many hours the average airplane operates, both subject to uncertainty. Table 5 lists the results of our cost analysis for an average-sized airplane in the U.S. fleet. According to our analysis, the life-cycle and annualized cost estimates in table 5 are influenced most by variations in jet fuel prices, followed by the average number of hours flown per year and the installation price of the belts. The cost per ticket is influenced most by variations in jet fuel prices, followed by the average number of hours flown per year, the number of aircraft in the U.S. fleet, and the number of passenger tickets issued. To analyze the cost of inflatable lap belts, we collected data on key cost variables from a variety of sources.
Information on the belts’ installation price, annual maintenance and refurbishment costs, and added weight was obtained from belt manufacturers. Historical information on jet fuel prices, extra gallons of jet fuel consumed by a heavier airplane, average hours flown per year, average number of seats per airplane, number of airplanes in the U.S. fleet, and number of passenger tickets issued per year was obtained from FAA and DOT’s Office of Aviation Statistics. To account for variation in the values of these cost variables, we performed a Monte Carlo simulation. In this simulation, values were randomly drawn 10,000 times from probability distributions characterizing possible values for the number of seat belts per airplane, seat belt installation price, jet fuel price, number of passenger tickets, number of airplanes, and hours flown. This simulation resulted in forecasts of the life-cycle cost per airplane, the annualized cost per airplane, and the cost per ticket. Major assumptions in the cost analysis are described by probability distributions selected for these cost variables. For jet fuel prices, average number of hours flown per year, and average number of seats per airplane, historical data were matched against possible probability distributions. Mathematical tests were performed to find the best fit between each probability distribution and the data set’s distribution. For the installation price, number of passenger tickets, and number of airplanes, less information was available. For these variables, we selected probability distributions that are widely used by researchers. Table 6 lists the type of probability distribution and the relevant parameters of each distribution for the cost variables. In addition to those named above, Chuck Bausell, Helen Chung, Elizabeth Eisenstadt, David Ehrlich, Bert Japikse, Sarah Lynch, Sara Ann Moessbauer, and Anthony Patterson made key contributions to this report.
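The Monte Carlo procedure described above, in which cost-variable values are drawn repeatedly from probability distributions to produce a range of cost forecasts, can be sketched in a few lines. The sketch below is illustrative only: the distribution shapes and every parameter value are placeholders, not the fitted distributions or figures GAO used.

```python
# Simplified sketch of the Monte Carlo cost simulation described above.
# Distribution choices and parameter values are illustrative placeholders,
# not the actual figures from GAO's analysis.
import random

N_DRAWS = 10_000          # GAO drew values 10,000 times
LIFETIME_YEARS = 20       # assumed airplane service life

random.seed(0)
life_cycle_costs = []
for _ in range(N_DRAWS):
    seats = random.triangular(120, 220, 180)           # seats per airplane
    install_price = random.triangular(50, 150, 90)     # $ per belt (uncertain)
    fuel_price = random.lognormvariate(0.0, 0.25)      # $ per gallon
    hours_flown = random.triangular(2000, 3500, 3000)  # hours flown per year
    extra_gallons_per_hour = 1.5                       # added-weight fuel burn

    install = seats * install_price
    annual_fuel = fuel_price * extra_gallons_per_hour * hours_flown
    life_cycle_costs.append(install + LIFETIME_YEARS * annual_fuel)

life_cycle_costs.sort()
mean = sum(life_cycle_costs) / N_DRAWS
low = life_cycle_costs[int(0.05 * N_DRAWS)]   # 5th percentile
high = life_cycle_costs[int(0.95 * N_DRAWS)]  # 95th percentile
print(f"life-cycle cost per airplane: ${low:,.0f} to ${high:,.0f}, "
      f"mean ${mean:,.0f}")
```

Reporting a low and high percentile alongside the mean mirrors the range-plus-most-likely-estimate format of the results reported above (e.g., $98,000 to $198,000, averaging about $140,000).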
The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.

Airline travel is one of the safest modes of public transportation in the United States. Furthermore, there are survivors in the majority of airliner crashes, according to the National Transportation Safety Board (NTSB). Additionally, more passengers might have survived if they had been better protected from the impact of the crash, smoke, or fire or better able to evacuate the airliner.
As requested, GAO addressed (1) the regulatory actions that the Federal Aviation Administration (FAA) has taken and the technological and operational improvements, called advancements, that are available or are being developed to address common safety and health issues in large commercial airliner cabins and (2) the barriers, if any, that the United States faces in implementing such advancements. FAA has taken a number of regulatory actions over the past several decades to address safety and health issues faced by passengers and flight attendants in large commercial airliner cabins. GAO identified 18 completed actions, including those that require safer seats, cushions with better fire-blocking properties, better floor emergency lighting, and emergency medical kits. GAO also identified 28 advancements that show potential to further improve cabin safety and health. These advancements vary in their readiness for deployment. Fourteen are mature, currently available, and used in some airliners. Among these are inflatable lap seat belts, exit doors over the wings that swing out on hinges instead of requiring manual removal, and photoluminescent floor lighting. The other 14 advancements are in various stages of research, engineering, and development in the United States, Canada, or Europe. Several factors have slowed the implementation of airliner cabin safety and health advancements. For example, when advancements are ready for commercial use, factors that may hinder their implementation include the time it takes for (1) FAA to complete the rule-making process, (2) U.S. and foreign aviation authorities to resolve differences between their respective requirements, and (3) the airlines to adopt or install advancements after FAA has approved their use. 
When advancements are not ready for commercial use because they require further research, FAA's processes for setting research priorities and selecting research projects may not ensure that the limited federal funding for cabin safety and health research is allocated to the most critical and cost-effective projects. In particular, FAA does not obtain autopsy and survivor information from NTSB after it investigates a crash. This information could help FAA identify and target research to the primary causes of death and injury. In addition, FAA does not typically perform detailed analyses of the costs and effectiveness of potential cabin occupant safety and health advancements, which could help it identify and target research to the most cost-effective projects.
The Coast Guard’s fiscal year 2007 budget request shows continued growth but at a more moderate pace than that of the past 2 years. The current budget request reflects a proposed increase of about $328 million, compared to increases for each of the past 2 budget years that exceeded $500 million for each year. (See fig. 1.) About $5.5 billion, or more than 65 percent of the total funding request of $8.4 billion, is for operating expenditures. The acquisition, construction, and improvements (AC&I) account amounts to another $1.2 billion, or about 14 percent, and the remainder is primarily for retiree pay and healthcare fund contributions. (See app. II for more detail on the Coast Guard’s fiscal years 2002-2007 budget accounts.) If the Coast Guard’s total budget request is granted, overall funding will have increased by more than 50 percent since fiscal year 2002, an increase of $2.82 billion. According to Coast Guard officials, much of the additional $328 million in this fiscal year’s budget request, which is about 4 percent over and above the fiscal year 2006 budget of $8.1 billion, covers such things as salary and benefit increases and maintenance. In addition, more than $57 million of this increase is to establish a permanent National Capital Region Air Defense program to enforce the National Capital Region no-fly zone, a program previously conducted by U.S. Customs and Border Protection (CBP). By comparison, the increases for the AC&I account for this time period have been even greater than the overall funding increase, growing by 66 percent since fiscal year 2002. However, the fiscal year 2007 AC&I budget request of almost $1.2 billion represents little change in funding from the Coast Guard’s fiscal year 2006 enacted AC&I budget. 
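The growth figures cited above can be cross-checked with simple arithmetic. The sketch below takes the cited totals as given; the variable names are ours.

```python
# Cross-check of the budget figures cited above (amounts in $ billions).
request_fy2007 = 8.4          # total fiscal year 2007 request
increase_since_2002 = 2.82    # cited cumulative increase since FY2002
baseline_fy2002 = request_fy2007 - increase_since_2002

growth = increase_since_2002 / baseline_fy2002
print(f"implied FY2002 funding: ${baseline_fy2002:.2f}B, "
      f"cumulative growth: {growth:.0%}")

operating = 5.5               # operating expenditures in the FY2007 request
print(f"operating share of request: {operating / request_fy2007:.0%}")
```

The implied fiscal year 2002 baseline of about $5.58 billion yields cumulative growth just over 50 percent, and operating expenditures come to just over 65 percent of the request, consistent with the figures in the testimony.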
Even with sustained homeland security responsibilities, aging assets, and a particularly destructive hurricane season stretching resources across the agency, in fiscal year 2005 the Coast Guard reported that 7 of its 11 programs met or exceeded program performance targets. In addition, the agency reported that it anticipates meeting the target for 1 additional program when final results become available in July 2006, potentially bringing the total met targets to 8 of 11 programs. According to Coast Guard documents, the agency missed targets for three programs—undocumented migrant interdiction, defense readiness, and living marine resources—in fiscal year 2005, as it had in some previous years. Coast Guard officials attributed these missed targets to, among other factors, the increased flow of migrants and staffing shortages for certain security units within the defense readiness program. (See app. III for more detailed information on each program.) If the Coast Guard meets 8 performance targets as it predicts, the results would represent the greatest number of performance targets met in the last 4 years. (See fig. 2.) The preliminary results of our ongoing work reviewing the Coast Guard’s six non-homeland security performance measures suggest that, for the most part, the data used for the measures are reliable and the measures themselves are sound. That is, they are objective, measurable, and quantifiable, and they cover key program activities. However, given the DHS policy of reporting only one main performance measure per program and the limits on how comprehensive a single measure is likely to be, there may be opportunities to provide additional context and information to decisionmakers about Coast Guard performance results. We will provide final results on this work in a report to be published later this summer. This overall progress came in a year when the Coast Guard faced significant additional demands brought on by Hurricane Katrina.
As it had to do when it implemented MTSA and when it conducted heightened port security patrols immediately after the September 2001 terrorist attacks, the Coast Guard found itself operating at an increased operational tempo for part of fiscal year 2005. Although the Hurricane Katrina response period was relatively brief for some missions, such as search and rescue, Coast Guard officials told us that the sheer magnitude of the response made it unique, and responding to it tested the agency’s preparedness and ability to mobilize large numbers of personnel and assets within a short time. In this effort, the Coast Guard had several responsibilities during and immediately following the hurricane: to conduct search and rescue; to direct the closing and re-opening of ports in cooperation with stakeholders (such as shipping companies, harbor police, DHS, CBP, and local fire and police departments) to ensure safety and facilitate commerce, thereby lessening the economic impact of the storm on the nation; and to monitor pollution cleanup of the many oil spills that occurred in the wake of the flooding. For the purposes of this testimony, I would like to focus on the Coast Guard’s search and rescue response. We are conducting a more complete review of the Coast Guard’s role and response to Hurricane Katrina across several mission areas under the authority of the Comptroller General, and expect to provide additional information later this summer. So far, however, this work is showing that three factors appear to have been key to the Coast Guard’s response to Hurricane Katrina: The Coast Guard was prepared to respond to search and rescue needs. Although the magnitude of Hurricane Katrina required substantial response and relief efforts, the Coast Guard was well prepared to act since it places a priority on training and contingency planning.
First and foremost, the missions the Coast Guard performed during Hurricane Katrina were the same missions that the Coast Guard trains for and typically performs on a day-to-day basis. The Coast Guard’s mission areas include, among others, search and rescue, law enforcement, regulatory functions, and, most recently, homeland security responsibilities, allowing the Coast Guard to respond and act in a myriad of situations. However, with regard to Hurricane Katrina, the magnitude of the Coast Guard’s mission activity appears noteworthy. For example, for all of 2004, according to the Coast Guard’s Fiscal Year 2005 Report, the Coast Guard responded to more than 32,000 calls for rescue assistance and saved nearly 5,500 lives. By comparison, in 17 days of Hurricane Katrina response, Coast Guard officials reported conducting over 33,500 rescues, including rescuing 24,135 people by boat and helicopter and evacuating 9,409 people from hospitals. Coast Guard officials we spoke to underscored the importance of the planning, preparation, and training that they regularly conduct that allowed them to complete the many challenging missions presented by Katrina. The Coast Guard’s organizational structure and practices facilitated the agency’s response. In terms of the Coast Guard’s organizational structure, the Coast Guard has personnel and assets throughout the United States, which allows for more flexible response to threats. 
In terms of Coast Guard practices, according to the hurricane and severe weather plans we reviewed for Coast Guard Districts 7 (Florida region) and 8 (Gulf region), and discussions we had in Washington, D.C., Virginia, Florida, Alabama, and Louisiana with Coast Guard officials responsible for implementing those plans, the Coast Guard tracks the likely path of an approaching storm, anticipates the necessary assets to address the storm’s impact, and repositions personnel and aircraft out of harm’s way, with a focus on reconstituting assets to respond to local needs once it is safe to do so. Given the magnitude of Hurricane Katrina, the Coast Guard took a more centralized approach to prioritize personnel and assets to respond, but the operational command decisions remained at the local level. That is, the Coast Guard’s Atlantic Area Command played a key role in identifying additional Coast Guard resources, and worked with District Commands to quickly move those resources to the affected Gulf region, while local operational commanders directed personnel and assets to priority missions based on their on-scene knowledge. The Coast Guard’s operational principles facilitated the agency’s actions. Throughout our field work, Coast Guard officials referred to the principles of Coast Guard operations that guide the agency’s actions. Coast Guard officials identified these principles, which ranged from the importance of having clear objectives and flexibility to managing risks and exercising restraint, as instrumental in their preparation for Hurricane Katrina. The Coast Guard prides itself on these operational principles that collectively form the foundation of Coast Guard culture and actions during operations. These principles set an expectation for individual leadership in crisis, and personnel are trained to take responsibility and action as needed based on relevant authorities and guidance. 
For example, during the initial response to Hurricane Katrina, a junior-level pilot, who first arrived on-scene in New Orleans with the planned mission of conducting an environmental inspection flight, recognized that search and rescue helicopters in the area could not communicate with officials on the ground, including those located at hospitals and at safe landing areas. This pilot took the initiative while on-scene—an operational principle—and redirected her planned mission, changing it from an environmental flight to creating the first airborne communication platform in the area. Doing so helped ensure that critical information was relayed to and from helicopter pilots conducting search and rescue so that they could more safely and efficiently continue their vital mission. When we consulted her commanding officer about these actions, he supported her decision and noted that Coast Guard personnel generally have the flexibility to divert from their intended mission to accomplish a more important mission, without obtaining advance supervisory approval. He indicated that this was not only common practice, but that it was supported by a written directive at his unit. While these operational principles were clearly important, the response to Hurricane Katrina also hinged on discipline and adherence to critical plans. For example, multiple aircraft were operating in a confined space with little separation; adherence to critical search and rescue plans, combined with experience and judgment, resulted in numerous rescues despite these difficult circumstances. While the Hurricane Katrina search and rescue effort was unprecedented, sustaining this effort might have been much more difficult if it had gone on for a much longer period. Combining a longer-term catastrophic response with the continuing needs of the agency’s day-to-day missions would be more challenging for a small service such as the Coast Guard.
Relative to other military services, the Coast Guard is small, and when resources are shifted to any one specific mission area, other mission areas may suffer. For example, Coast Guard units in Florida sent many air and surface assets to the Gulf region to respond to Hurricane Katrina. While the assets were deployed to the Gulf region, the Coast Guard noticed a spike in the level of illegal migration activity off of the Florida coast. However, once Coast Guard assets returned to the Florida region, the Coast Guard initiated a more intensive air and sea patrol schedule to clearly signal its return to the area and to focus on interdicting illegal migrants. Coast Guard organizational changes and expanded partnerships have helped the agency alleviate some of the resource pressures posed by added responsibilities and deteriorating assets, and have helped it accomplish its mission responsibilities. I would like to highlight three of these efforts: a revised field structure that consolidates decision-making processes at the operational level into a single command, a new resource for confronting and neutralizing terrorist activity, and new and stronger partnerships both within and outside DHS. In conducting our work for this hearing, we followed up with the Coast Guard to obtain an update on the implementation of a new field command structure that unifies previously disparate Coast Guard units, such as air stations, groups, and marine safety offices, into integrated commands. As we reported to you last year, the Coast Guard began making this change to improve mission performance through better coordination of Coast Guard command authority with operational resources such as boats and aircraft.
Under the previous field structure, for example, a marine safety officer who had the authority to inspect a vessel at sea or needed an aerial view of an oil spill as part of an investigation would often have to coordinate a request for a boat or an aircraft through a district office, which would obtain the resource from a group or air station. Under the realignment, these operational resources are to be available under the same commanding officer—allowing for more efficient operations. This revised structure involves dividing operations into 35 geographic “sectors.” Coast Guard officials stated that all 35 sectors have been established as of May 2006. According to Coast Guard personnel, the realignment is particularly important for coordinating with other federal, state, and local agencies, as well as meeting new homeland security responsibilities and preparing for the challenge of protecting the United States against terrorist attacks. Another initiative to protect the United States against terrorist attacks is the Coast Guard’s development and implementation of a Maritime Security Response Team (MSRT)—a prototype team similar to DOD’s counter-terrorism teams. The Coast Guard, in cooperation with DOD and other federal law enforcement agencies, plans to outfit the MSRT with specialized tactical equipment and train the team to conduct high-risk boardings of vessels and perform other offensive counter-terrorism activities within the maritime environment. The Coast Guard’s $4.7 million request for fiscal year 2007 would provide the team with chemical, biological, radiological, nuclear, and explosive detection equipment; improve the Coast Guard’s Special Missions Training Center facility; and provide additional personnel and operating capacity for a third 60-member unit, building the team toward 24/7 response capabilities.
Coast Guard officials said that once the MSRT is fully developed, it will provide active counter-terrorism and advanced interdiction operations and address capacity and capability gaps in national maritime counter-terrorism response. In addition to partnering efforts associated with the development of the first MSRT, the Coast Guard is developing other partnerships, both internal and external to DHS, designed in part to improve operational effectiveness and efficiency. For example, the Coast Guard is currently developing a pilot program to increase operational efficiencies between the Coast Guard and CBP aimed at pushing potential threats away from U.S. ports. This offshore operation, currently in a pilot stage, includes the integration of each agency’s vessel targeting efforts, unifies their boarding operations, and includes professional exchange opportunities. Although this effort is only being tested within the Pacific Area Command of the Coast Guard, according to a senior Coast Guard official, the Pacific Command intends to send its results to Coast Guard headquarters so the agency can determine how to best implement the program across the Coast Guard at a later date. In addition to partnering with other federal agencies, the Coast Guard has also initiated partnerships with both government and industry. Under regulations implementing MTSA, a Coast Guard Captain of the Port must develop an Area Maritime Security Plan in consultation with an Area Maritime Security Committee. These committees are typically composed of members from federal, local, and state governments; law enforcement agencies; maritime industry and labor organizations; and other port stakeholders that may be affected by security policies. The security plan they develop is intended to provide a communication and coordination framework for the port stakeholders and law enforcement officials to follow in addressing security vulnerabilities and responding to any incidents. 
Stakeholders in two ports we visited identified their Area Maritime Security Committees as an invaluable forum for port partners. For example, they said meetings of these committees serve as an opportunity for members of the port community to network with one another, build relationships, address various maritime-related issues, and coordinate security planning efforts. The Coast Guard has expanded its partnership with NOAA to enforce domestic fisheries regulations. NOAA operates a technology-based system, called the vessel monitoring system, to track and monitor fishing vessels. This system offers real-time data on a ship’s course and position, where the ship has requested to fish, the type of fishing requested, and the number of days the ship has been out of port. The Coast Guard uses this information to assist with its enforcement of domestic fisheries regulations by identifying vessels that may not be in compliance with domestic fisheries regulations. For example, the monitoring information will show if fishing vessels are operating within a restricted area. According to Coast Guard officials, the information shared from this partnership has allowed Coast Guard assets to be used more efficiently in checking on potentially noncompliant vessels and enforcing fishing laws. Our recent reviews indicate that while the Coast Guard has made progress in managing the Deepwater program, further actions are needed and the lessons learned from this effort have not been applied to other ongoing acquisitions. For example, even with the Coast Guard’s improved management and oversight of its Deepwater program, further steps are needed before all of our past recommendations for improving accountability and program management can be considered fully implemented. In addition, the acquisition of Fast Response Cutters has recently experienced setbacks. 
Meanwhile, the Rescue 21 program continues to be of concern as the program has been plagued by delays, technical problems, and cost escalation—issues that parallel the problems encountered in the early years of the Deepwater program. Another program, the Nationwide Automatic Identification System, is still in early development stages and specific technical system requirements remain undefined. As a result, according to Coast Guard officials, this has affected the Coast Guard’s efforts to respond to our recommendation that the agency cultivate potential partnerships in order to leverage resources toward implementing the system. Because all of these programs are important for the Coast Guard in meeting growing operational demands, they bear close monitoring to help ensure they are delivered in an efficient and effective manner. One of the largest and most significant acquisitions that the Coast Guard has undertaken is the upgrade and replacement of its Deepwater assets, an acquisition approach that has raised a number of management and accountability concerns over the past 8 years. The Coast Guard has devoted considerable attention to concerns that we and others raised, in particular to implementing recommendations for improvement. Our past concerns about the Deepwater program have been in three main areas— ensuring better program management and oversight, ensuring greater accountability on the part of the system integrator, and creating sufficient competition to help act as a control on costs—and to address these concerns, we made a total of 11 recommendations. Table 1 provides an overview of the 11 recommendations, including their current status. In short, five recommendations have been fully implemented, five have been partially implemented, and one has not been implemented. 
Three of the five partially implemented recommendations appear close to being fully implemented, in that the actions taken appear to be sufficient but results are not yet known or final procedural steps (such as issuing a policy currently in draft form) are not complete. The remaining two partially implemented recommendations, both of which deal with effective program management and contractor oversight, remain somewhat more problematic, and these are discussed further below. In both cases, however, the steps needed to fully implement these recommendations are relatively straightforward. In 2004, we reported that the integrated product teams (IPTs), the Coast Guard’s primary tool for managing the Deepwater program and overseeing contractor activities, were struggling to carry out their missions because of four major issues: (1) lack of timely charters to provide authority needed for decision making, (2) inadequate communication among team members, (3) high staff turnover, and (4) insufficient training. Despite progress in addressing these four issues, we do not consider this recommendation to be fully implemented. There are indications that the IPTs are still not succeeding in developing sufficient collaboration among subcontractors. Coast Guard officials recently reported that collaboration among the subcontractors continues to be problematic and that the system integrator wields little influence to compel decisions among them. For example, when dealing with proposed design changes for assets under construction, the system integrator has submitted the changes as two separate proposals from both first-tier subcontractors rather than coordinating the separate proposals into one coherent plan. According to Coast Guard performance monitors, because the two proposals often carry a number of overlapping work items, this approach complicates the Coast Guard’s review of the needed design change. 
Several improvements designed to address these problems are under way, but it is too early to determine if these will effectively eliminate the problems. In 2004, we reported the Coast Guard had not effectively communicated decisions on how new Deepwater and existing assets are to be integrated during the transition and whether Coast Guard or contractor personnel (or a combination of the two) will be responsible for maintenance of the Deepwater assets. For example, Coast Guard field personnel, including senior-level operators and naval engineering support command officials, said they had not received information about how they would be able to continue meeting their missions using existing assets while also being trained on the new assets. Since that time, the Coast Guard has placed more emphasis on outreach to field personnel, including surveys, face-to-face meetings, and membership in IPTs. Despite these efforts, there are indications that the actions are not yet sufficient to consider the recommendation to be fully implemented. In particular, our review of relevant documents and discussions with key personnel make clear that field operators and maintenance personnel are still concerned that their views are not adequately acknowledged and addressed, and have little information about maintenance and logistics plans for the new Deepwater assets. For example, though the first National Security Cutter is to be delivered in August 2007, field and maintenance officials have yet to receive information on plans for crew training, necessary shore facility modifications, or how maintenance and logistics responsibilities will be divided between the Coast Guard and the system integrator. According to Coast Guard officials, many of these decisions need to be made and communicated very soon in order to allow for proper planning and preparation in advance of the National Security Cutter’s delivery.
Despite improvements in Deepwater program management, the Coast Guard has encountered difficulties in the conversion and acquisition of one Deepwater asset—its Fast Response Cutter (FRC). Under the original 2002 Deepwater Implementation Plan, all 49 of the Coast Guard’s 110-foot patrol boats were to be converted into 123-foot patrol boats, with increased capabilities, as a bridging strategy until a replacement vessel, the 140-foot FRC, came on line beginning in 2018. The Coast Guard converted 8 of the 110-foot patrol boats to 123-foot boats, but discontinued further conversions because the patrol boats were experiencing technical difficulties, such as hull buckling, and were not able to meet post-September 11, 2001, mission requirements. This prompted the Coast Guard to revise this part of the Deepwater program. The 2005 Revised Deepwater Implementation Plan reflected the Coast Guard’s cancellation of further patrol boat conversions and acceleration of the design and delivery of the FRC, which was being designed to use composite materials in the hull, decks, and bulkheads. Under the 2005 revised plan, the first FRC was scheduled to come on line in 2007—11 years earlier than originally planned. In late February 2006, the Coast Guard suspended design work on the FRC because of risks with the emerging design. In particular, an independent design review by third-party consultants preliminarily demonstrated, among other things, that the FRC would be far heavier and less efficient than a typical patrol boat of similar length. As a result, the Coast Guard is now pursuing three strategies for moving forward with the FRC acquisition. The first strategy involves Integrated Coast Guard Systems, the prime contractor, purchasing design plans for and building an “off-the-shelf” patrol boat that could be adapted for Coast Guard use as a way to increase patrol hours until the FRC design could be finalized.
The Coast Guard issued a request for information in April 2006 to assess the off-the-shelf options. The second strategy is to revise the requirements of the FRC in order to allow for modifications to the current FRC design. Concurrent with the first two strategies, the Coast Guard’s third strategy is to have a third party reassess the analyses used in the decision to use composite materials for the FRC to determine if the use of composite materials will, in fact, reduce total ownership costs. The result of pursuing these strategies is that the Coast Guard would end up with two classes of FRCs. The first class would be based on an adapted design from a patrol boat already on the market, to expedite delivery, and the follow-on class would be based on revisions made to address the problems identified in the original FRC design plans. Pursuant to these three strategies, Coast Guard officials now estimate that the first FRC will likely not be delivered until late fiscal year 2009, at the earliest. GAO plans to release a report in late June 2006 providing updated information on the status of FRC design efforts. The Rescue 21 acquisition program—the Coast Guard’s effort to replace its antiquated command, control, and communication infrastructure used primarily to monitor mariner distress calls and coordinate search and rescue operations—continues to be of concern, as the program has been plagued by numerous delays, technical problems, and cost overruns. GAO’s recently released report shows that the program is about 5 years behind its originally proposed schedule for full implementation in 2006, primarily as a result of delays in development and testing of the system. In addition, these delays have raised the Coast Guard’s estimated costs for bringing Rescue 21 up to full operating capability from $250 million to $710.5 million.
Moreover, our analysis of contractor performance trends, including a significant number of contract items not completed as planned and requiring renegotiation, indicates that total acquisition cost overruns will continue, and implementation costs could reach as high as $872 million. These delays, technical problems, and cost overruns are the result of deficiencies in Coast Guard acquisition management and contractor oversight—deficiencies similar to those that we identified earlier in the Deepwater program. Such a pattern is of concern because it suggests that the Coast Guard has not translated the lessons learned from Deepwater to its overall acquisition management. In particular, deficiencies in the Rescue 21 program include common acquisition management and oversight problems, such as ineffective project monitoring and risk management, poorly defined user requirements, unrealistic schedule and cost estimates developed by the contractor, and limited executive-level oversight. And although the Coast Guard has developed the high-level requirements for Rescue 21, it has relied solely on the contractor to manage these requirements. As discussed, we found similar problems in the Deepwater program with comparable adverse impacts on cost, schedule, and results. For example, at the start of the program we identified a number of risks that would need to be addressed for the program to be successful—including ensuring that procedures and personnel are in place for managing and overseeing the contractor, and taking steps to minimize potential problems in developing new technology. Since that time, we have made numerous specific recommendations to the Coast Guard based on the deficiencies uncovered by our audits. The delays in implementing Rescue 21 mean that field units will continue to face limitations in their ability to hear boaters in distress and the agency will be subject to cost and performance challenges to maintain the legacy equipment.
For example, as a result of Rescue 21’s delay, some field units will likely continue to experience coverage gaps, limiting their ability to monitor mariners in distress, and some will continue to be at risk of performing larger and potentially more costly searches due to the legacy system’s more limited capabilities. In addition, because the legacy equipment is over 30 years old, it is at high risk for failure, which could result in costly repairs. Moreover, although the Coast Guard previously issued a moratorium on upgrades to the legacy system, delays in the Coast Guard’s implementation of Rescue 21 may require units to upgrade or install new equipment for the legacy system. This would result in further costs, and in fact, this has already occurred at some units. The importance of resolving acquisition management problems is underscored by the operational benefits that are expected to be realized from system implementation, and some of these benefits have already been achieved in a few locations where the Rescue 21 system has been used. For example, following Hurricane Katrina, the Coast Guard took advantage of Rescue 21’s capabilities to address communications challenges through an early deployment of a portable antenna to Louisiana in September 2005 to provide communications capabilities that had been lost due to the storm. In another case, the direction-finding capability of the Rescue 21 system helped the Coast Guard to rescue some stranded boaters who had inaccurately identified their location to the Coast Guard. The Coast Guard is at an early phase in developing the Nationwide Automatic Identification System (NAIS)—an important step in the overall effort to increase port safety and security by collecting, integrating, and analyzing information on vessels operating within or bound for U.S.
waters—and is pursuing partnership opportunities that could potentially accomplish NAIS installation goals more quickly and reduce installation costs to the federal government. According to the Coast Guard, NAIS will allow the Coast Guard to both receive and transmit information to vessels entering and leaving U.S. waters, supporting both MTSA and the National Plan to Achieve Maritime Domain Awareness. In July 2004, we recommended that the Coast Guard seek and take advantage of opportunities to partner with organizations willing to develop systems at their own expense as part of the acquisition process. In response, according to Coast Guard officials, the agency has begun to develop partnerships. However, officials noted that because the project and technology are still in the early stages of development, these partnerships remain limited. For example, Coast Guard officials said that because the Coast Guard still does not know all of the specific technical system requirements, they do not yet know of all the potential partners that could enable the Coast Guard to leverage resources. In addition, system requirements may change as the technology is further developed, and as a result, some current partnerships may be short-term. The Coast Guard intends to use the fiscal year 2007 budget request of $11.2 million, along with past unobligated project funding, to award a NAIS contract in fiscal year 2007 for initial design, logistics, and deployment in strategic ports and critical coastal areas of the country. According to the Coast Guard, officials are performing market research as part of the development phase of the Coast Guard and DHS major acquisition processes, and the project office is analyzing this information to determine capabilities within the market to satisfy NAIS requirements and to establish an optimal acquisition strategy. Coast Guard officials we spoke with noted that NAIS is currently in the initial stage of a major acquisition project. 
As such, the acquisition project plans for costs, schedule, and performance have not yet been established. The Coast Guard expects these project plans to be determined later this year and stated that both the baseline costs and current completion schedule are early estimates and subject to revision as final requirements mature. The Coast Guard also faces two additional challenges in managing its assets and balancing its various missions. The first challenge is to find the resources to replace some additional assets, not included in the Deepwater program, for its non-homeland security missions. Our ongoing work found that some of the Coast Guard’s existing buoy tenders and icebreakers are approaching or have exceeded their initial design service lives. The second challenge the Coast Guard faces is the addition of a new mission, defending the air space surrounding the nation’s capital, which falls outside its traditional focus on the maritime environment. While groundwork has been laid through the request of fiscal year 2007 funds to purchase the equipment necessary to carry out this new responsibility, it is likely to require additional personnel and training. To facilitate maritime mobility through its aids-to-navigation (ATON) and icebreaking missions, the Coast Guard uses a variety of assets, such as buoy tenders and icebreakers. Like the Deepwater legacy assets, many of these types of assets are approaching or have exceeded their initial design service lives. We are currently conducting work for this committee to look at the condition and the Coast Guard’s actions to upgrade or better manage these assets. While this work is still ongoing, our preliminary observations indicate that some of these assets are experiencing maintenance issues that may require additional resources in order to sustain or replace their capabilities. From 2000 to 2004, the Coast Guard’s key condition measures show a decline for some ATON and icebreaking assets. 
For ATON and icebreaking cutter assets, the key summary measure of condition—percent of time free of major casualties—fluctuated but generally remained below target levels for some asset types. According to Coast Guard officials, even though it did not have a centralized tracking system for the condition of its ATON small boat assets during this time period, the Coast Guard’s overall assessments of these smaller assets indicated that most of the asset types were in fair to poor condition. According to Coast Guard officials and documents, the reasons for their condition include the fact that many of the asset types are beyond their expected service lives and that the general workload of the assets has increased to carry out other missions, such as maritime security after September 11, 2001, or providing disaster response after events such as the recent hurricanes on the Gulf Coast. Coast Guard personnel reported to us that crew members have had to spend increasing amounts of time and resources to troubleshoot and resolve maintenance issues on the older ATON and domestic icebreaking assets. The Coast Guard personnel we met with indicated that because the systems and parts are outdated compared with the technology and equipment available today, it can be challenging and time-consuming to diagnose a maintenance issue and find parts or determine what corrective action to take. For example, the propulsion control system on the 140-foot icebreaking tugs uses circuit cards that were state-of-the-art when the tugs were commissioned in the late 1970s to 1980s but are no longer manufactured today and have been superseded by computer control systems. According to the Coast Guard personnel we met with, the lack of a readily available supply of these parts has forced maintenance personnel to order custom-made parts or refurbish the faulty ones, increasing the time and money it takes to address maintenance problems.
The personnel also told us that because such equipment is outdated, finding knowledgeable individuals to identify problems with the equipment is difficult, which further complicates the maintenance of the assets. Crews of other assets we visited also confirmed the difficulty of diagnosing problems and obtaining replacement parts for other critical subsystems such as the main diesel engines. Aware of such issues, the Coast Guard completed a mission needs analysis for ATON and domestic icebreaking assets, and developed an approach to renovate or recapitalize these assets. This analysis, which was completed in 2002, looked at the condition of the existing assets and their ability to support mission needs. The analysis concluded that all of the assets suffered in varying degrees with respect to safety, supportability, environmental compliance, and habitability, and would need replacement or rehabilitation to address these issues. In response to this analysis, the Coast Guard developed a plan to systematically replace or renovate the assets. Program officials at the Coast Guard indicated that current estimates place the total cost to carry out this plan at about $550 million. According to a Coast Guard official, although resource proposals to carry out this project had been made during the budget planning processes for fiscal years 2004, 2005, 2006, and 2007, those proposals were either deferred or terminated by DHS or the Office of Management and Budget from inclusion in the final budget requests. Preliminary observations from our review of the Coast Guard’s polar icebreaking assets revealed similar challenges for the Coast Guard to perform the maintenance needed to sustain the capabilities of these assets. As with the other older ATON and domestic icebreaking assets, the two Polar Class icebreakers that are used for breaking the channel into the Antarctic research station are reaching the end of their design service lives of 30 years. 
According to Coast Guard officials, the icebreakers’ age, combined with recent harsh ice conditions and increased operational tempo, has left the Polar Class icebreakers unable to continue the mission in the long term without a substantial investment in maintenance and equipment renewal. These officials also told us that while the hull structures are sound, critical systems such as the main gas turbine controls and the controllable pitch propeller systems have become unreliable. Corroborating this account of the icebreakers’ condition, an interim report issued in December 2005 by the National Research Council of the National Academies also found that the icebreakers have become inefficient to operate because substantial and increasing maintenance is required to keep them operating and that significant long-term maintenance had been deferred over the past several years. Given the age and obsolescence of the Polar Class icebreakers, funding for maintenance and repair has been and will likely continue to be a challenge. Coast Guard officials indicated that the cost of maintenance activity for the icebreakers required that additional funding be transferred from other Coast Guard asset maintenance accounts in previous years in order to carry out this maintenance. For fiscal years 2005 and 2006, the Coast Guard also obtained additional funds for maintenance from the National Science Foundation (NSF). The Coast Guard has considered undertaking a project to extend the service lives of the existing assets by refurbishing or replacing those systems that have reached the end of their service lives. The Coast Guard estimates that this extension project could provide an additional 25 years of service for the existing assets. The cost to carry out this project for both Polar Class icebreakers is estimated at between $552 million and $859 million.
Coast Guard capital planning documentation indicates that failure to fund this project could leave the nation without heavy icebreaking capability and could jeopardize the investment made in the nation’s Antarctic Program. According to Coast Guard officials, the agency has identified these needs but has not yet requested funds, in part because other agencies have taken financial responsibility for funding polar icebreaking assets. While the Coast Guard continues to face the challenge of performing the diverse array of responsibilities associated with its many missions, the fiscal year 2007 budget request includes initial funding for a new Coast Guard responsibility of enforcing a no-fly zone in the national capital region. The scope of the mission—intercepting slow- and low-flying aircraft—falls outside of the Coast Guard’s typical mission of protecting and preserving the nation’s ports and waterways. According to Coast Guard officials, DHS agreed to this mission through a memorandum of understanding with DOD and subsequently determined that the Coast Guard was the best-suited agency within DHS to perform the mission. Coast Guard officials also said the agency will officially take over these responsibilities from CBP in late fiscal year 2006. However, despite previous experience performing air intercept activities, according to Coast Guard officials, the new homeland security mission has required additional training and assets. The Coast Guard’s $57.4 million fiscal year 2007 budget request, the first year of a planned 2-year project, would provide funding to acquire five of the seven HH-65C helicopters needed for the mission and, according to Coast Guard officials, to update infrastructure at Air Station Atlantic City and upgrade equipment at Reagan National Airport. Officials added that efforts to train Coast Guard pilots are already under way.
While groundwork has been laid through the request of fiscal year 2007 funds to purchase the equipment necessary to carry out this new responsibility, the mission is likely to require additional personnel and training. Several of the developments we are reporting on today are good news. Despite many demands, the Coast Guard continues to make progress in meeting its performance targets, and its successful search and rescue work in responding to Hurricane Katrina is one positive aspect of what otherwise largely appears to be an ongoing tragedy. Certainly, if one measure of organizational excellence is performance in crisis, Hurricane Katrina shows that the Coast Guard is well along on that scale. Excellence must also be demonstrated in more mundane ways, however, such as how an organization manages its acquisitions. In this case, the Coast Guard needs to consistently, and from the beginning, employ widely known best practices for its acquisition management processes, particularly with respect to developing requirements, managing projects and risks, and ensuring proper executive-level oversight. Although the Coast Guard is to be complimented for its willingness to make improvements after our audits have identified problems, such as with the Deepwater program, its acquisition management would be better if the agency employed the lessons once learned and translated them into generally improved practices. Better overall practices would help to ensure that future projects will not repeat past problems and will be completed on time and within cost. The Coast Guard has clearly been at the vortex of many of the most sweeping changes in the federal government’s priorities over the past several years. “Homeland security” carries a much different tone, as well as budgetary significance, in the national consciousness after September 11, 2001.
However, dramatic infusions of money are no guarantee of success; rather, they bring added responsibility to ensure that large investments of taxpayer dollars are wisely spent. Our work has shown that the Coast Guard continues to face some challenges in balancing all of its missions and in keeping a sustained focus on managing its significant capital acquisition programs. Continued efforts are needed to sustain the progress that has been made thus far. Madam Chair and Members of the Subcommittee, this completes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have at this time. For information about this testimony, please contact Stephen L. Caldwell, Acting Director, Homeland Security and Justice Issues, at (202) 512-9610, or caldwells@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Joel Aldape, Nancy Briggs, Lisa Canini, Christopher Conrad, Adam Couvillion, Christine Davis, Josh Diosomito, Michele Fejfar, Kathryn Godfrey, Christopher Hatscher, Dawn Hoff, Lori Kmetz, Julie Leetch, Josh Margraf, Dominic Nadarski, Jason Schwartz, and Stan Stenersen. To provide a strategic overview of the President’s fiscal year 2007 budget request for the Coast Guard, we analyzed the Coast Guard’s budget justification and other financial documents provided by the Coast Guard, focusing on several areas of particular congressional interest. We also interviewed Coast Guard headquarters officials familiar with the Coast Guard’s budget and acquisition processes. To report on the Coast Guard’s progress in meeting its performance targets, we reviewed Coast Guard data and documentation addressing the status of performance targets between fiscal years 2002 and 2005.
In reporting the performance results, we did not assess the reliability of the data or the credibility of the performance measures used by the Coast Guard. Previous GAO work indicates that the Coast Guard data are sufficiently reliable for the purposes of reporting on general performance, but we have not examined the external sources of data used for these measures. In addition, we are currently conducting ongoing work examining the reliability of the data and the credibility of performance measures for the Coast Guard’s six non-homeland security programs. To determine the status of key outstanding Coast Guard recommendations, we interviewed Coast Guard headquarters officials regarding the status of the recommendations—including any progress made to implement them. We also obtained and reviewed relevant documents from the Coast Guard. To discuss the Coast Guard’s response to Hurricane Katrina, we relied on our ongoing work regarding Hurricane Katrina, with particular focus on the Coast Guard’s preparation for, response to, and recovery from the storm with respect to search and rescue, pollution response, and facilitation of maritime missions. To obtain a more detailed understanding of the Coast Guard’s response to Hurricane Katrina, we interviewed officials, reviewed documents, and conducted site visits at two Coast Guard Districts, the Atlantic Command, and Coast Guard headquarters. We also interviewed city and state officials in areas impacted by Hurricane Katrina and assisted by the Coast Guard. To determine the Coast Guard’s progress in implementing our prior recommendations related to its Deepwater program, we drew from ongoing work, which included extensive reviews and analyses of documentation provided by the Coast Guard.
We supplemented our document reviews and analyses with extensive discussions with officials at the Deepwater Program Executive Office, as well as with interviews with key Coast Guard operations and maintenance officials, contract monitors, and representatives of the system integrator. To report on the status and cost of Coast Guard’s Rescue 21 program, we drew from our work examining (1) the reasons for significant implementation delays and cost overruns against Rescue 21’s original 2002 proposal; (2) the viability of the Coast Guard’s revised cost and implementation schedule that is projected to reach full operational capability in 2011; and (3) the impact of Rescue 21’s implementation delay upon the Coast Guard’s field units which are awaiting modernization of antiquated communications equipment. This work has involved reviewing acquisition plans, implementation schedules and cost estimates for Rescue 21, as well as documentation regarding problems associated with the antiquated communications equipment. We also interviewed Coast Guard field personnel at units using the antiquated equipment and at the two sites where Rescue 21 has been deployed. We also drew from our ongoing work to report on Coast Guard’s ATON and icebreaking assets. Specifically, this work is examining (1) the recent trends in the amount of time ATON and domestic icebreaking assets have spent performing various missions and the impact of these trends on their primary missions; (2) the condition of the ATON and domestic icebreaking assets and the impact of their condition on performing their primary missions; and (3) the actions the Coast Guard has taken to upgrade or better manage its ATON and domestic icebreaking assets or use alternatives to carry out their missions. 
While conducting this work, we have interviewed Coast Guard program and maintenance officials at headquarters, area commands, and selected districts to obtain information on the missions these assets carry out, the condition of the assets, and the past and estimated future costs to maintain and deploy them. We also interviewed these officials and reviewed documents about the Coast Guard’s plans to maintain or replace these assets. We also analyzed Coast Guard data from 2000 to 2004 on condition tracking measures, resources spent to operate the assets, and the number of hours the assets spent on Coast Guard missions. Finally, we interviewed crew members of various assets, selected by nonprobability sample—to provide diversity among asset types and locations—to obtain their views on the condition and maintenance of their assets and any impact the assets’ condition may have had on their ability to carry out their missions. This testimony is based on published GAO reports and briefings, as well as additional audit work that was conducted in accordance with generally accepted government auditing standards. We conducted our work for this testimony from July 2005 through May 2006. Appendix II provides a breakdown of the Coast Guard’s fiscal year 2007 budget request. In addition to operating expenses and acquisition, construction, and improvements, the remaining Coast Guard budget accounts include areas such as environmental compliance and restoration, reserve training, and oil spill recovery. (See table 2.) Appendix III provides a detailed list of Coast Guard performance results for the Coast Guard’s 11 programs from fiscal year 2002 through 2005. Shaded entries in table 3 indicate those years that the Coast Guard reported meeting its target; unshaded entries indicate those years that the Coast Guard reported not meeting its target. Each program is discussed in more detail below. U.S. Exclusive Economic Zone Enforcement. 
The Coast Guard reported that in fiscal year 2005, it met the performance target for U.S. exclusive economic zone enforcement—defined as the number of foreign vessel incursions into the U.S. Exclusive Economic Zone—by detecting 174 foreign vessel incursions, within the performance target of 200 or fewer incursions. This represents a decrease of nearly 30 percent in foreign vessel incursions since fiscal year 2004, when the Coast Guard detected 247 incursions. Coast Guard officials attributed this decrease in incursions to many factors, including the agency’s efforts in combating incursions, such as an increased number of air and water patrols, and the likelihood that some Mexican fleets known to cross into U.S. waters were damaged during the 2005 hurricane season. Ice operations. To meet this performance target, the Coast Guard’s ice operations program must keep winter waterway closures to 8 days or fewer for severe winters and fewer than 2 days per year for average winters. According to Coast Guard documents, the agency met its target for an average winter with 0 days of waterway closures during the 2005 ice season. Search and rescue. The Coast Guard reported that performance in this area, as measured by the percentage of mariners’ lives saved from imminent danger, was 86.1 percent, just above the target of 86 percent for fiscal year 2005. This result is similar to the fiscal year 2004 result of saving 86.8 percent of lives in imminent danger. The Coast Guard identified continuing improvements in response resources and improvements made in commercial vessel and recreational boating safety as the main reasons for continuing to meet the target. Aids to navigation. According to Coast Guard reports, the aids to navigation program performance measure—that is, the 5-year average number of collisions, allisions, and groundings—improved in fiscal year 2005 by dropping to 1,825 incidents from 1,876 incidents in fiscal year 2004.
The fiscal year 2005 total was also below the target of 1,831. The Coast Guard attributes this continued decrease to a multifaceted system of prevention activities, including radio aids to navigation, communications, vessel traffic services, dredging, charting, regulations, and licensing. Ports, waterways, and coastal security. In fiscal year 2005, the Coast Guard began using a new measure of program performance—the percent reduction of terrorism-related risk in the maritime environment. According to Coast Guard officials, this measure is based on an assessment of the total amount of maritime risk under the Coast Guard’s authority. At the end of each fiscal year, the Coast Guard calculates the amount of this total risk that has been reduced by the program’s activities throughout the fiscal year. Officials added that because of the dynamic and changing nature of risk, the total amount of maritime risk under the Coast Guard’s authority—the baseline level of risk—is recalculated annually. Because this was the first year the agency used the measure, there was no previous performance baseline from which to establish a numeric annual target. However, according to the Coast Guard, in the absence of a numeric target, the program used, and met, a target of fully implementing all planned activities geared toward lowering the risk due to terrorism in the maritime domain. Marine environmental protection. The marine environmental protection measure of performance is the 5-year average annual number of oil and chemical spills greater than 100 gallons per 100 million tons shipped. According to Coast Guard reports, since fiscal year 2002, the reported average number of oil and chemical spills has dropped from 35.1 to 18.5 in fiscal year 2005. The Coast Guard identified its prevention, preparedness, and response programs—including industry partnerships and incentive programs—as reasons for the drop. Marine safety.
The marine safety measure—a 5-year average of passenger and maritime deaths and injuries— achieved its fiscal year 2005 performance target of 1,317. During fiscal year 2005 there were 1,311 incidents, a slight increase from 1,299 incidents in fiscal year 2004. Beginning in fiscal year 2006, the Coast Guard will use a revised version of this measure that includes injuries of recreational boaters as well, representing a broader and more complete view of marine safety. Illegal drug interdiction. While complete results for the illegal drug interdiction performance measure—the rate at which the Coast Guard removes cocaine bound for the U.S. via non-commercial maritime transport—are not yet available, the Coast Guard anticipates exceeding the fiscal year 2005 target of removing 19 percent or more of cocaine bound for the U.S. According to Coast Guard officials, in fiscal year 2005 the Coast Guard removed a record 137.5 metric tons of cocaine bound for the U.S. Coast Guard officials believe that this record amount of cocaine removed will result in exceeding the fiscal year 2005 performance target. Final program results are due to be published in July 2006. Defense Readiness. Defense readiness is measured by the percent of time that units meet combat readiness status at a C-2 level. The Coast Guard reported that the overall level of performance for the defense readiness program decreased for the second consecutive year from a high of 78 percent in fiscal year 2003, to 76 percent in fiscal year 2004, and 67 percent in fiscal year 2005. According to Coast Guard officials, this decline in recent years was because of staffing shortages for certain security units within the defense readiness mission. According to Coast Guard officials, the agency intends to solve these staffing problems by offering incentives for participation as well as making participation mandatory instead of voluntary, as it was previously. Living marine resources. 
The Coast Guard reported that the performance measure for living marine resources—defined as the percentage of fishermen complying with federal regulations—was 96.4 percent, just below the target of 97 percent for fiscal year 2005. This result is similar to the fiscal year 2004 result of 96.3 percent. According to Coast Guard officials, the agency missed the fiscal year 2005 target because of a variety of economic conditions and variables beyond Coast Guard control, such as hurricane damage, high fuel costs, fewer days-at-sea allocations, and lucrative seafood prices in some fisheries—which created greater incentives for fishermen to violate fishery regulations. The Coast Guard conducted 6,076 fisheries boardings in fiscal year 2005, an increase of more than 30 percent since fiscal year 2004. However, it is important to note that the compliance rate is a conservative estimate of agency performance because the Coast Guard targets vessels for boarding, thereby making it more likely that they will find vessels that are not in compliance with fishery regulations. According to Coast Guard officials, a key contributor to targeting vessels is the vessel monitoring system, which has enhanced the agency’s ability to target vessels by providing more timely information. Undocumented migrant interdiction. According to Coast Guard reports, in fiscal year 2005 the Coast Guard did not meet its performance target of interdicting or deterring at least 88 percent of undocumented aliens from Cuba, Haiti, the Dominican Republic, and China attempting to enter the U.S. through maritime routes. The Coast Guard identified 5,830 successful arrivals out of an estimated threat of 40,500 migrants yielding an interdiction and deterrence rate of 85.5 percent, a decrease from the fiscal year 2004 result of 87.1 percent. 
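As a check on the arithmetic, the interdiction and deterrence rate described above can be reproduced from the figures in the text. The sketch below is illustrative only, not the Coast Guard's actual methodology; note that these rounded inputs yield approximately 85.6 percent rather than the reported 85.5 percent, presumably because the underlying estimates are unrounded.

```python
def interdiction_rate(estimated_threat: int, successful_arrivals: int) -> float:
    """Percent of the estimated migrant flow that was interdicted or deterred."""
    interdicted_or_deterred = estimated_threat - successful_arrivals
    return interdicted_or_deterred / estimated_threat * 100

# Fiscal year 2005 figures cited in the text: an estimated threat of
# 40,500 migrants and 5,830 successful arrivals.
rate = interdiction_rate(40_500, 5_830)
print(f"{rate:.1f}%")  # prints "85.6%" from these rounded inputs
```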
According to the Coast Guard, program performance decreased because the flow of migrants was higher than in previous years, increasing from almost 22,000 in fiscal year 2002 to more than 40,000 in fiscal year 2005. Coast Guard officials said that the agency is developing a new measure to better account for both the Coast Guard’s efforts and the migrant flow to more accurately report program performance. This new measure will include migrants of all nationalities who successfully arrive in the U.S. through maritime routes.
United States Coast Guard: Improvements Needed in Management and Oversight of Rescue System Acquisition. GAO-06-623. Washington, D.C.: May 31, 2006.
Coast Guard: Changes in Deepwater Acquisition Plan Appear Sound, and Program Management Has Improved, but Continued Monitoring Is Warranted. GAO-06-546. Washington, D.C.: April 28, 2006.
Coast Guard: Progress Being Made on Addressing Deepwater Legacy Asset Condition Issues and Program Management, but Acquisition Challenges Remain. GAO-05-757. Washington, D.C.: July 22, 2005.
Coast Guard: Preliminary Observations on the Condition of Deepwater Legacy Assets and Acquisition Management Challenges. GAO-05-651T. Washington, D.C.: June 21, 2005.
Maritime Security: Enhancements Made, but Implementation and Sustainability Remain Key Challenges. GAO-05-448T. Washington, D.C.: May 17, 2005.
Coast Guard: Preliminary Observations on the Condition of Deepwater Legacy Assets and Acquisition Management Challenges. GAO-05-307T. Washington, D.C.: April 20, 2005.
Coast Guard: Observations on Agency Priorities in Fiscal Year 2006 Budget Request. GAO-05-364T. Washington, D.C.: March 17, 2005.
Coast Guard: Station Readiness Improving, but Resource Challenges and Management Concerns Remain. GAO-05-161. Washington, D.C.: January 31, 2005.
Maritime Security: Better Planning Needed to Help Ensure an Effective Port Security Assessment Program. GAO-04-1062. Washington, D.C.: September 30, 2004.
Maritime Security: Partnering Could Reduce Federal Costs and Facilitate Implementation of Automatic Vessel Identification System. GAO-04-868. Washington, D.C.: July 23, 2004.
Maritime Security: Substantial Work Remains to Translate New Planning Requirements into Effective Port Security. GAO-04-838. Washington, D.C.: June 30, 2004.
Coast Guard: Deepwater Program Acquisition Schedule Update Needed. GAO-04-695. Washington, D.C.: June 14, 2004.
Coast Guard: Station Spending Requirements Met, but Better Processes Needed to Track Designated Funds. GAO-04-704. Washington, D.C.: May 28, 2004.
Coast Guard: Key Management and Budget Challenges for Fiscal Year 2005 and Beyond. GAO-04-636T. Washington, D.C.: April 7, 2004.
Coast Guard: Relationship between Resources Used and Results Achieved Needs to Be Clearer. GAO-04-432. Washington, D.C.: March 22, 2004.
Contract Management: Coast Guard’s Deepwater Program Needs Increased Attention to Management and Contractor Oversight. GAO-04-380. Washington, D.C.: March 9, 2004.
Coast Guard: New Communication System to Support Search and Rescue Faces Challenges. GAO-03-1111. Washington, D.C.: September 30, 2003.
Maritime Security: Progress Made in Implementing Maritime Transportation Security Act, but Concerns Remain. GAO-03-1155T. Washington, D.C.: September 9, 2003.
Coast Guard: Actions Needed to Mitigate Deepwater Project Risks. GAO-01-659T. Washington, D.C.: May 3, 2001.
Coast Guard: Progress Being Made on Deepwater Project, but Risks Remain. GAO-01-564. Washington, D.C.: May 2, 2001.
Coast Guard: Strategies for Procuring New Ships, Aircraft, and Other Assets. GAO/T-HEHS-99-116. Washington, D.C.: March 16, 1999.
Coast Guard’s Acquisition Management: Deepwater Project’s Justification and Affordability Need to Be Addressed More Thoroughly. GAO/RCED-99-6. Washington, D.C.: October 26, 1998.
This is a work of the U.S. government and is not subject to copyright protection in the United States.
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Coast Guard’s fiscal year 2007 budget request totals $8.4 billion, an increase of 4 percent ($328 million) over the approved budget for fiscal year 2006 and a slowing of the agency’s budget increases over the past 2 fiscal years. This testimony, which is based on both current and past GAO work, synthesizes the results of these reviews as they pertain to meeting performance goals, adjusting to added responsibilities, acquiring new assets (especially the Deepwater program, which replaces or upgrades cutters and aircraft, and the Rescue 21 program, which modernizes rescue communications), and meeting other future challenges. According to the Coast Guard, the agency’s fiscal year 2005 performance, as self-measured by its ability to meet program goals, was the highest since the terrorist attacks in September 2001. Even with the need to sustain new homeland security duties, respond to particularly destructive hurricanes, and cope with aging assets, the Coast Guard reported meeting or exceeding performance targets for 7 of 11 mission programs, and it anticipates meeting the target for 1 more program once final results for the year are available. In particular, based on our discussions with Coast Guard and other officials, as well as our review of pertinent documents, the Coast Guard’s response to Hurricane Katrina highlighted three elements key to its mission performance: a priority on training and contingency planning, a flexible organizational structure, and the agency’s operational principles. Three organizational changes appear to be helping the Coast Guard adjust to added responsibilities.
First, according to agency officials, a realigned field structure will allow local commanders to manage resources more efficiently. Second, according to the Coast Guard, a new response team for maritime security is expected to provide greater counterterrorism capability. Finally, new and expanded partnerships inside and outside the federal government have the potential to improve operational effectiveness and efficiency. While some progress in acquisition management has been made, continued attention is warranted. Within the Deepwater program, additional action is needed before certain past recommendations can be considered as fully implemented. Also, the program recently had difficulties in acquiring Fast Response Cutters to replace aging patrol boats. For the Rescue 21 program, deficiencies in management and oversight appear similar to those that plagued the Deepwater program, leading to delays and cost overruns, and demonstrating that the Coast Guard has not translated past lessons learned into improved acquisition practices. Two additional future challenges also bear close attention: deteriorating buoy tenders and icebreakers that may need additional resources to sustain or replace them, and maintaining mission balance while taking on a new homeland security mission outside the agency's traditional focus on the maritime environment. |
Since the 1940s, VA has provided vocational rehabilitation assistance to veterans with service-connected disabilities. In 1980, Congress enacted the Veterans’ Rehabilitation and Education Amendments, which mandated a change in the mission of VA’s vocational rehabilitation program from primarily providing training to helping veterans find and maintain employment, or achieve independence in their daily lives if employment is not currently feasible. VA reported that VR&E served 90,600 participants in fiscal year 2007 at a cost of $722 million. There are 57 VA regional offices—roughly one in each state—and about 1,000 VR&E staff who work in these regional offices and at the program’s central office in Washington, D.C. VR&E regional office personnel include rehabilitation counselors, employment coordinators, and management and support staff who provide personal, face-to-face services to veterans. VR&E services can include vocational counseling, vocational evaluation, case management, education and training, job placement assistance, and independent living services. VR&E can also pay tuition, subsistence, and other expenses for veterans pursuing education and training. Veterans who have completed training programs are provided an allowance for up to 2 months while they seek employment. When necessary, VR&E can also direct veterans to other vocational and employment counselors and specialists who perform services under contract. To receive VR&E services, veterans with disabilities generally must have a 20 percent disability rating and an employment handicap. Veterans with a 10 percent disability rating may also be entitled to receive services if they have a serious employment handicap. In addition, injured servicemembers may be eligible for VR&E services before being discharged from the military if they request a memorandum rating from VA and are found to have one or more service-connected disabilities rated at 20 percent or higher.
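The basic entitlement criteria just described can be summarized in a short sketch. This is an illustration of the two rules as stated above, not VA's actual determination logic, which counselors apply case by case with additional factors.

```python
# Hedged sketch of the basic VR&E entitlement criteria described above.
# The actual determination is made case by case by a vocational
# rehabilitation counselor and involves more than these inputs.

def generally_entitled(disability_rating_pct: int,
                       employment_handicap: bool,
                       serious_employment_handicap: bool = False) -> bool:
    # A rating of 20 percent or higher plus an employment handicap ...
    if disability_rating_pct >= 20 and employment_handicap:
        return True
    # ... or a 10 percent rating with a *serious* employment handicap.
    if disability_rating_pct >= 10 and serious_employment_handicap:
        return True
    return False

print(generally_entitled(30, employment_handicap=True))  # True
print(generally_entitled(10, employment_handicap=True))  # False: at 10 percent,
                                                         # a *serious* handicap
                                                         # is required
```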
VR&E vocational rehabilitation counselors determine entitlement to services; entitlement generally provides a 12-year period of eligibility and up to 48 months of benefits. VR&E is one of many federal and state programs available to veterans with disabilities in their transition from the military to civilian life and work. Injured servicemembers can receive medical treatment from Department of Defense (DOD) military treatment facilities or Veterans Health Administration facilities, such as polytrauma centers, which may also provide vocational rehabilitation services. Within VA, the Compensated Work Therapy program primarily helps veterans with mental health diagnoses by integrating vocational rehabilitation into their overall medical treatment plan and placing them in jobs. In addition, VA works with DOD and the Department of Labor (Labor) to provide presentations to servicemembers being discharged about veterans’ benefits and services through the Transition Assistance Program and Disabled Transition Assistance Program. Labor’s Veterans’ Employment & Training Service (VETS) also provides services to veterans. Labor and VA have historically worked together to help veterans with service-connected disabilities transition to the civilian workforce. Labor administers the VETS program through grants to state workforce agencies, whose staff provide veterans with reemployment services, such as job search and placement assistance, and also market veterans to employers. In addition, VR&E works with state vocational rehabilitation agencies that receive grants from the Rehabilitation Services Administration at the Department of Education to prepare individuals with disabilities for employment through vocational rehabilitation services. For more than 25 years, we—along with others who have reviewed the program, including veteran service organizations and VA—have found shortcomings in the VR&E program.
These reviews generally concluded that the program had not fulfilled its primary purpose, which is to ensure that veterans obtain suitable employment. In 1996, we reported that the program primarily emphasized providing training and did not place enough emphasis on providing employment services. Additionally, the 1999 Congressional Commission on Servicemembers and Veterans Transition Assistance found that VR&E had not achieved its statutory purpose and noted that “employment assistance is the most valuable service the Nation can provide to personnel transitioning from active duty to the civilian workforce.” In 2003, we designated federal disability programs, including those at VA, as high risk because they had difficulty managing their programs and were in need of transformation. In response to recommendations from the 2004 Task Force, VR&E has implemented the Five-Track Employment Process and strengthened the program’s focus on employment. However, VR&E’s incentive structure for veterans remains primarily aligned with education and training programs, with no financial incentive for those seeking immediate employment. In response to the 2004 Task Force report, VR&E implemented the Five-Track process by delineating its existing services into five distinct tracks to provide a stronger focus on employment early in the rehabilitation process. The delineation of program services into five tracks is designed to accommodate the different needs of veterans, such as those who need immediate employment as opposed to those who need training to meet their career goal. Figure 1 provides details on each of the five tracks. After veterans apply to the program and are found eligible for services, they are introduced to the Five-Track process through a program orientation.
During orientation, VR&E shows a video that explains the process to veterans and emphasizes that the goal of the program is to obtain employment or to achieve independent living if employment is not immediately feasible. At the sites we visited, we found that VR&E staff also verbally reinforced to veterans during orientation that the primary goal of the program is employment. Following orientation and evaluation, veterans are assisted by VR&E staff in selecting a track that meets their needs and employment goals. Some of the rehabilitation counselors we interviewed told us the factors they consider when evaluating veterans for track selection include veterans’ transferable job skills, results on various vocational tests, and how the veterans’ disabilities affect their ability to do the work they did in the past. Of the almost 24,000 veterans with a documented track selection who began a plan of services from January 2007 to early May 2008, we found that more than three-quarters chose to pursue employment through the long-term services track, which includes education and training, while less than one-tenth chose more immediate employment through the reemployment or rapid access to employment tracks, and slightly more than one-tenth entered the independent living track. Very few veterans chose self-employment (less than 1 percent). See figure 2 for the percentages of veterans who entered each of the five tracks. Also as part of the Five-Track process, VR&E established an employment coordinator position and job labs to assist veterans with preparing for and finding employment. Employment coordinators assess veterans’ readiness to seek employment, develop relationships with employers, and help place veterans in jobs. VR&E’s job labs provide computers with employment-related software that VR&E staff and veterans can use for activities such as developing job search plans, preparing for interviews, and writing resumes.
Though the Five-Track Employment Process was intended to modernize the program and increase VR&E’s emphasis on employment, VR&E did not update its financial incentive structure to align with its mission. Specifically, the program offers a monthly subsistence allowance to those veterans who are enrolled in education or training, but not to those who receive employment services only. For example, a veteran who has two dependents and enrolls in a full-time education or training program receives approximately $760 in monthly assistance. That veteran would continue to receive this allowance for 2 months following training while he or she seeks employment. One rehabilitation counselor we spoke with noted that many veterans who have completed their training rely on this money during the job search phase. In contrast, veterans who receive employment services only do not receive a monthly allowance while they look for employment. Our prior work has noted the need to consider basic program design, particularly those features that affect individual work incentives and supports, when modernizing disability programs for the 21st century. Based on our prior work, we are concerned that without properly aligned incentives and supports, veterans who need assistance finding immediate employment may not seek out VR&E services and others may not choose the track that is best suited for them. In our discussions with senior VR&E officials, they acknowledged that offering financial incentives for veterans receiving employment services could be beneficial, and noted that they may review the internal incentive structure as part of a program evaluation in fiscal year 2009. Additionally, in September 2008, VA released a study on overall veterans’ compensation payments that included several options for changing the subsistence allowance for VR&E participants to align incentives with the program’s employment mission.
Over the last few years, VR&E has increased its capacity to serve veterans by engaging in a number of collaborative initiatives with other organizations and by adding staff to its central and regional offices. Nevertheless, the program continues to face challenges ensuring it has the right number of staff with the right skills, and its workforce planning has not strategically addressed these issues. VR&E has increased its collaboration with other organizations, such as federal and state agencies, as well as private and nonprofit employers, through initiatives to help injured servicemembers and disabled veterans transition to the civilian workforce. Initiatives with DOD focus on intervention and employment services for injured servicemembers during their recovery process, while VR&E’s partnership with VA’s Compensated Work Therapy (CWT) program addresses the vocational rehabilitation needs of veterans who may have mental illnesses or traumatic brain injury. In addition, VR&E’s collaborative efforts with Labor, state vocational rehabilitation agencies, and employers provide employment services to veterans who are ready to enter the job market. VR&E’s recent efforts to collaborate with these organizations are highlighted below.

Economic Systems Inc., A Study of Compensation Payments for Service-Connected Disabilities, a special report prepared at the request of the Department of Veterans Affairs, September 2008.

In fiscal year 2005, VR&E created a standardized presentation for the Disabled Transition Assistance Program (DTAP), which informs disabled servicemembers of the full range of benefits and services available to them once they leave active duty. VR&E has assigned rehabilitation counselors or contractors to present this information at DOD installations and military treatment facilities.
According to a senior VR&E official, VR&E is also increasing outreach to National Guard and Reserve servicemembers by providing information about this DTAP briefing at required post- deployment health assessments. In fiscal year 2007, VA and DOD began to share information earlier about seriously injured servicemembers, and VR&E now has access to a database that allows it to identify and locate them to facilitate early outreach. In fiscal year 2008, VR&E rolled out the Coming Home to Work (CHTW) initiative nationwide. This key component of VR&E’s early intervention efforts provides counseling to individuals on active duty pending medical separation and rehabilitation services to eligible servicemembers. According to officials, VR&E has placed 13 full-time rehabilitation counselors at 12 military treatment facilities to administer this program and initiate early contact with injured servicemembers. In addition to these 13 counselors, VR&E has designated one staff member in each regional office as the program coordinator. As of August 2008, over 4,000 servicemembers had received counseling through CHTW and 149 servicemembers who received rehabilitation services had obtained employment, according to VR&E officials. In another effort to provide services to seriously injured veterans early in their treatment process, VR&E has taken steps to develop a partnership with the CWT program at VA. The CWT program works primarily with veterans that many VR&E regional officials said their staff had difficulty serving. Such veterans might have a traumatic brain injury or mental health diagnosis, or may need more intensive support in the structured environment CWT provides. CWT’s early intervention model addresses both employment goals and medical rehabilitation needs. Also, veterans receiving services simultaneously from VR&E and the CWT program can continue to receive services from CWT even after VR&E education and training benefits are exhausted, according to officials. 
The 2004 Task Force noted the potential advantages of increased collaboration between VR&E and CWT. According to officials from both programs:

- VR&E refers veterans to the CWT program. Regional officials at the four sites we visited said their staff refer veterans to this program when it is appropriate.
- VR&E and CWT briefed each other’s staff at their national training conferences in fiscal year 2008.
- VR&E plans to provide a 1-hour training session for VR&E staff on the CWT program via satellite broadcast in fiscal year 2009.

The Department of Labor is VR&E’s primary employment services partner, and an effective relationship between these agencies is important in giving disabled veterans the best chance for successful outcomes. Recent collaborative efforts include the following:

- In fiscal year 2006, VR&E and Labor renewed their existing agreement to improve employment services to veterans with disabilities.
- In fiscal year 2006, Labor and VR&E implemented some elements of their renewed agreement by establishing a joint work group at the national level to develop a set of shared performance measures.
- In fiscal year 2008, Labor and VR&E completed a demonstration project at eight regional offices to develop and test joint performance measures, tracking systems, and training curriculums for their staff who provide employment services to veterans.

The 2004 Task Force highlighted the importance of collaboration between VR&E and state vocational rehabilitation agencies, noting that state vocational rehabilitation agencies have established extensive employer networks and could provide veterans with greater access to employment opportunities. In addition to these increased employment opportunities, agency officials also noted that close relationships between VR&E and these agencies could result in joint rehabilitation plans that can provide complementary services to veterans.
For example, veterans who are jointly served by VR&E and a state vocational rehabilitation agency have access to more and different services, such as transportation assistance or a clothing allowance provided by state agencies, which may make the difference in a veteran’s ability to achieve rehabilitation and employment goals. According to officials, recent collaborative efforts with state vocational rehabilitation agencies have included the following:

- In fiscal year 2004, VR&E and the Council of State Administrators of Vocational Rehabilitation (CSAVR), a professional association of state vocational rehabilitation administrators, formally agreed to facilitate local cooperative agreements between state vocational rehabilitation agencies and VR&E regional offices. The purpose of these local agreements is to encourage collaboration that will result in improved services and increased employment outcomes for disabled veterans.
- In fiscal year 2008, the central office staff of VR&E and CSAVR exchanged local office contact information.
- In fiscal year 2008, VR&E and state vocational rehabilitation officials briefed each other’s staff at national conferences.

VR&E has established national agreements with several private, public, and nonprofit employers to further increase employment opportunities for veterans. These agreements focus on joint efforts to provide career opportunities to veterans exiting the VR&E program. VR&E central office officials said that they inform the regional offices of new national agreements via monthly conference calls and disseminate copies of the agreements. Finally, a senior VR&E official said that the program currently coordinates individually, as opposed to jointly, with its various partners—DOD, VA’s CWT program, Labor, and state vocational rehabilitation agencies.
This official also noted that VR&E had recently contributed to a forthcoming report on strategies for building capacity and tools for improving coordination among federal and state agencies, including several listed above. The report is expected to identify promising practices for addressing gaps in services. VR&E has increased staffing at its central and regional offices as recommended by the 2004 Task Force. Specifically, VR&E officials said they increased central office staff by 67 percent, from 33 in fiscal year 2004 to 55 in fiscal year 2008, to address the concern that the central office needed more resources to provide policy, procedures, and staff training to the regional offices. At the four sites we visited, some regional office staff said support and training from the central office had improved. VR&E also increased its regional office staff by 20 percent, from 917 in fiscal year 2004 to 1,101 in fiscal year 2008. A senior VR&E official said these new regional office staff include contracting specialists and counselors, as well as positions to provide outreach to veterans returning from the wars in Afghanistan and Iraq. Despite these staff increases, the VR&E regional offices still reported staff and skill shortages on our survey. In terms of staff shortages, more than half of all 57 regional offices said they have fewer counselors than they need, and more than a third said they have fewer employment coordinators than they need (see fig. 3). Some employment coordinators we interviewed told us it is difficult for them to provide services to veterans and reach employers throughout their entire regions, including those in more rural locations. Exacerbating these staff shortages is the fact that staff time may not be used efficiently, as many regional office staff we interviewed and surveyed said much of their time was spent on redundant paperwork and data entry requirements that reduced the amount of time they spent with veterans.
In terms of skill shortages, almost one-third of the regional offices reported that the skills of their counselors no more than moderately meet the needs of the veterans they serve and almost one-third reported the same for their employment coordinators. Moreover, 80 percent of offices said VR&E was somewhat or less prepared to meet the needs of veterans in the future, and, of these, 12 percent reported VR&E was unprepared. We found that these workforce problems were not being addressed with some of the strategic planning practices that our prior work has identified as essential, such as:

- using data to identify current and future human capital needs, including the appropriate number of employees, how they are deployed across the organization, and existing opportunities to reshape the workforce by improving current work processes; and
- determining the critical skills and competencies staff will need to successfully achieve the organization’s mission and goals, especially as various factors change the environment in which the organization operates.

VR&E has not gathered data to identify the number of staff it currently needs. The 2004 Task Force recommended a study of the time required for key tasks and VR&E identified the need for such a study in its fiscal year 2005-2008 workforce plan; however, the study has not yet been conducted. While VR&E officials told us they have plans to fund the study in fiscal year 2009, they acknowledged that without such information they do not know whether their current caseload target is appropriate. Moreover, without knowing what their target caseload should be, VR&E cannot know the total number of counselors the program needs. VR&E officials said the current caseload target, which is one counselor for every 125 veterans, is based on a study of the state vocational rehabilitation programs, not VR&E’s own workloads.
Nevertheless, the state study concluded that a caseload of this size would leave counselors little time to spend with clients. We learned from our survey of VR&E regional offices that their estimated average caseload was one counselor for every 136 veterans. In addition, the program has not studied its work processes since the rollout of the Five-Track process to determine whether and how to streamline administrative activities to allow staff to use their time more efficiently. Many survey respondents, as well as staff we interviewed, reported that administrative paperwork was cumbersome and labor intensive. According to staff at one regional office, paperwork requirements were a concern when the Five-Track process was rolled out, but documentation requirements did not ultimately change and new paperwork was added. At another regional office, a staff member noted that the decision regarding a veteran’s entitlement to services had to be documented multiple times. A VR&E central office official said the program is working to transition to one database, which will reduce redundant data entry. Additionally, the official said that while new forms had been added to ensure consistent documentation across all regional offices, these requirements will be reviewed as part of the fiscal year 2009 study of counselors’ key tasks. VR&E also does not use relevant data to identify future staffing needs. While a VR&E official said that the program considers potential factors such as the impact of the wars in Afghanistan and Iraq, the only data used to project future workloads and staff needs are the program’s historical participation rates. Moreover, while VR&E does review the numbers of new disability claims, this official said these numbers are not formally factored into its projections, nor does the program routinely determine what proportion of this population subsequently applies for VR&E services or when they apply.
We found a decrease in the average number of years between a veteran receiving an initial disability rating and applying for VR&E services from 7.9 years in fiscal year 2002 to 6.1 years in fiscal year 2007. A VR&E official said this decrease is expected due to the program’s increased outreach to servicemembers and veterans. VR&E officials said that their past workload projections had not been far off and, according to VR&E data, since 2004 their projections have been within 8 percent of actual program participation. However, new factors may be impacting enrollment because in fiscal year 2008 the program underestimated the number of program participants for the first time in several years. Further, VR&E staffing projections do not account for the numbers of veterans whose status will likely require more staff time, such as veterans who need an extended evaluation to determine if employment is currently feasible. Staff are allocated to the regional offices based, in part, on the number of veterans whose status will likely take more of a counselor’s time. However, when VR&E prepares its annual budget request for staff, it considers only total program participants and does not take into consideration the growing number of cases that require more staff time due to their complexity. Yet, since the wars began in Afghanistan and Iraq, the number of veterans who required an extended evaluation increased by 121 percent. While a senior VR&E official said the model for projecting the program’s overall staff needs is not intended to be the same as the one for allocating staff to regional offices, a senior VA official acknowledged that VR&E could improve its workload management with better projections. In addition, VR&E officials said they have not fully determined the critical skills and competencies needed by counselors and employment coordinators to achieve the program’s goals. 
While officials in 2003 conducted an analysis of job duties and associated tasks for counselors, this was not an analysis of the skills and competencies required to perform those tasks or the skills that might be needed in the future. Determining the relevant skills and competencies that counselors and employment coordinators need may be particularly important now, given the changing needs of veterans. About 90 percent of the regional offices we surveyed reported that their caseloads have become more complex since veterans began returning from Afghanistan and Iraq. They reported dealing with multiple physical injuries as well as traumatic brain injury and post-traumatic stress disorder among veterans returning from war. One official noted that, while her staff are skilled, they are not experts in traumatic injuries and psychiatric conditions, and could benefit from additional training in these areas. VA performance and budget reports lack important information about the outcomes of the VR&E program. VA does not report specific performance information for the two different groups of veterans VR&E serves—those seeking employment and those seeking to live independently. In addition, it has not adequately disclosed a change to its primary performance measure. These omissions could lead to some misinterpretation of the program’s performance. Although the VR&E program works with two different groups of veterans, most of whom are focused on employment with a smaller number seeking independent living, VA reports an overall rehabilitation rate for all participants. We found that this single measure masks the individual outcome for each group of participants and may hinder oversight. For example, VA reported a rehabilitation rate of 76 percent in fiscal year 2008. When we computed the rates for each group of veterans we found that 73 percent of those seeking employment were successful, while 92 percent seeking independent living were successful (see fig. 4). 
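The masking effect of a single pooled rate can be made concrete with a minimal sketch. The counts below are hypothetical, chosen only to reproduce group rates near those we computed; they are not actual VR&E caseload figures:

```python
# Hypothetical participant counts for illustration only -- not actual VR&E data.
employment = {"rehabilitated": 730, "total": 1000}         # 73% success
independent_living = {"rehabilitated": 92, "total": 100}   # 92% success

def rate(group):
    """Success rate for one group of program participants."""
    return group["rehabilitated"] / group["total"]

# A single reported rate pools both groups into one numerator and denominator,
# so it lands between the two group rates and masks their difference.
overall = (employment["rehabilitated"] + independent_living["rehabilitated"]) / (
    employment["total"] + independent_living["total"]
)

print(f"employment: {rate(employment):.0%}")                  # 73%
print(f"independent living: {rate(independent_living):.0%}")  # 92%
print(f"overall (blended): {overall:.0%}")                    # 75%
```

Because participants seeking employment are the large majority, the blended rate sits close to, but above, their group rate, overstating the program's success for that group while understating it for independent living.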
Information on separate success rates would result in better information for Congress and others to evaluate program performance and target services. For example, reporting separate rates would show that those participants seeking employment—the majority of people in the program—have a lower success rate than the overall rate currently reported. Likewise, information on separate success rates would enable those overseeing the program to understand that the minority of participants seeking independent living have a much higher success rate than the reported overall rate. Both the Task Force and VA’s Office of Inspector General (OIG) have also noted the need for separate employment and independent living measures. The Task Force recommended the use of separate outcome measures because very different services are often required to serve those seeking employment versus those seeking independent living. For example, veterans seeking employment may need career training and placement, while veterans trying to live independently may need to learn to use a wheelchair or communicate with an assistive device. VR&E officials did not implement the recommendation because, according to officials, the existing rehabilitation rate reflected the outcomes of all veterans in the program. For its part, VA’s OIG specifically recommended in 2007 that VA performance and accountability reports include the numbers of veterans who achieve employment and independent living, given that such outcomes are used for budget and resource allocation and in testimony to Congress. In its 2007 Performance and Accountability Report, VA provided the absolute numbers of veterans who had found employment (8,252) or achieved independent living (2,756), but did not offer a separate rate for each program goal, which would have allowed for a better assessment of VR&E’s progress. 
During our review, a senior VR&E official acknowledged the merit of examining separate employment and independent living rates and said that the program had recently begun internally tracking separate rates. Another VR&E official told us that the program is considering developing and reporting separate performance measures for independent living and employment, but did not have a specific time frame for when that decision will be finalized. In fiscal year 2006, VR&E changed its rehabilitation performance measure—the way it calculates the overall rehabilitation rate—without adequately disclosing this change in several subsequent reports even though the change substantially increased the rate. VA noted the change in its fiscal year 2006 Performance and Accountability Report, but did not do so for its subsequent fiscal year 2007 and fiscal year 2008 Performance and Accountability Reports, or for its fiscal year 2008 and fiscal year 2009 budget submissions to Congress. These reports included tables and graphics showing a 10-point increase in the rehabilitation rate from fiscal year 2005 to fiscal year 2006. While federal agencies may change their performance measures, we believe that not acknowledging the change in subsequent reports could allow for some misinterpretation of the program’s performance over time. Our prior work on federal performance measures found it useful to acknowledge such a change to provide a complete picture of program performance. Prior to fiscal year 2006, VR&E calculated the rehabilitation rate by comparing the number of veterans who had a rehabilitation plan and achieved their goal with the total number of veterans who had a rehabilitation plan and either achieved their goal or discontinued the program. In fiscal year 2006, VR&E began excluding from the total those veterans who discontinued from the program for reasons considered beyond VR&E’s control (see fig. 5). 
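The arithmetic of this denominator change can be sketched as follows. The counts are invented for illustration and are not VR&E's actual figures; only the direction of the effect is the point:

```python
# Hypothetical counts for illustration only -- not actual VR&E data.
rehabilitated = 620    # veterans who achieved their rehabilitation goal
discontinued = 380     # veterans who left the program without achieving it
beyond_control = 120   # subset discontinued for reasons deemed beyond VR&E's control

# Pre-fiscal year 2006 calculation: every discontinued veteran
# remains in the denominator.
old_rate = rehabilitated / (rehabilitated + discontinued)

# Fiscal year 2006 calculation: discontinuations deemed beyond
# VR&E's control are dropped from the denominator.
new_rate = rehabilitated / (rehabilitated + discontinued - beyond_control)

print(f"old measure: {old_rate:.0%}")  # 62%
print(f"new measure: {new_rate:.0%}")  # 70%
```

Shrinking the denominator while leaving the numerator unchanged can only raise the reported rate, which is why trend comparisons that span the change require disclosure.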
Specifically, VR&E excludes veterans from the calculation who accept a position incompatible with their disability; those they consider employable, but who are no longer seeking employment; and those they consider unemployable due to medical or psychological reasons. Prior to the calculation change, VR&E was having limited success improving its rehabilitation rate and achieving its performance goals (see fig. 6). Changing the calculation enabled VR&E to show a 14-point increase (from 62 percent in fiscal year 2004 to 76 percent in fiscal year 2008) in the rehabilitation rate trend in its fiscal year 2008 Performance and Accountability Report. According to our analysis, the increase would have been 6 points (from 62 percent in fiscal year 2004 to 68 percent in fiscal year 2008) without a change to the performance measure. Furthermore, the calculation change enabled VR&E to meet its annual performance goal in fiscal years 2006, 2007, and 2008. We are concerned that this performance data, as currently reported without an explanation of the calculation change, could convey a misleading picture of the program’s performance over time. For more than 20 years, VR&E has sought to modernize its program and meet its employment mandate. VR&E launched its new Five-Track Employment Process to better focus on employment; however, critical aspects of the program have not been aligned with the employment mission. Given the current incentive structure, veterans who most need immediate employment services, but could also benefit from some level of financial assistance, may be at a disadvantage. Moreover, the incentive structure may result in some veterans not choosing the track that is best for them and, therefore, foregoing early integration into the civilian workforce. VR&E has improved its capacity to serve veterans by stepping up its collaboration with other organizations and by adding staff. 
However, the lack of information about staffing needs could limit VR&E’s ability to provide quality services to veterans returning from the wars in Afghanistan and Iraq, as well as to veterans from prior conflicts. Without a strategic workforce planning process that collects and uses relevant data to ensure the right number of staff with the appropriate skills, the VR&E program will continue to face challenges serving current veterans and could fall short in responding to the needs of future veterans. Finally, the lack of transparency in how VA calculates and reports program performance is detrimental to effective oversight and VR&E’s ability to manage the program. Without transparency in program outcomes and how performance measures are calculated, Congress and other stakeholders lack important information that highlights the program’s successes and focuses their attention on its shortcomings. In addition, VA officials lack essential information to manage and make adjustments to the program. To ensure VR&E’s employment mission is fully supported, we recommend that the Secretary of Veterans Affairs direct VR&E to consider cost- effective options for better aligning the program’s financial incentives with its employment mission. To ensure that the current and future needs of veterans are met, we recommend that the Secretary of Veterans Affairs direct VR&E to engage in a strategic workforce planning process that collects and uses relevant data, such as information on the appropriate counselor caseload and the critical skills and competencies needed by staff. To increase transparency in VR&E performance and budget reports, we recommend that the Secretary of Veterans Affairs take actions such as separately reporting both the annual percentage of veterans who obtain employment and the percentage of those who achieve independent living, and fully disclosing changes in performance measure calculations when reporting trend data in key performance and budget reports. 
We provided a draft of this report to VA for review and comment. The agency provided written comments, which are reproduced in appendix II. VA generally agreed with our recommendations and noted the steps it will take to act on them: In response to our recommendation that VR&E consider cost-effective options for better aligning the program’s financial incentives with its employment mission, VA agreed and stated that the current law does not permit payments of subsistence allowance to veterans receiving only employment services. Therefore, to address this issue, VR&E has drafted a legislative proposal for consideration by the Secretary of Veterans Affairs. In response to our recommendation that VR&E engage in a strategic workforce planning process that collects and uses relevant data, such as information on the appropriate counselor caseload and the critical skills and competencies needed by staff, VA agreed and outlined its plans to implement the recommendation. With regard to collecting and using information on the appropriate counselor caseload, VA stated that it plans to complete a study by the end of fiscal year 2010 that will help it determine the staffing levels necessary to comprehensively meet veterans’ rehabilitation needs. With regard to collecting and using information on the critical skills and competencies needed by staff, VA noted that it has already defined the critical skills and competencies needed for VR&E counselors by requiring them to hold a master’s degree in rehabilitation and has provided training to VR&E staff. While we acknowledge the value of these efforts, the fact that many regional offices reported skill shortages on our survey indicates that more needs to be done in this area, especially given the increasingly complex needs of the veterans now applying for services. 
VA did agree to conduct a skills assessment survey of VR&E staff and indicated that the survey will determine the skills staff currently possess as well as the skills staff need to successfully serve veterans. Additionally, VA agreed to ensure staff training is targeted to the specific skills and competencies identified on the survey. In response to our recommendation that VA separately report the annual percentage of veterans who obtain employment and the percentage of those who achieve independent living and fully disclose changes in performance measures, VA agreed and stated that it will include employment and independent living rates in the comments of its fiscal year 2010 budget and fiscal year 2009 Performance and Accountability Report and will implement separate performance measures in fiscal year 2010. Additionally, VA stated that it would note the year the rehabilitation rate calculation changed in future budget and performance and accountability documents. We are sending copies of this report to the Secretary of Veterans Affairs, relevant congressional committees, and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. A list of related GAO products is included at the end of this report. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our review examined (1) how the implementation of the Five-Track Employment Process has affected the Vocational Rehabilitation and Employment (VR&E) program’s focus on employment, (2) the extent to which VR&E has taken steps to improve its capacity, and (3) how program outcomes are reported. 
To address these objectives, we:

- reviewed agency documents and relevant recommendations from key reports, such as the 2004 VR&E Task Force;
- analyzed data from the Department of Veterans Affairs’ (VA) Corporate WINRS and Benefits Delivery Network (BDN) data systems;
- interviewed VA and VR&E staff knowledgeable about VR&E planning and operations, and others such as disability experts, members of the 2004 Task Force, veteran service organization representatives, and staff from agencies and organizations that collaborate with VR&E;
- visited four VA regional offices and conducted interviews with VR&E officers and staff to observe and gather information on workforce planning and how services are provided to veterans; and
- conducted a survey of VR&E officers at all 57 regional offices to follow up on several key issues relevant to our research objectives.

We conducted this performance audit from July 2007 to January 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To evaluate our objectives, we reviewed agency documentation and prior evaluations of the VR&E program and recommendations made by the 2004 VR&E Task Force, key commissions, the VA Office of Inspector General, as well as our own previous work.
To evaluate how the Five-Track Employment Process has affected VR&E’s focus on employment and the extent to which VR&E has taken steps to improve its capacity, we identified key recommendations from the 2004 Task Force report by reviewing and selecting recommendations related to the following areas: program focus on employment; workforce and workload management; collaboration with outside agencies and organizations; and performance measures. We assessed VR&E’s implementation of completed recommendations and reviewed recommendations it had not yet completed. We also referred to our previous work on strategic workforce planning and the Office of Personnel Management’s Human Capital Assessment and Accountability Framework. To evaluate reports on VR&E’s program outcomes, we reviewed recent agency performance data in VA’s fiscal year 2006, 2007, and 2008 annual performance and accountability reports and congressional budget submissions for fiscal years 2008 and 2009. We used data from VA’s Corporate WINRS case management system and its BDN system to evaluate the number of veterans in each case status, the number of veterans enrolled in each of the five tracks, the amount of time between veterans receiving an initial disability rating and applying for VR&E services, and VR&E program outcomes reporting. To evaluate the number of veterans in each case status over time, we used BDN fiscal year-end national reports from fiscal year 2001 to fiscal year 2007 to capture changes since the beginning of the conflicts in Afghanistan and Iraq. We also analyzed BDN and Corporate WINRS data to determine the change in the average length of time between a veteran receiving an initial disability rating and applying for VR&E services from fiscal year 2002 to fiscal year 2007.
We began our analysis with fiscal year 2002 because an agency official told us that regional office data were uploaded into the Corporate WINRS database in fiscal year 2001, making data prior to fiscal year 2002 less reliable. For performance outcomes reporting, we analyzed data from fiscal year 2004 through fiscal year 2008, as these were the years of data reported in the agency’s fiscal year 2008 Performance and Accountability Report. To assess the reliability of these data, we performed the following steps: (1) reviewed the existing information about the data and the system that produced them, (2) observed data entry and reviewed input controls, (3) performed electronic testing of required data fields, and (4) interviewed agency officials knowledgeable about the data and systems. For BDN data, we also reviewed the programming logic that was used to produce selected workload data and applied the same logic against a file of raw data. We were able to replicate two workload indicators that we chose to examine. This gave us reasonable assurance that the automated BDN reports were reliable. Agency officials said that information about VR&E participant case histories is contained in two data sources: the BDN and Corporate WINRS. Corporate WINRS is the interface that VR&E counselors use, and in most cases its data update BDN data. To determine rehabilitation rates, VR&E uses three variables indicating whether a case is rehabilitated, discontinued, and/or has achieved a maximum rehabilitation gain (MRG). These three designations are derived for each VR&E applicant based on Corporate WINRS case history and then stored in a summary file. These summary data are then used to calculate rehabilitation rates. We usually choose to examine raw data instead of summary data.
In this case, an ideal test would be to examine the raw Corporate WINRS data and see if we came up with the same designations evidenced in this summary level data. However, complexities associated with the business rules used to establish the key designations in the summary data (as rehabilitated, discontinued, and/or MRG) prevented us from calculating the rehabilitation rate using the full case history data. For this reason, we requested that VR&E provide us the summary data that it used to calculate its rehabilitation rate. We then used this summary data to verify its rehabilitation rate reports and to calculate (1) the success rates of veterans who had a plan to achieve independent living or had a plan to become employed and (2) how the agency would have performed if it had not changed its rehabilitation rate calculation. To verify the summary data, we discussed with agency officials the algorithms they used to create the case-level summary data. In addition, we drew a random sample of 65 summary data records and looked at the raw case history data for each to see if the designations contained in the summary data complied with the algorithms VR&E described. During this examination, we found one case where the raw data did not support the summary-level data designation. This allowed us to conclude with 95 percent confidence that these problems represent no more than a 7.1 percent rate of error in the summary data. In addition, although the Corporate WINRS data for this case did not have the correct reason code to support the MRG designation, an examination of BDN data (the alternate data source that contains participant case information) did contain the correct reason code and supported the MRG designation. Based on our assessment, we determined that the Corporate WINRS data used were sufficiently reliable for the purposes of this report. 
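The 95 percent confidence statement above can be reproduced with a one-sided exact (Clopper-Pearson) upper bound on the error rate: the largest rate at which observing only 1 discrepant record in a random sample of 65 would still be plausible at the 5 percent level. The sketch below is our own illustration of that calculation, not GAO's actual method; the function names are ours.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def upper_bound(errors, n, alpha=0.05):
    """One-sided exact (Clopper-Pearson) upper confidence bound:
    the largest error rate p for which observing <= `errors` errors
    in a sample of n is still plausible at the alpha level."""
    lo, hi = 0.0, 1.0
    for _ in range(60):          # bisection: binom_cdf is decreasing in p
        mid = (lo + hi) / 2
        if binom_cdf(errors, n, mid) > alpha:
            lo = mid             # tail still too likely -> p can be larger
        else:
            hi = mid
    return (lo + hi) / 2

# GAO's check: 1 discrepant record in a random sample of 65 records
print(round(upper_bound(1, 65), 3))   # -> 0.071, i.e. at most ~7.1 percent
```

Matching the report's figure of 7.1 percent gives some assurance that the sample size and error count quoted above are internally consistent.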
To assess the capacity of the regional offices, we conducted site visits to four of VA’s regional offices—Houston, Tex.; Pittsburgh, Pa.; Seattle, Wash.; and St. Petersburg, Fla. We also visited four satellite offices, three that serve more rural areas, in Erie, Pa.; Spokane, Wash.; and Lewiston, Idaho; and one serving a more metropolitan area, Tacoma, Wash. At each of the regional offices, we interviewed the VR&E officer, assistant VR&E officer (in regional offices that had an assistant), rehabilitation counselor supervisors (in regional offices that had supervisors in addition to the VR&E officer), vocational rehabilitation counselors, employment coordinators, and local veteran service organization representatives. We also observed the program orientation provided to new veterans applying for VR&E services and conducted a file review of cases randomly selected for the regional offices’ local quality assurance review. We selected our site visit locations to ensure representation from each of VA’s four geographic areas. We also selected our sites to ensure diversity in the following factors: (1) proximity to major military installations, (2) number of program participants, (3) change in the number of participants over time, and (4) overall performance scores on various management reports. To gather information about the program’s workload and its current capacity to help veterans obtain employment, we conducted a survey of all 57 VR&E regional offices from May 2, 2008, to May 15, 2008. Specifically, we collected information on each VR&E regional office’s average counselor caseload, number of staff and their skills, extent of contracting or partnerships with other agencies, changes in the complexity of staff caseloads since veterans began returning from Afghanistan and Iraq, changes in VR&E services since the 2004 Task Force report was issued, and VR&E’s preparation to meet future demand. 
We developed the content of our survey based on key areas of concern of the 2004 Task Force and issues raised by agency officials on our site visits. Officials at VA’s Office of Field Operations electronically distributed the survey on our behalf; however, all survey responses were sent directly to us. We had a response rate of 100 percent. Since we surveyed all regional offices, there is no sampling error. However, difficulties in conducting any survey may introduce nonsampling error. For example, because the data were self-reported, difficulties in interpreting a particular question or differences in the way some regional offices are managed can introduce variability into the survey results. Additionally, because of size differences among the regional offices, we did not quantify or assign specific numbers to the scales used in the survey. However, we took steps in developing the questionnaire to minimize such nonsampling error. For example, we pretested the content and format of our survey for understandability. We then refined our survey as appropriate. An analyst entered the survey responses into a database, and the accuracy of this data entry was verified by an independent analyst. Qualitative responses to open-ended questions on the survey were categorized by an analyst to identify common themes. These themes were then independently reviewed by another analyst for verification purposes.
In addition to the contact named above, Melissa Emrey-Arras, Assistant Director; Amy Anderson, Analyst-in-Charge; Julie DeVault, Nora Boretti, and Brooke Leary made major contributions to this report; William Doherty, Peter DelToro, Cynthia Bascetta, Patricia Owens, Brett Fallavollita, and Randall Williamson provided guidance; Walter Vance assisted with study design; Cynthia Grant and Wayne Turowski conducted data analysis; Stan Stenersen, Kate van Gelder, Susan Bernstein, Julianne Hartman Cutts, and Brittni Milam helped write the report; Mimi Nguyen provided assistance with graphics; and Doreen Feldman and Roger Thomas provided legal advice. Multiple Agencies Provide Assistance to Service-disabled Entrepreneurs, but Specific Needs Are Difficult to Identify and Coordination Is Weak. GAO-09-11R. Washington, D.C.: October 15, 2008. Federal Disability Programs: More Strategic Coordination Could Help Overcome Challenges to Needed Transformation. GAO-08-635. Washington, D.C.: May 20, 2008. Disabled Veterans’ Employment: Additional Planning, Monitoring, and Data Collection Efforts Would Improve Assistance. GAO-07-1020. Washington, D.C.: September 12, 2007. Highlights of a GAO Forum: Modernizing Federal Disability Policy. GAO-07-934SP. Washington, D.C.: August 3, 2007. Federal Disability Assistance: Wide Array of Programs Needs to Be Examined in Light of 21st Century Challenges. GAO-05-626. Washington, D.C.: June 2, 2005. Vocational Rehabilitation: VA Has Opportunities to Improve Services, but Faces Significant Challenges. GAO-05-572T. Washington, D.C.: April 20, 2005. Vocational Rehabilitation: More VA and DOD Collaboration Needed to Expedite Services for Seriously Injured Servicemembers. GAO-05-167. Washington, D.C.: January 14, 2005. VA Vocational Rehabilitation and Employment Program: GAO Comments on Key Task Force Findings and Recommendations. GAO-04-853. Washington, D.C.: June 15, 2004. Human Capital: Key Principles for Effective Strategic Workforce Planning.
GAO-04-39. Washington, D.C.: December 11, 2003. A Model of Strategic Human Capital Management. GAO-02-373SP. Washington, D.C.: March 15, 2002. Vocational Rehabilitation: VA Continues to Place Few Disabled Veterans in Jobs. GAO/HEHS-96-155. Washington, D.C.: September 3, 1996. Vocational Rehabilitation: Better VA Management Needed to Help Disabled Veterans Find Jobs. GAO/HRD-92-100. Washington, D.C.: September 4, 1992. VA Can Provide More Employment Assistance to Veterans Who Complete Its Vocational Rehabilitation Program. GAO/HRD-84-39. Washington, D.C.: May 23, 1984. In 2004, the Veterans Affairs' Vocational Rehabilitation and Employment (VR&E) program was reviewed by a VR&E Task Force. It recommended numerous changes, in particular focusing on employment through a new Five-Track service delivery model and increasing program capacity. Since then, VR&E has worked to implement these recommendations. To help Congress understand whether VR&E is now better prepared to meet the needs of veterans with disabilities, GAO was asked to determine (1) how the implementation of the Five-Track Employment Process has affected VR&E's focus on employment, (2) the extent to which VR&E has taken steps to improve its capacity, and (3) how program outcomes are reported. GAO interviewed officials from VR&E, the 2004 Task Force, and veteran organizations; visited four VR&E offices; surveyed all VR&E officers; and analyzed agency data and reports. By launching the Five-Track Employment Process, VR&E has strengthened its focus on employment, but program incentives have not been updated to reflect this emphasis. VR&E has delineated its services into five tracks to accommodate the different needs of veterans, such as those who need immediate employment as opposed to those who need training to meet their career goal. However, program incentives remain directed toward education and training.
Veterans who receive those services collect an allowance, but those who opt exclusively for employment services do not. While VR&E officials said they believed it would be helpful to better align incentives with the employment mission, they have not yet taken steps to address this issue. VR&E has improved its capacity to provide services by increasing its collaboration with other organizations and by hiring more staff, but it lacks a strategic approach to workforce planning. Although there have been staff increases, many of VR&E's regional offices still reported staff and skill shortages. The program is not addressing these workforce problems with strategic planning practices that GAO's prior work has identified as essential. For example, VR&E officials have not fully determined the correct number of staff and the skills they need to serve current and future veterans. VA does not adequately report program outcomes, which could limit understanding of the program's performance. Specifically, it reports one overall rehabilitation rate for veterans pursuing employment and those trying to live independently. Computing each group's success rate for fiscal year 2008, GAO found a lower rate of success for the majority seeking employment and a higher rate of success for the minority seeking independent living than the overall rate. GAO also found that VR&E changed the way it calculates the rehabilitation rate in fiscal year 2006, without acknowledgments in key agency reports. VA noted the change in its fiscal year 2006 performance report, but did not do so for its fiscal year 2007 and 2008 reports, or for its fiscal year 2008 and 2009 budget submissions. Such omissions could lead to misinterpretation of program performance over time.
The size and cost of operating the federal vehicle fleet have been subjects of concern for many years. In 1986, Congress enacted legislation that required agencies, among other things, to collect and analyze the costs of their motor vehicle operations, including acquisition decisions, in order to improve the management and efficiency of their fleets and to reduce costs. Two years later, we reported that most agencies had not conducted the required studies. In 1992, an interagency task force identified obstacles to cost-efficient fleet management, including the continued lack of compliance with the 1986 legislative requirements, and stated that agencies lacked basic information to effectively and efficiently manage their fleets. In 1994, we reported, among other things, that successful fleet practices included oversight at the headquarters level to ensure that uniform written policies and guidance are provided throughout the organization and fleet management information systems to provide accurate data about the fleet. We also reported that agencies need to conduct periodic reviews to ensure their fleets are the right size and composition. The vehicle fleets at the agencies we reviewed are widely dispersed. For example, the Army and Navy operate vehicles throughout the world, while the Veterans Affairs fleet is spread across medical centers, national cemeteries, and other locations throughout the country. The approximate number of vehicles operated by the agencies included in our review is shown in figure 1. The Office of Governmentwide Policy (OGP) within GSA develops policies and bulletins for agency vehicle fleet management; the policies are disseminated through the Federal Management Regulation. Federal agencies, however, are responsible for managing their own fleets, including making decisions about the number and type of vehicles they need and how to acquire them.
OGP also collects data from agencies via the Federal Automotive Statistical Tool (FAST) concerning fleet size, composition, and costs. Although GSA uses these data in annual reports to OMB on the government’s fleet size and costs, GSA officials told us that much of the data are inaccurate because of the different systems agencies use to collect and report information. The agencies we reviewed cannot ensure that their vehicle fleets are the right size and composition to meet their missions because of a lack of attention to key fleet management practices. In particular, agencies generally have not established policies with clearly defined utilization criteria related to the mission of a vehicle to ensure that decisions to acquire and retain vehicles are based on a validated need. In addition, agencies have not implemented periodic assessments to determine whether they have the right number and type of vehicles in the fleet. Some agencies have begun to recognize the need to pay more attention to fleet management and are taking steps to review their guidelines in an effort to provide better criteria to determine vehicle needs and to manage their fleets more efficiently. Industry practice for cost-efficient fleets includes establishing policies and procedures that contain clearly defined utilization criteria related to the mission of a vehicle. These criteria are then used to conduct periodic assessments of the fleet to identify underutilized vehicles. As previously noted, our 1994 report highlighted the importance of these fleet management practices. However, as shown in figure 2, most of the agencies we reviewed do not have clearly defined criteria and have not conducted periodic fleet assessments. We did not include DHS in this chart because the agency is still developing most of its fleet management guidelines, policies, and vehicle utilization standards. 
The lack of appropriate utilization criteria means that local level officials—who usually make the decisions to acquire and retain vehicles— are not basing their decisions on a validated need. Some agencies establish the number of miles traveled, such as the 12,000 miles per year in GSA’s guidance, as a criterion to measure vehicle utilization. However, this criterion is not appropriate for the mission of some vehicles, such as those used for utility work, medical transportation, or security. Therefore, agency officials often ignore mileage standards. None of the agencies assigned a value to other criteria, such as number of trips per day or hours on station, to measure vehicle use when mileage is not an appropriate measure. Following are some examples of cases we found where the application of specific criteria related to the mission of a vehicle would give local fleet managers a more accurate basis on which to make decisions about fleet size: At one Veterans Affairs medical center, vehicles are used to transport veterans from their homes to outpatient rehabilitation activities in a metropolitan area outside of Boston. Veterans Affairs officials told us that using only a mileage standard to justify the need for the vehicles is inappropriate because they are used within a confined area. The officials agreed that a better measure would be the number of trips or the number of veterans served. The Department of Defense prescribes that the military services establish utilization measures, such as passengers carried or hours used, to measure the need for a vehicle when mileage is not appropriate. However, neither Army nor Navy guidelines incorporate these types of utilization criteria. Natural Resources Conservation Service policy includes only one criterion to establish fleet size, which is a ratio of employees to vehicles. The definition of employees includes full- and part-time employees and volunteers, regardless of roles or job description. 
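To make the idea of mission-specific utilization criteria concrete, the hypothetical sketch below applies a different test to each vehicle depending on its mission. Aside from the 12,000-miles-per-year figure from GSA's guidance cited above, the mission categories, field names, and thresholds are our illustrative assumptions, not agency policy.

```python
# Mission-specific utilization criteria. Only the 12,000-mile figure comes
# from GSA guidance; the other thresholds are assumed for illustration.
CRITERIA = {
    "general":  lambda v: v["annual_miles"] >= 12_000,   # GSA mileage guideline
    "shuttle":  lambda v: v["trips_per_day"] >= 4,       # assumed trips criterion
    "security": lambda v: v["hours_on_station"] >= 6,    # assumed hours criterion
}

def underutilized(fleet):
    """Return IDs of vehicles that fail the criterion for their mission."""
    return [v["id"] for v in fleet if not CRITERIA[v["mission"]](v)]

fleet = [
    {"id": "A1", "mission": "general",  "annual_miles": 4_500},
    {"id": "B2", "mission": "shuttle",  "trips_per_day": 9},
    {"id": "C3", "mission": "security", "hours_on_station": 2},
]
print(underutilized(fleet))   # -> ['A1', 'C3']
```

The point of the sketch is that a single mileage threshold would flag the shuttle and security vehicles incorrectly; matching the criterion to the mission gives local fleet managers a defensible basis for retention decisions.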
Further, agencies generally do not conduct periodic assessments of their fleets. Decisions about whether to acquire and retain vehicles are made at the local level with little or no headquarters oversight. These local-level decisions are frequently based on the availability of funds rather than on a validated need. For example, directors of Veterans Affairs medical centers and state conservationists at the Natural Resources Conservation Service determine whether or not to acquire vehicles based on the availability of funds. The Army allows local commanders to acquire vehicles with available funds without further justification within established allocation levels. However, these levels have not been reviewed since 1991, 13 years ago. The Navy and the Forest Service conduct periodic assessments of fleet size, but the results of the assessments are either not enforced or not conducted in a systematic manner. The Navy’s Transportation Equipment Management Centers (TEMC) conduct utilization assessments to recommend fleet inventory levels for Navy commands, yet the commands are not required to implement the recommended inventory levels. The Forest Service’s guidelines contain instructions for a systematic review of vehicle utilization at local sites, but these reviews are not consistently performed at the locations we visited, and the local sites are not required to report the results of the reviews to agency headquarters. Some agencies have begun to focus more attention on fleet management practices that they believe will improve the efficiency of their fleets. At the start of fiscal year 2004, the Army and Navy reorganized to centralize the management of facilities and equipment, including vehicles that are not related to combat forces, at various commands and installations. The Navy established the Naval Installations Command and the Army established the Installation Management Agency for this purpose. 
Navy and Army officials told us that these organizations should result in increased attention to fleet management, including the enforcement of the TEMCs’ recommended inventory levels in the Navy and the revision of outdated vehicle allocation levels in the Army. Officials told us that these organizations will provide more centralized oversight of the Army and Navy vehicle fleets, but individual commands will continue to determine the need for vehicles within the established inventory objectives or allocation levels. At the time of our review, it was too early to determine the impact these reorganizations will have on improving fleet management practices. In addition, some agencies are reviewing their guidelines in an attempt to include more specific requirements for fleet management. For example, Veterans Affairs officials told us that they are developing a vehicle manual with detailed guidance on how to measure utilization and hope to issue it in the fall of 2004. Department of Defense officials are in the process of revising the department’s guidelines and are considering requiring the application of utilization criteria tied to the mission of a vehicle to determine the need for vehicles. In early 2003, DHS established a Fleet Commodity Council to review strategic sourcing issues, including how the department can leverage its purchasing power when acquiring vehicles. The council, made up of agency fleet managers, meets quarterly. In addition, departmentwide fleet management policies and guidelines are being developed and will include criteria for justifying and assessing vehicle fleet sizes. Our work and reviews by inspectors general identified numerous instances where agencies had an excessive number of vehicles in their fleets.
If these vehicles were disposed of, agencies could realize savings ranging from thousands to millions of dollars, as illustrated in the following examples: In February 2004, the Department of the Interior’s Inspector General reported that a significant portion of the department’s fleet of approximately 36,000 vehicles is underutilized and estimated savings of $34 million. At the end of fiscal year 2003, Navy reviews of selected activities estimated fleet savings of $3.7 million per year if installations reduced their fleets based on recommendations from these reviews. In 2003, a U.S. Army Audit Agency report identified one Army garrison that had retained 99 excess vehicles in its fleet. A 2001 Veterans Affairs’ Inspector General report noted that accountability over the department’s owned vehicles at a medical center could not be reasonably assured. For example, agency auditors found one vehicle that had been parked behind a laundry facility and had not been moved since it was purchased in 1997. The report described the acquisition of the vehicle as unjustified. Appendix VII contains additional examples of reports that highlight potential savings if unnecessary vehicles were eliminated from agencies’ fleets. In other cases, locations have reduced their fleet size—primarily because of pressure to cut their budgets—and consequently realized savings, as illustrated in the following examples: A Navy command decreased its fleet from 156 to 105 vehicles over the course of a year, resulting in savings of about $12,000 per month. A Navy official explained that the decrease in vehicles was driven by cuts in the command’s budget. A Veterans Affairs medical center, in an effort to find potential savings, reduced its fleet by 12 vehicles, with estimated savings of about $57,000 per year. In the 1990s, a Forest Service region eliminated 500 leased vehicles when the agency reduced its workforce due to budget reductions, according to a regional official. 
However, because these reductions were not based on the application of utilization criteria to identify vehicle needs, there is no guarantee that the fleets are the right size to meet the agencies’ missions. Industry practice for cost-efficient fleets also calls for an assessment of the type of vehicles being acquired. Savings can be realized by changing the composition of the fleet—buying vehicles that are less expensive and less costly to operate and maintain. We found cases where local level officials had taken this step. For example, in assessing the need for vehicles to expand community outreach services, program officials at a Veterans Affairs medical center are replacing 15 passenger vans with less expensive sedans and minivans that will still allow them to accomplish the program’s goals. In another case, a local Navy fleet manager was able to help a security organization reduce its fleet costs by using less expensive trucks for carrying dogs used by law enforcement officials. As a result of a review of governmentwide fleet practices, GSA’s Office of Governmentwide Policy (OGP) and OMB are taking actions to require agencies to better manage and improve the cost-efficiency of their fleets. In 2002, OGP initiated a review of federal agencies’ fleet management practices in cooperation with OMB. Twenty-one agencies responded to a GSA survey, which found, among other things, that the vast majority of agencies lack utilization criteria by which to determine vehicle needs and identify underutilized vehicles. The survey further found that many agencies have little control over fleet budgets and allocation levels for vehicles and lack effective fleet management information systems. 
Based on the survey results, OGP is currently revising the Federal Management Regulation to require agencies to improve fleet management practices by, among other things, (1) appointing a central fleet manager, (2) periodically reviewing fleet size, and (3) funding a fleet management information system. In 1994, we reported that the primary role of a central fleet manager is to establish and monitor written policies, collect and analyze fleet data, and look for opportunities to improve fleet operations. OGP officials believe that effective fleet management requires centralizing control at the headquarters level over all decisions related to fleet size. Thus, OGP will require agencies to appoint a senior management official with decision-making authority and control over all aspects of the agency’s fleet program, including the entire fleet budget and approval of local-level decisions. However, we anticipate strong opposition to this requirement, based on our discussions with agency officials outside of GSA. Many of the headquarters officials we interviewed believe that local-level fleet managers, given the right tools, are in the best position to make decisions on the need for vehicles and that centralized oversight, rather than control over the budgets and decision making, would be more appropriate. The revised regulation will also require agencies to develop criteria against which to evaluate the need for vehicles and to use these criteria in performing annual fleet assessments. OGP officials told us that the regulation will not include examples of the different criteria that could be used to determine vehicle needs. Instead, this type of information will be incorporated in GSA bulletins issued periodically to agencies and posted on the GSA Web site. 
Based on the results of the 2002 survey, OGP had planned to recommend that agencies base their decisions about the need for vehicles on a staff-to-vehicle ratio; however, officials told us they will require agencies to consider other measures more appropriate to a vehicle’s mission. As discussed above, industry practices include establishing multiple utilization criteria, such as mileage, number of trips per day and hours on station, because of the differing nature of agency missions. OGP further intends to require agencies to fund a fleet management information system that would allow them to accurately collect information on the cost to acquire, operate, and maintain their fleets. This initiative will allow agencies to better forecast fleet funding and make well-founded decisions about when to replace vehicles. OGP plans to issue guidelines defining the minimum functional requirements for the system. Officials we spoke with at Defense, DHS, and Veterans Affairs stated that they believe that developing a fleet management system is important, but they are at varying stages of exploring options, requesting bids from contractors, and requesting funding. While OGP believes it has the authority to require agencies to follow its regulation and guidelines, enforcement will be another matter. OGP officials plan to work with agencies in a cooperative effort, through workshops and federal fleet conferences, to help them implement the requirements in the upcoming regulation, which they expect to issue in October 2004. They are also considering issuing “report cards” on the progress agencies are making in implementing and following the revised regulation. OMB has also taken steps to hold agencies accountable for more effective fleet management practices. In 2002, OMB began requiring agencies, as part of their budget submission, to report the size, composition, and cost of their fleets for the current year and to project costs for the next 3 fiscal years. 
The narrative in the report must also detail the reasons for any significant changes in fleet size, discuss the methodology used to assign vehicles, and identify any impediments to managing the fleets. Recognizing the difficulties with collecting reliable data, GSA and OMB plan to work with agencies to improve their data collection and reporting. Officials believe that as agencies move to better fleet management information systems, the data will improve. Despite long-standing concerns over the size of the federal fleet, the agencies we reviewed still do not know if their fleets are the right size and composition. Until agencies develop and apply utilization criteria tied to the mission of the vehicles in their fleets, they will not know how many vehicles they need to meet their missions. Moreover, by not using such criteria to assess their fleets periodically, agencies are missing the potential opportunity to identify excess vehicles, reduce their fleets, and save money. While some agencies have started to take actions to improve fleet management, at this time it is unclear how successful these efforts will be in providing more efficient fleet management. Because of its role in providing fleet management policy, GSA’s Office of Governmentwide Policy is in a position to take the lead in assisting agencies to develop appropriate utilization criteria and to assess their fleet size and composition. That office, in conjunction with OMB, has taken steps to focus attention at a governmentwide level on the need to improve fleet management practices. However, the plan to require agencies to centralize budget control over their fleets is a contentious one, and it remains to be seen how agencies will respond once the draft regulation is issued. In the meantime, additional measures are needed to ensure that the federal government’s fleet does not contain excessive numbers of vehicles. 
To help agencies determine the appropriate size and composition of their fleets, we recommend that the Administrator of GSA direct the Office of Governmentwide Policy to include in the revised Federal Management Regulation the following two requirements for agencies: (1) develop utilization criteria related to the missions of the vehicles and (2) conduct periodic assessments of the number and type of vehicles in their fleets using these criteria.

To bring further attention to the potential budget impact of retaining excessive vehicles, we recommend that the Director of OMB require agencies, as part of the new reporting requirement in their budget submissions, to report on (1) the criteria they used to determine the need for vehicles and (2) the results of fleet assessments they have conducted.

To ensure that agency fleets are the right size and composition to meet their missions, we recommend that the Secretaries of the Departments of Agriculture, Defense, Homeland Security, and Veterans Affairs take the following three actions: (1) establish guidance and policies that include clearly defined utilization criteria to be used in validating the need for vehicles based on their missions; (2) require fleet managers to use these criteria in determining the need for vehicles and in conducting periodic fleet assessments; and (3) establish effective oversight mechanisms to ensure that the utilization criteria are defined and fleet assessments are carried out.

We received written comments on a draft of this report from GSA and the Departments of Agriculture, Defense, Homeland Security, and Veterans Affairs, and we received oral comments from OMB. All of the agencies generally concurred with our findings and recommendations. The written comments are reproduced in appendixes II through VI.
GSA noted that the primary contributor to the lack of progress in fleet management improvement has been the absence of strong management support for fleet reform and the consequent lack of resources for acquiring management information systems. GSA observed, however, that many agencies are becoming more aware of these issues. GSA also noted that although our report discusses three revisions to the Federal Management Regulation that GSA is in the process of drafting, these three revisions are part of a comprehensive package of 10 recommendations for fleet management reform that came out of GSA’s Federal Fleet Review Initiative. We focused our review on the key revisions directly related to the justification for acquiring and retaining vehicles.

GSA also stated that, while it agrees that local managers are best qualified to know their requirements, only a central manager can provide the consistent oversight, policy, and budget review that has been lacking in many agencies. It is this deficiency that GSA seeks to address through its requirement that each agency appoint a senior management official with decision-making authority and control over all aspects of the agency’s fleet program, including the fleet budget. As we note in our report, during the course of our audit work, it was clear that the agency officials we spoke with were opposed to GSA’s position on this matter. We did not assess the ramifications of GSA’s proposal as part of our review.

In addition, GSA expressed disappointment that we did not recommend that agencies fund a fleet management information system. Because we found that agencies are in different stages of implementing such systems, and because GSA already plans to require such systems in its revised Fleet Management Regulation, we did not believe it was necessary for us to recommend this action.
The Departments of Agriculture and Veterans Affairs agreed with our recommendations but raised concerns about GSA’s planned revision to the Federal Management Regulation that would require agencies to centralize budget authority for fleet management. Veterans Affairs strongly opposes such a requirement. It noted that, in a system as large and complex as the department’s, such a massive administrative responsibility would be unwieldy and inefficient and would require significant additional resource support. The department believes that oversight at the local level is the preferred approach to fleet management. Agriculture noted that the budget is a complex process involving detailed review and comparison of vehicle costs. It stated that changing priorities, such as national emergencies, require intense local management of the fleet to ensure a high state of mission-readiness and that, therefore, increased centralization of the budget process would not be in the best interest of overall fleet efficiency and mission success. As we point out in our report, the issue of centralized budget authority is a contentious one. It will need to be addressed by the agencies, OMB, and GSA. Agriculture also expressed concern that our recommendation on the need to establish utilization criteria would lead to a set of national criteria that all local fleet managers would be required to use. That is not the intent of our recommendation. Our recommendation is aimed at having each agency establish utilization criteria based on the specific mission of the vehicles in its fleet. Where a single criterion such as mileage, for example, is inappropriate, local officials need to have alternative criteria available, such as hours on station or number of clients served, to validate the need for vehicles. We believe it is the responsibility of agencies to establish clearly defined utilization criteria and guidelines to allow local officials to appropriately apply these criteria. 
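The practice described above, establishing utilization criteria tied to vehicle missions and letting local officials apply alternative measures such as hours on station or clients served where mileage alone is inappropriate, can be illustrated with a minimal sketch. All mission names, measures, thresholds, and vehicle records below are hypothetical examples invented for illustration; they are not criteria from the report or from any agency's policy.

```python
# Illustrative sketch only: hypothetical mission-based utilization criteria
# applied in a periodic fleet assessment. All missions, measures, and
# thresholds here are invented for illustration, not taken from the report.

# Each mission maps one or more usage measures to a minimum threshold,
# so a single criterion such as mileage is not forced on every vehicle.
CRITERIA = {
    "general_purpose": {"annual_miles": 10_000},      # mileage-based
    "law_enforcement": {"hours_on_station": 1_500},   # time-based
    "patient_transport": {"clients_served": 400},     # workload-based
}

def vehicle_is_justified(mission: str, usage: dict) -> bool:
    """A vehicle is justified if it meets every criterion for its mission."""
    return all(usage.get(measure, 0) >= minimum
               for measure, minimum in CRITERIA[mission].items())

def assess_fleet(fleet: list) -> list:
    """Periodic assessment: return vehicles that fail their mission's
    criteria (candidates for justification review or disposal)."""
    return [v for v in fleet
            if not vehicle_is_justified(v["mission"], v["usage"])]

fleet = [
    {"id": "A-101", "mission": "general_purpose",
     "usage": {"annual_miles": 3_200}},               # underutilized
    {"id": "B-202", "mission": "law_enforcement",
     "usage": {"hours_on_station": 1_900}},           # meets its criterion
]
print([v["id"] for v in assess_fleet(fleet)])  # → ['A-101']
```

Keeping the criteria per mission, rather than imposing one national standard, mirrors the point above that each agency should define the criteria while local officials apply the measure appropriate to each vehicle.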
The Department of Homeland Security (DHS) agreed with our recommendations and emphasized that it has undertaken efforts, in a relatively short time frame, to establish a departmentwide fleet management program. It noted that the process used by its Bureau of Customs and Border Protection for assessing vehicle utilization based on a variety of factors is considered a best practice and will be extended to the rest of the department. In addition, DHS stated that an updated management directive on motor vehicle management sets forth the requirement for maintaining systems for effective control and accountability of motor vehicle assets and for maintaining the minimum number of vehicles needed to meet requirements. The directive is currently being reviewed within the department. In DHS’s view, these two actions meet the requirement to establish effective oversight mechanisms to ensure that fleet utilization criteria are defined and fleet assessments are carried out and reviewed on a regular basis. While these are positive actions, DHS needs to ensure that oversight is maintained and that periodic fleet assessments are conducted using the appropriate criteria.

Veterans Affairs stated that it will address our recommendations with several planned initiatives that, when completed, should rectify identified weaknesses. For example, the department will convene a national work group to develop a broad-based fleet management operations manual that will include a section that defines utilization criteria based on vehicle missions. The department is also reviewing various options for establishing a systemwide software application to be used as an oversight tool for managing the fleet.

The Department of Defense agreed with our recommendations. It stated that action will be taken to ensure that utilization criteria, which may consist of existing mileage goals or other appropriate criteria, will apply to all nontactical vehicles.
It will also require components to review their vehicle inventories annually against fleet assessments and to conduct on-site surveys or inspections on a minimum 3-year cycle (resources permitting) with the purpose of purging or fully justifying underutilized vehicles.

In oral comments, OMB representatives told us that they agree with our findings and recommendations and will consider incorporating the recommended changes to agencies’ reporting requirements in new guidance for the fiscal year 2006 budget cycle.

As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to other interested congressional committees; the Administrator of GSA; the Director of OMB; and the Secretaries of Defense, Army, Navy, Agriculture, Veterans Affairs, and Homeland Security. We will make copies of this report available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at 202-512-4841 or cooperd@gao.gov, or Michele Mackin, Assistant Director, at 202-512-4309 or mackinm@gao.gov. Major contributors to this report include Marie Ahearn, Benjamin Howe, Emma Quach, Richard Silveira, and Tatiana Winger.

To determine the extent to which agencies can ensure that their fleets are the right size, we obtained and analyzed agency policies and guidelines on fleet management from the Departments of Agriculture, Army, Navy, Defense, Homeland Security, and Veterans Affairs. These agencies, according to GSA data, have some of the largest fleets in the government. Because the Department of Homeland Security was only recently formed, its organizational elements continue to operate their vehicle fleets under the policies of their legacy agencies.
Therefore, we limited our review to the department’s efforts to leverage its buying power through a strategic sourcing initiative for vehicles and to the steps it is taking to establish departmentwide guidelines on fleet management. Although the Department of the Interior also has a large fleet, we did not include it in our review because the Inspector General recently issued a report on that department’s vehicle fleet. We did not assess agencies’ policies on vehicle operation, maintenance, or disposal.

To illustrate how local, state, and regional officials determine the need for vehicles, we selected local, state, and regional offices based on location and number of vehicles within each agency. We obtained and analyzed information and interviewed fleet managers and other officials responsible for fleet management at these locations to identify the controls, oversight, and criteria used to determine the need for vehicles. Following are the locations we contacted or where we conducted our work.

Washington, D.C.
Wildlife Service, Athens, Ga.
Wildlife Service, Wash.
Veterinary Service, Iowa
Veterinary Services, Conyers, Ga.
Veterinary Service, Eastern Regional Office, Raleigh, N.C.
Washington, D.C.
Southern Region, Atlanta, Ga.
Chattahoochee-Oconee National Forest, Gainesville, Ga.
Daniel Boone National Forest, Ky.
Land Between the Lakes National Recreational Area, Ky.
Pacific Northwest Region, Oreg.
Siuslaw and Willamette National Forests, Oreg.
Office of Asset Management, Washington, D.C.
Federal Law Enforcement Training Center, Glynco, Ga.
Customs and Border Protection, Washington, D.C.
Transportation Security Administration, Arlington, Va.
Office of the Assistant Deputy Under Secretary of Defense (Transportation Policy), Washington, D.C.
Headquarters, Department of the Army, Office of the Assistant Chief of Staff for Installation Management, Washington, D.C.
Fort Belvoir, Va.
United States Military Academy, West Point, N.Y.
Fort Carson, Colo.
Naval Facilities Engineering Command, Washington Navy Yard, D.C.
Navy Public Works Center, Washington, D.C.
Navy Public Works Center, Norfolk, Va.
Naval Air Station, Joint Reserve Base, Fort Worth, Tex.
Navy Public Works Center, Jacksonville, Fla.
Naval Station Newport, Newport, R.I.
Pacific Division, Naval Facilities Engineering Command, Transportation Equipment Management Center, Pearl Harbor, Hawaii
Atlantic Division, Naval Facilities Engineering Command, Transportation Equipment Management Center, Norfolk, Va.
Headquarters, Washington, D.C.
Medical Center, Bedford, Mass.
Medical Center, Baltimore, Md.
Medical Center, Jamaica Plain, Boston, Mass.
Medical Center, Brockton, Mass.

We reviewed prior GAO and other audit agency reports, reviewed other public documents, and contacted the following offices of inspectors general: Department of Energy, Department of Defense, Department of Veterans Affairs, Department of Justice, Department of Treasury, Department of Transportation, Department of Homeland Security, Department of the Interior, and Department of Agriculture. We also contacted officials from the Naval Audit Service and the Army Audit Agency.

To identify industry standards for efficient fleet management, we discussed the fleet management practices contained in our 1994 report and the use of utilization criteria with three industry fleet management consultants, one of whom was a contributor to our 1994 report. We selected these consultants based on their experience dealing with the fleet management practices in both the public and private sectors. We also talked with the manager of the Fleet Information Resource Center of the National Association of Fleet Administrators.
To identify governmentwide steps to improve fleet management, we collected, analyzed, and discussed information obtained from officials at the Office of Management and Budget’s Office of Transportation/GSA Branch, GSA’s Office of Governmentwide Policy, and GSA’s Office of Vehicle Acquisition and Leasing Services, which runs the leasing program. We also discussed with GSA officials the Office of Governmentwide Policy’s proposed revisions to the regulation on fleet management. We conducted our review from September 2003 to April 2004 in accordance with generally accepted government auditing standards.

U.S. Army Garrison Japan does not effectively use its nontactical fleet. Utilization data were only available for 430 of the 633 vehicles at the Garrison, and 235 of these vehicles had low utilization. The reviewers identified about 99 excess vehicles, representing about 16 percent of the fleet. The report did not estimate potential savings; however, it noted that for the 99 excess vehicles, the estimated replacement cost was about $3.8 million and maintenance cost was about $42,000.

Transportation Motor Pool Operations, 8th U.S. Army, December 1997. No estimate on potential savings. 34 vehicles (representing 33 percent of the fleet), 61 vehicles (representing 39 percent of the fleet), and 203 vehicles (representing 54 percent of the fleet). Activities did not always effectively use their nontactical support vehicles. Vehicle usage goals set by the command were considerably below Department of the Army goals. About $109,600 if activities met the command’s usage goals; $465,100 if they met the Army’s goals.

Navy Transportation Equipment Management Center (TEMC), Atlantic Division. Selected TEMC reviews. At the end of fiscal year 2003, Navy reviews of selected activities estimated cost avoidance of $3.7 million per year if installations reduced their fleets by a total of 775 vehicles to meet the recommended inventory level.
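The TEMC figures above imply a per-vehicle rate that simple arithmetic can recover. The sketch below is a back-of-the-envelope illustration only; the `estimated_savings` helper is hypothetical and is not a method used in the Navy reviews.

```python
# Back-of-the-envelope arithmetic on the figures cited above:
# $3.7 million per year in estimated cost avoidance from 775 excess vehicles.
total_cost_avoidance = 3_700_000   # dollars per year, from the Navy reviews
excess_vehicles = 775

per_vehicle = total_cost_avoidance / excess_vehicles
print(f"${per_vehicle:,.0f} per vehicle per year")  # → $4,774 per vehicle per year

# Hypothetical helper: scale the same rate to any proposed fleet reduction.
def estimated_savings(vehicles_cut: int, rate: float = per_vehicle) -> float:
    return vehicles_cut * rate

print(f"${estimated_savings(100):,.0f}")  # → $477,419
```

A uniform per-vehicle rate is a simplification; actual avoidance varies with vehicle type and lease versus ownership, which is why the report stresses mission-specific assessments.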
$3.7 million per year cost avoidance.

Management of Non-tactical (Administrative) Transportation Vehicles, March 1998. Auditors found that 6,605 of the 24,387 vehicles in the review were not needed. The Navy did not have a systematic mechanism within the transportation management structure to enforce Navy policy on fleet management. $19.8 million annually.

Government Vehicle Usage at Naval Air Station Patuxent River, Md., December 1998. The Air Station retained 79 assigned vehicles that were not needed to support mission requirements because the Public Works Transportation Department did not have a systematic and continuous process for the review and evaluation of vehicle assignments. In addition, 141 of the 359 vehicle assignments were without required justification. The report did not specify an amount but noted that the Naval Air Station had unnecessary administrative transportation costs as a result of excess vehicles.

Department of Veterans Affairs, Office of Inspector General. Review of Selected Construction Contracts, Purchase Card Activities, and Vehicle Administration at Veterans Affairs Medical Center (VAMC), Clarksburg, West Virginia, January 2001. Auditors could not account for all vehicles at the facility. Poor supervision contributed to a lack of accountability, and records were incomplete and inaccurate. Poor business decisions were made during the trade and acquisition of vehicles. In one example, an acquisition was not justified because the vehicle had been parked behind a laundry facility and not moved since it was purchased in 1997. In fact, the keys were missing at the time of the review. Savings were not addressed as a whole; the purchase price of the one vehicle that did not move was $1,800.

U.S. Department of Energy (DOE), Office of Inspector General, Office of Audit Services. The size of the fleet was not appropriate because Richland had not established or implemented controls required by DOE’s Property Management Regulation.
The review found that 85 percent of 1,332 vehicles were used less than DOE’s mileage standards, and Richland could potentially reduce its fleet by 559 vehicles. $1.7 million annually.

The allotment of 516 on-site discretionary vehicles was too large because the vehicles were measured in mileage instead of number of trips, which was the standard for this laboratory. None of the 31 randomly selected on-site discretionary vehicles met the standard of 9.2 trips per day. Livermore would need to reduce its fleet by 363 vehicles to meet the established usage standard. $690,000 annually.

Vehicle Fleet Management at the Idaho National Engineering and Environmental Laboratory, March 1999. The light vehicle fleet was larger than necessary. The review found that 45 percent of the light vehicles were used significantly less than the mileage standards and that Idaho could potentially reduce its fleet by 86 vehicles. $321,000 annually in operation, maintenance, and replacement costs.

The department and its bureaus were not effectively managing the department’s approximately 36,000-vehicle fleet. A significant portion of the department’s fleet was underutilized (44 percent). $34 million annually.

Selected Administrative Activities at the Colorado State Office, Bureau of Land Management, March 1996. The state office did not complete its required annual review and was not managing its vehicle fleet efficiently. The review found that 20 of the 60 owned or leased vehicles were underutilized and recommended a fleet reduction of up to 6 GSA vehicles. $22,000 annually for the 6 returned GSA vehicles.

Federal agencies spend about $1.7 billion annually to operate a fleet of about 387,000 vehicles. During the last decade, concerns have been raised about whether agencies have more vehicles than they need. In an April 2002 letter to federal agencies, the Office of Management and Budget stated that the size of the federal fleet seemed excessive.
GAO was asked to determine (1) the extent to which agencies ensure that their fleets are the right size to meet agency missions, (2) whether potential savings could result from the disposal of unneeded vehicles, and (3) what actions are being taken on a governmentwide basis to improve fleet management practices. GAO focused its review on the justification for acquiring and retaining vehicles at the Departments of Agriculture, Army, Homeland Security, Navy, and Veterans Affairs.

Because of a lack of attention to key vehicle fleet management practices, the agencies GAO reviewed cannot ensure their fleets are the right size or composition to meet their missions. Industry practices for cost-efficient fleets include the development of utilization criteria related to the mission of a vehicle and periodic fleet assessments using these criteria to determine the appropriate fleet size and composition. If unneeded vehicles are identified, they are disposed of. However, the agencies GAO reviewed have not established policies that contain clearly defined utilization criteria that would allow them to determine the number and type of vehicles they need. Further, agencies are not routinely conducting periodic fleet assessments. Two agencies, the Navy and the Forest Service within the Department of Agriculture, conduct assessments; however, these assessments are either inconsistently applied or the results are not enforced. Some agencies have begun to recognize the need to revise their guidelines to provide better criteria for determining their vehicle needs. GAO's work and reviews by inspectors general identified numerous instances where agencies were retaining vehicles they did not need, with potential savings ranging from thousands to millions of dollars if these vehicles were eliminated.
For example, the Department of the Interior's Inspector General reported that a significant portion of the department's 36,000 vehicles were underutilized and estimated savings of $34 million annually if these vehicles were disposed of. GSA's Office of Governmentwide Policy and the Office of Management and Budget have recently taken a number of actions to require agencies to better manage and improve the cost-efficiency of their fleets. The Office of Governmentwide Policy is currently revising the Federal Management Regulation to require agencies to (1) appoint a central fleet manager with control over all aspects of fleet management, including fleet budgets, which are now generally controlled at the local level; (2) establish utilization criteria and periodically review fleet size; and (3) fund a fleet management information system. The Office of Governmentwide Policy plans to work in a cooperative effort with agencies to implement the revised regulation. However, based on discussions with officials from the agencies GAO reviewed, GAO anticipates that GSA will face opposition to its requirement for centralized budget control over the fleets. In 2002, the Office of Management and Budget began requiring agencies to report, as part of their budget submissions, the size, composition, and cost of their fleets for the current year and to project costs for the next 3 fiscal years.
Any discussion about the role of the federal government, about the design and performance of federal activities, and about the near-term federal fiscal outlook takes place in the context of two dominating facts: a demographic tidal wave is on the horizon, and it, combined with rising health care costs, threatens to overwhelm the nation’s fiscal future. The aging of baby boomers—and increased life expectancy—is a major driver of spending for Social Security, Medicare, and Medicaid. Absent structural reforms in these programs, budgetary flexibility will continue to shrink and eventually disappear.

Our long-range budget simulations make it clear that the status quo is not sustainable. The numbers just do not add up. The fiscal gap is too great for any realistic expectation that the country can grow its way out of the problem. The failure to reexamine the retirement and health care programs driving the long-term outlook will put the nation on an unsustainable fiscal course, absent major changes in tax and/or spending policies. In addition, the failure to reprioritize other claims in the budget will make it increasingly difficult to finance the rest of government, let alone respond to compelling new priorities and needs.

As figure 1 below shows, overall budgetary flexibility has been shrinking for some time. In the last 2 decades, mandatory spending—excluding net interest—has jumped by nearly 10 percentage points to consume more than half of the federal budget.

*OMB current services estimate.

Our long-term budget simulations continue to show that to move into the future with no changes in retirement and health programs is to envision a very different role for the federal government—a government that does little more than mail checks to the elderly and pay interest on the debt. Figure 2 below shows the picture if the tax reductions enacted last year are not permitted to sunset and discretionary spending keeps pace with the economy.
By midcentury federal revenues may only be adequate to pay Social Security and interest on the federal debt. (See fig. 2.) Importantly, we would still have a significant long-range fiscal gap even if the tax reductions do sunset as provided for under current law, although the gap would be smaller. While the long-term picture has not been pretty for a number of years, it is worsening and the long-term crunch is getting closer. Further, the shift from surplus to deficit means the nation will move into the future in a weaker fiscal position than was previously the case. Metrics and mechanisms need to be developed to facilitate consideration of the long-term implications of existing and proposed policies or programs. We are currently doing work on how to describe the range and measurement of fiscal exposures—from explicit liabilities such as environmental cleanup requirements and federal pensions to the more implicit obligations presented by life-cycle costs of capital acquisition or disaster assistance. Although they dwarf all other programs in long-term trends, Social Security, Medicare, and Medicaid are not the only programs in the budget where looking beyond the 10-year budget window presents a very different cost picture. For example, federal insurance may appear costless in its first year, but when an insured event occurs, the budgetary impact can be significant. Social Security and health programs dominate our fiscal future but they are not the only reason to examine what government does and how it does it. Difficult as it may seem to deal with the long-term challenges presented by known demographic trends, policymakers must not only address these entitlement programs but also reexamine other budgetary priorities in light of the changing needs of this nation in the 21st century. Given the size of the long-term gap it will be necessary to work on several fronts at once. There is also a need to reexamine existing programs, policies, and activities. 
It is all too easy to accept “the base” as given and to subject only new proposals to scrutiny and analysis. As we have discussed previously, many federal programs, policies, and activities—their goals, their structures, and their processes—were designed decades ago to respond to earlier challenges. In previous testimony, I noted that the norm should be to reconsider the relevance or “fit” of any federal program, policy, or activity in today’s world and for the future. Such a review might identify programs that have proven to be outdated or persistently ineffective, or alternatively could prompt appropriate updating and modernizing activities through such actions as improving program targeting and efficiency, consolidation, or reengineering of processes and operations. This includes looking at a program’s relationship to other programs.

Budgeting has been the primary process used to resolve the large number of often-conflicting objectives that citizens seek to achieve through government action. It provides an annual forum for a debate about competing claims and new priorities. However, such a debate will be needlessly constrained if only new proposals and activities are on the table. A fundamental review of existing programs, policies, and operations can create much-needed fiscal flexibility to address emerging needs by ferreting out programs that have proven to be outdated, poorly targeted, inefficient in their design and management, or superseded by other programs. It is always easier to subject proposals for new activities or programs to greater scrutiny than existing ones. It is easy to treat existing activities as “given” and force new proposals to compete only with each other. Such an approach would move the nation further from, rather than nearer to, budgetary surpluses.

In looking forward, it is important to reflect on how much things have changed. We have a fiduciary and stewardship responsibility to today’s and tomorrow’s taxpayers to do so.
For perspective, students who started college this past fall were 9 years old when the Soviet Union broke apart and have no memory of the Cold War; they have always known microcomputers and AIDS. We must strive to maintain a government that is effective and relevant to a changing society—a government that is as free as possible of outmoded commitments and operations that can inappropriately encumber the future. Debate about what government should do in the 21st century and how it should do business is fundamental to achieving this objective.

In rethinking federal missions and strategies, it is important to examine not just spending programs alone but the wide range of other tools the federal government uses to address national objectives. These tools include direct loans and loan guarantees, tax preferences (shown in the budget as tax expenditures), and regulations. Sometimes these tools work at cross-purposes. The outcomes achieved by these various tools are in a very real sense highly interdependent and are predicated on the response by a wide range of other actors—including other levels of government and private employers whose involvement has become more critical to the implementation and achievement of federal policy objectives. These tools differ in transparency—spending programs are more visible than tax preferences. The choice and design of these tools are critical in determining whether and how these third parties will address federal objectives. Any review of the base of existing policy should address this broader picture of federal involvement. For example, in fiscal year 2000, the federal health care and Medicare budget functions included $37 billion in discretionary budget authority, $319 billion in entitlement outlays, $5 million in loan guarantees, and $91 billion in tax expenditures. (See fig. 3.)

Good information—which is more than just budget numbers—helps to inform debate.
This information, however, should be understandable not only by government officials but also by the public. Homeland security is a good example of both the need for public education and the challenges presented by changing priorities. Zero security risk is not an attainable goal; proposals to reduce risk must be evaluated on numerous dimensions—their dollar cost and their impact on other goals and values. Decisions on the level of resources, the allocation of those resources, and on how to balance security against other societal goals and values are necessary. However, absent public information in understandable form, related decisions may not be accepted. There will always be disagreements on these issues, but public education and reliable information move the debate to a more informed plane. Before the events of last September no one could have reasonably anticipated the array of new and challenging demands on federal programs and claims on future budgets for homeland security concerns. These compelling new budgetary claims illustrate the necessity of periodically reexamining the base through a disciplined, performance-based process. As you debate resources for homeland security—both how much and how to allocate them—you will be making risk assessments; the initiatives funded should be designed to achieve the most effective protection at a reasonable and affordable cost. As you consider the portfolio of homeland security programs for the future, the homeland security challenge may also provide a window of opportunity to rethink approaches to long-standing problems and concerns. For example, we have previously noted the poor coordination and inefficient use of resources that occur as a result of overlapping and duplicative food safety programs, but it is the potential threat from bioterrorism that gives new meaning and urgency to this issue and the interrelationship of related federal programs. 
Finally, the challenges of financing the new homeland security needs may provide the necessary impetus for a healthy reprioritization of federal programs and goals. The current crisis might, for instance, warrant reconsideration of the federal role in assisting state and local law enforcement. Given the challenges associated with fighting terrorism, is it still appropriate to involve the federal government in what have traditionally been state and local law enforcement responsibilities? While this kind of oversight and reexamination is never easy, it is facilitated by the availability of credible performance information focusing on the outcomes achieved with budgetary resources. Performance-based budgeting can help enhance the government’s capacity to assess competing claims in the budget by arming budgetary decision makers with better information on the results of both individual programs and entire portfolios of tools and programs addressing common performance outcomes. Although not the answer to vexing resource trade-offs involving political choice, performance budgeting does promise to modify and inform the agenda of questions by shifting the focus of debates from inputs to outcomes and results. Over the last decade, the Congress enacted a statutory framework to improve the performance and accountability of the executive branch and to enhance both executive branch and congressional decision making. Through continued attention by the Congress and the executive branch, some of the intended benefits of this framework are now beginning to emerge. GPRA expanded the supply of results-oriented performance information generated by federal agencies. In the 10 years since GPRA was enacted, agencies have improved the focus of their planning and the quality of their performance information.
However, developing credible information on outcomes achieved through federal programs remains a work in progress, as agencies struggle, for example, to define their contribution to outcomes, which in many cases are influenced only partially by federal funds. Linking performance to budgeting raises the stakes associated with the measures and performance goals developed by agencies. For performance data to more fully inform resource allocations, decision makers must feel comfortable with the appropriateness and accuracy of the outcome information and measures presented—i.e., that they are comprehensive and valid indicators of a program’s outcomes. Otherwise, decisions might be guided by misleading or incomplete information, which ultimately will discourage the use of this information in resource allocations. GPRA was premised on a cycle where measures and goals were established and validated during a developmental period before they were subjected to the crucible of the budget process. In working to strengthen the linkages between resources and results, efforts across the federal establishment must be redoubled to ensure that the measures used are grounded in a firm analytic and empirical base. A way should be found to provide independent assurance about both the choice of measures and the quality of the data used. In attempting to link resources to results, it also will be important to measure the full costs of the resources associated with performance goals using a consistent definition of costs between and among programs. In looking ahead, the integration of reliable cost accounting data into budget debates needs to become a key part of the performance budgeting agenda. Although clearly much more remains to be done, together, the GPRA and Chief Financial Officers (CFO) Act initiatives have laid the foundation for performance budgeting by establishing infrastructures in the agencies to improve the supply of information on performance and costs. 
Sustained leadership attention will be required to build on this foundation. In addition, however, improving the supply of information is in and of itself insufficient to sustain performance management and achieve real improvements in management and program results. Rather, the improved supply needs to be accompanied by a demand for that information by decision makers and managers alike. Integrating management issues with budgeting is absolutely critical for progress in government performance and management. Recent history tells us that management reforms of the past—the Planning-Programming-Budgeting System, Management by Objectives, and Zero-Base Budgeting—failed partly because they did not prove to be relevant to budget decision makers in the executive branch or the Congress. Such integration is obviously important to ensuring that management initiatives obtain the resource commitments and sustained agency attention needed to be successful. Moreover, the budget process is the only annual process in the federal government where programs and activities come up for regular review and reexamination. Thus there is a compelling need to ensure that trade-offs are informed by reliable information on results and costs. Ultimately, performance budgeting seeks to improve decision making by increasing the understanding of the links between requested resources and expected performance outcomes. Although performance budgeting can reasonably be expected to change the nature of resource debates, it is equally important to understand what it cannot do. Previous management reforms have been doomed by inflated and unrealistic expectations, so it is useful to be clear about current goals. Performance budgeting can help shift the focus of budgetary debates and oversight activities by changing the agenda of questions asked in these processes.
Performance information can help policymakers address a number of questions such as whether programs are: contributing to their stated goals, well-coordinated with related initiatives at the federal level or elsewhere, and targeted to those most in need of services or benefits. It can also provide information on what outcomes are being achieved, whether resource investments have benefits that exceed their costs, and whether program managers have the requisite capacities to achieve promised results. However, performance budgeting should not be expected to provide the answers to resource allocation questions in some automatic or formula- driven process. Since budgeting is the allocation of resources, it involves setting priorities—making choices among competing claims. In its broadest sense the budget debate is the place where competing claims and claimants come together to decide how much of the government’s scarce resources will be allocated across many compelling national purposes. Performance information is an important factor—but only one factor and it cannot substitute for difficult political choices. There will always be a debate about the appropriate role for the federal government and the need for various federal programs and policies—and performance information cannot settle that debate. It can, however, help move the debate to a more informed plane—one in which the focus is on competing claims and priorities. In fact, it raises the stakes by shifting the focus to what really matters—lives saved, children fed, successful transitions to self- sufficiency, individuals lifted out of poverty. In this context, performance questions do not have a single budgetary answer. Performance problems may well prompt budget cuts or program eliminations, but they may also inspire enhanced investments and reforms in program design and management if the program is deemed to be of sufficiently high priority to the nation. 
Conversely, even a program that is found to be exceeding its performance expectations can be a candidate for budgetary cuts if it is a lower priority than other competing claims in the process. The determination of priorities is a function of competing values and interests that may be informed by performance information but also reflects such factors as equity, unmet needs, and the appropriate role of the federal government in addressing these needs. How would “success” in performance budgeting be defined? Simply increasing the supply of performance information is not enough. If the information is not used—i.e., if there is insufficient demand—the quality of the information will deteriorate and the process either will become rote or will wither away. However, for the reasons noted, the success of performance budgeting cannot be measured merely by the number of programs “killed” or a measurement of funding changes against performance “grades.” Rather, success must be measured in terms of the quality of the discussion, the transparency of the information, the meaningfulness of that information to key stakeholders, and how it is used in the decision-making process. If members of the Congress and the executive branch have better information about the link between resources and results, they can make the trade-offs and choices cognizant of the many and often competing claims on the federal fisc. While budget reviews have always involved discussions of program performance, such discussions have not always been conducted in a common language or with transparency. This year, however, OMB has introduced a formal assessment tool into the deliberations. The PART—the Program Assessment Rating Tool—is the central element in the performance budgeting piece of the President’s Management Agenda. The PART will be applied during the fiscal year 2004 budget cycle to “programs” selected by OMB with input from and discussion with agencies. 
The PART includes general questions in each of four broad topics to which all programs are subjected: (1) program purpose and design, (2) strategic planning, (3) program management, and (4) program results (i.e., whether a program is meeting its long-term and annual goals). In addition to the general questions that apply to all programs, each program is subject to more specific questions depending on which of seven delivery mechanisms or approaches it uses. OMB arrives at a profile for each program by reviewing information from budget submissions, agency strategic and annual performance plans, program evaluations, and other sources. OMB also makes an overall assessment of whether the program is “effective” or “ineffective.” While the PART’s program-by-program approach fits with OMB’s agency-by-agency budget reviews, it is not well suited to addressing cross-cutting issues or to looking at broad program areas in which several programs address a common goal. Although the evaluation of programs in isolation may be revealing, it is often critical to understand how each program fits with a broader portfolio of tools and strategies to accomplish federal missions and performance goals. Such an analysis is necessary to capture whether a program complements and supports other related programs, whether it is duplicative and redundant, or whether it actually works at cross-purposes with other initiatives. In such areas as low-income housing or health care, the outcomes achieved by federal policy are the result of the interplay of a complex array of tools, including those on the spending side of the budget as well as the tax code and regulations. The PART does promise to build on GPRA by using the performance information generated through the planning and reporting process to more directly feed into budgetary decisions.
Potentially, the PART can complement GPRA’s focus on increasing the supply of credible performance information by promoting the demand for this information in the budget formulation process. The recognition of the different types of performance issues associated with different governmental tools is important and reflects the key role that tools play in shaping accountability and results. As with performance budgeting in general, no assessment tool can magically resolve debates or answer questions. Rather, it is likely to be a useful screen to help identify programs for further evaluation. Its greatest contribution may turn out to be its use to focus discussions between OMB and the agencies about a given agency’s progress towards planned performance; about what progress has been made toward achieving specific goals and objectives of a given program or programs; and about what tools and strategies might be used to bring about improvements. Where the information provided is adequate, it has the potential to inform budget decisions with respect to particular programs. It is possible that a program may be a candidate for cuts or elimination—or for increases. However, these overall judgments will not define the process. For example, the PART section on program management may illuminate ways in which program operations could be improved. And the section on program design may identify design changes that could increase effectiveness, such as better targeting of existing funds. Using PART is likely to prompt a more robust discussion on program priorities and achievements between OMB, the agencies, and potentially with the Congress. The PART also may increase the attention paid to evaluation and performance information among federal agencies and third parties involved with implementing federal initiatives. As the information improves, it may become more useful to the Congress, especially to budget, appropriations, and authorizing committees. 
To the extent that the assessment is an important factor in resource allocations, agencies are likely to increase the attention given to evaluation and the gathering and reporting of performance information. The fact that a program’s PART score suffers from the absence of information may provide added impetus for agencies to enhance their evaluation and information-gathering capabilities. As with other management reforms, it will be important that initiatives such as PART be sustained over time if they are to be taken seriously by both agencies and the Congress. At the same time, the PART contains inherent limitations. These will not be in-depth evaluations, and evidence suggests that information for many programs will be incomplete. While no assessment tool can provide definitive answers to the question “should we continue this activity,” at the initial stage PART is likely to raise questions—that is, point to the need for further inquiry and analysis—rather than provide definitive answers. The profiles of a program across each section of the instrument are likely to be more informative than the total scores across the entire instrument. Caution should be taken in relying on “bottom line” judgments or ratings for programs with multiple performance goals and mixed performance records. Further, the achievement of federal/national policy goals often depends on the actions not only of the federal government but also of other levels of government and/or nongovernmental actors. GPRA required the President to prepare and submit to the Congress a governmentwide performance plan to highlight broader cross-cutting missions. Unfortunately, this was not done in the President’s fiscal year 2003 budget; we hope that the President’s upcoming fiscal year 2004 budget does include such a plan. 
Over time the usefulness of PART will depend on what follows the initial screens: how the results are pursued; whether the scope is broadened to cover more tools; whether a cross-cutting approach is employed; and improvements in evaluative, performance, and cost information on key programs. Ultimately, success will be measured by how the results of the more extensive analyses affect the resource allocation process and budget decisions over time. The basis for the effective application of the rating tool is the foundation of performance and evaluation information on federal programs. The gaps and weaknesses identified by the PART review exercise may help pinpoint aspects of the federal evaluation infrastructure that need to be strengthened. By highlighting available information on program performance, OMB’s rating tool should promote discussions of both what is known and what is not known about a program’s performance. Under GPRA, agencies expanded their store of data on program achievements and associated benefits for the American people. While this is necessary, it is not sufficient to answer all key questions about program effectiveness. Many programs are designed to be one part of a broader effort, working alongside other federal, state, local, nonprofit, and private initiatives to promote particular outcomes. Although information on the outcomes associated with a particular program may be collected, it is often difficult to isolate a particular program’s contribution to those outcomes. Moreover, some desired outcomes take years to achieve; tracking progress on an annual basis may be difficult. Additionally, where federal program responsibility has devolved to the states, federal agencies’ ability to influence program outcomes diminishes. At the same time, dependence on states and others for data with which to evaluate programs grows. The PART may be used to facilitate this kind of cross-cutting perspective. 
After programs have been filtered through the PART process, programs could be grouped into related categories for further evaluation in a more holistic fashion. Further understanding of these performance issues requires an in-depth evaluation of the factors contributing to the program results. Targeted evaluation studies can also be specifically designed to detect important program side effects or to assess the comparative advantages of current programs to alternative strategies for achieving a program’s goals. Unfortunately, there is reason to be concerned about the capacity of federal agencies to produce evaluations of their programs’ effectiveness. Many program evaluation offices are small, have other responsibilities, and produce only a few effectiveness studies annually. Even where the value of evaluations is recognized, they may not be considered a funding priority. Agencies struggled in the first years of performance reporting to provide measures of the outcomes of their program activities. Many have failed to address known weaknesses in the quality of their performance data. Our work has shown that systematic program evaluations—and units responsible for producing them—have been concentrated in a few agencies. Although many federal programs attempt to influence complex systems or events outside the immediate control of government, few studies deployed the rigorous research methods required to attribute changes in underlying outcomes to program activities. Increased evaluation capacity may require more resources, but over the longer term, failing to discover and correct performance problems can be much more costly. Therefore, the question of investment in improved evaluation capacity is one that must be considered in budget deliberations both within the executive branch and in the Congress. More broadly, Mr. 
Chairman and Madam Chair, such investments need to be viewed as part of a broader initiative to improve the accountability and management capacity of federal agencies and programs. The federal government needs to undergo a transformation to meet the performance expectations of the American public. Such an effort requires fundamental shifts in current human capital policies, organizational structures, governmental tools, and performance and financial accountability approaches. Fifty years of past efforts to link resources with results have shown that any successful effort must involve the Congress as a partner. In fact, the administration acknowledged that performance and accountability are shared responsibilities that must involve the Congress. It will only be through the continued attention of the Congress, the administration, and federal agencies that progress can be sustained and, more importantly, accelerated. The Congress has, in effect, served as the institutional champion for many previous performance management initiatives, such as GPRA and the CFO Act, by providing a consistent focus for oversight and reinforcement of important policies. Ultimately, the success of the PART initiative will be reflected in whether and how the Congress uses the results of these reviews in the congressional budget, appropriations, authorization, and oversight processes. As a key user, the Congress also needs to be considered a partner in shaping the PART review process at the outset. More generally, effective congressional oversight can help improve federal performance by examining the program structures agencies use to deliver products and services to ensure that the best, most cost-effective mix of strategies is in place to meet agency and national goals. As part of this oversight, the Congress should consider the associated policy, management, and performance implications of cross-cutting programs.
Given this environment, the Congress should also consider the need for mechanisms that allow it to more systematically focus its oversight on problems with the most serious and systemic weaknesses and risks. At present, the Congress has no direct mechanism to provide a congressional perspective on governmentwide performance issues. The Congress has no established mechanism to articulate performance goals for the broad missions of government, to assess alternative strategies that offer the most promise for achieving these goals, or to define an oversight agenda targeted on the most pressing cross-cutting performance and management issues. The Congress might consider whether a more structured oversight mechanism is needed to permit a coordinated congressional perspective on governmentwide performance matters. Such a process might also facilitate congressional input into the OMB PART initiative. For example, although the selection of programs and areas for review is ultimately the President’s decision, such choices might be informed and shaped by congressional views and perspectives on performance issues. One possible approach would involve developing a congressional performance resolution identifying the key oversight and performance goals that the Congress wishes to set for its own committees and for the government as a whole. Such a resolution could be developed by modifying the current congressional budget resolution, which is already organized by budget function. Initially, this may involve collecting the “views and estimates” of authorization and appropriations committees on priority performance issues for programs under their jurisdiction and working with such cross-cutting committees as the House Committee on Governmental Reform and the House Committee on Rules. 
Obviously, a “congressional performance resolution” linked to the budget resolution is only one approach to achieve the objective of enhancing congressional oversight, but regardless of the approach taken, the Congress should assess whether its current structures and processes are adequate to take full advantage of the benefits arising from the reform agenda under way in the executive branch. Ultimately, what is important is not the specific approach or process, but rather the intended result of helping the Congress better promote improved fiscal, management, and program performance through broad and comprehensive oversight and deliberation.

This testimony discusses efforts to link resources to results—also known as “performance budgeting.” During the past decade, Congress and several administrations have put in place a structure for increasing the focus on and accountability for government performance. Federal agencies have been working to carry out the Government Performance and Results Act, which requires the development of periodic strategic and annual performance plans and reports. Absent structural change in a number of major entitlement programs, budgetary flexibility will continue to decline and eventually disappear—while demands for new federal resources to address such emerging challenges as homeland security and other issues become more compelling and pressing. Given the country’s longer-range fiscal imbalance, there is also a need to broaden the measures and focus of the federal budget process to accommodate these goals. The nation’s fiscal challenges escalate rapidly just beyond the 10-year budget projection period. As a result, new metrics and mechanisms are needed to better highlight the longer-term implications of existing programs and proposed new fiscal commitments.
Furthermore, in order to address emerging challenges, it is necessary to address both the retirement and health programs encumbering the nation’s fiscal future and to reexamine the base of existing programs—both discretionary programs and other entitlements—to free up resources to address new needs in a rapidly changing society. Such an examination should be cross-cutting and comprehensive in nature—all relevant policy tools and federal programs, including tax preferences, should be “on the table” in addressing such policy areas as low-income housing or health care financing and delivery. Although such a comprehensive reassessment will take time and may have to be addressed in phases, it is critically important that it occur. An extensive public education effort will be required to fully inform the American people about the long-term outlook under the current policy portfolio as well as the alternative choices that are available.
In the last decade, weapon systems have increasingly been developed, produced, and marketed internationally through government-sponsored cooperative development programs and a variety of industry linkages. These linkages include international subcontracting, joint ventures, teaming arrangements, and cross-border mergers and acquisitions. Also, the Department of Defense (DOD) and other agencies have shared certain highly classified information with allied governments. U.S. government policy allows foreign investment as long as it is consistent with national security interests. Foreign companies from many countries have acquired numerous U.S. defense companies and have legitimate business interests in them. Some of these foreign-owned companies are working on highly classified defense contracts, such as the B-2, the F-117, the F-22, and military satellite programs. Recognizing that undue foreign control or influence over management or operations of companies working on sensitive classified contracts could compromise classified information or impede the performance of classified contracts, DOD requires that foreign-owned U.S. firms operate under control structures known as voting trusts, proxy agreements, and special security agreements (SSA). Each of these agreements requires that the foreign owners select and DOD approve cleared U.S. citizens to be placed on the board of directors of the foreign-owned company to represent DOD’s interests by ensuring against (1) foreign access to classified information without a clearance and a need to know and (2) company actions that could adversely affect performance on classified contracts. In February 1995, the government issued the National Industrial Security Program Operating Manual (NISPOM) to replace the DOD Industrial Security Manual and various agencies’ industrial security requirements. 
The NISPOM’s section dealing with foreign ownership, control, or influence (FOCI) contains many provisions on voting trusts, proxy agreements, and SSAs similar to those in the DOD Industrial Security Regulation (ISR). The ISR will continue to apply in its current form until it is amended to reflect the NISPOM. Both the ISR and NISPOM require a company to obtain a facility clearance before it can work on a classified DOD contract and prescribe procedures for defense contractors to protect classified information entrusted to them. DOD’s policy provides that a firm is ineligible for a facility clearance if it is under FOCI. However, such a firm may be eligible for a facility clearance if actions are taken to effectively negate or reduce associated risks to an acceptable level. When the firm is majority foreign-owned, the control structures used to negate or reduce such risks include voting trusts, proxy agreements, and SSAs. The Defense Investigative Service (DIS) administers the DOD Industrial Security Program and is required to conduct compliance reviews of defense contractors operating under voting trusts, proxy agreements, and SSAs. This oversight function requires a DIS security inspection of the cleared facility every 6 months and an annual FOCI review meeting between DIS and the trustees of the foreign-owned firm. These reviews are aimed at ensuring compliance with special controls, practices, and procedures established to insulate the facility from foreign interests. Under a voting trust agreement, the foreign owners transfer legal title to the stock of the foreign-owned U.S. company to U.S. citizen trustees. Under the ISR and NISPOM, voting trusts must provide trustees with complete freedom to exercise all prerogatives of ownership and act independently from the foreign owners. 
Under the ISR and NISPOM, five actions may require prior approval by the foreign owner: the sale or disposal of the corporation’s assets or a substantial part thereof; pledges, mortgages, or other encumbrances on the capital stock of the corporation; corporate mergers, consolidations, or reorganizations; the dissolution of the corporation; or the filing of a bankruptcy petition. Under the ISR, the trustees were to act independently without consultation with, interference by, or influence from the foreign owners, but the NISPOM allows for consultation between the trustees and foreign owners. The proxy agreement is essentially the same as the voting trust, with the exception of who holds title to the stock. Under the voting trust, the title to the stock is transferred to the trustees. Under the proxy agreement, the owners retain title to the stock, but the voting rights of the stock are transferred to DOD-approved proxy holders. The powers and responsibilities of the proxy holders are the same as those of the trustees under a voting trust. From a security or control perspective, we saw no difference between the voting trust and the proxy agreement. DOD and company officials stated that from the companies’ perspective, the difference between these two agreements is largely a tax issue. The third type of control structure for majority foreign-owned firms is the SSA. Unlike a voting trust or proxy agreement, the SSA allows representatives of the foreign owner to be on the U.S. contractor’s board of directors. Such a representative, known as an inside director, does not need a DOD security clearance and can be a foreign national. In contrast, outside directors are U.S. citizens and must be approved by and obtain security clearances from DOD. Under DOD policy, outside directors are to ensure that classified information is protected from unauthorized or inadvertent access by the foreign owners and that the U.S.
company’s ability to perform on classified contracts is not adversely affected by foreign influence over strategic decision-making. Because SSAs allow the foreign owners a higher potential for control over the U.S. defense contractor than proxies or voting trusts, firms operating under SSAs are generally prohibited from accessing highly classified information such as Top Secret and Sensitive Compartmented Information. However, DOD can grant exceptions to this prohibition and can award contracts at these highly classified levels if it determines it is in the national interest. Before 1993, each voting trust, proxy agreement, and SSA had an attendant visitation agreement approved by DOD. The visitation agreement was to identify the representatives of the foreign owners allowed to visit the cleared U.S. firm, the purposes for which they were allowed to visit, the advance approval that was necessary, and the identity of the approval authority. In 1993, DOD eliminated visitation agreements as separate documents and incorporated visitation control procedures as a section of each voting trust, proxy agreement, and SSA. Voting trust agreements, proxy agreements, SSAs, and their attendant visitation agreements are negotiated between the foreign-owned company and DOD. Although DOD has boilerplate language that can be adopted, according to a DOD official, many cases have unique circumstances that call for flexible application of the ISR provisions. DOD’s flexible approach leads to negotiations that can result in company-specific agreements containing provisions that provide stronger or weaker controls. Generally, the foreign owners negotiate to secure the least restrictive agreements possible. DOD has approved more lenient visitation agreements and procedures over time. A DOD official explained that DOD’s flexible approach to FOCI arrangements and the resulting negotiations have probably caused the visitation controls to become relaxed.
Each negotiated visitation agreement that relaxed controls became the starting point for subsequent negotiations on new agreements as the foreign-owned companies’ lawyers would point to the last visitation agreement as precedent. We recognize the need to tailor the agreements to specific company circumstances and to permit international defense work, but the lack of a baseline set of controls in the agreements made DIS inspections very difficult, according to DIS inspectors. Almost all the foreign-owned U.S. firms we reviewed possessed unclassified information and technologies that are export-controlled by the Departments of State and Commerce. DOD deemed some of these technologies to be militarily critical, such as carbon/carbon material manufacturing technology and flight control systems technology. Many classified defense contracts involve classified applications of unclassified export-controlled items and technologies. The ISR and most agreements were not designed to protect unclassified export-controlled information. As such, DIS does not review the protection of unclassified export-controlled technology during its inspections of cleared contractors. In fact, the U.S. government has no established means to monitor compliance with and ensure enforcement of federal regulations regarding the transfer of export-controlled technical information. In light of what is known about the technology acquisition and diversion intentions of certain allies (see ch. 2) and the high degree of contact with foreign interests at foreign-owned U.S. defense contractors (see ch. 3), enforcement of export control regulations is important. The new NISPOM reflects this concern and requires trustees in future voting trusts, proxy agreements, and SSAs to take necessary steps to ensure the company complies with U.S. export control laws. As of August 1994, 54 foreign-owned U.S. defense contractors were operating under voting trusts, proxy agreements, or SSAs. 
Six of these companies operate under voting trusts, 15 under proxy agreements, and 33 under SSAs. These 54 firms held a total of 657 classified contracts, valued at $5.4 billion. The largest firm operating under these agreements (as measured by the value of the classified contracts it held) is a computer services company that operates under a proxy agreement and held classified contracts valued at $2.5 billion. The foreign owners of the 54 firms are from Australia, Austria, Canada, Denmark, France, Germany, Israel, Japan, the Netherlands, Sweden, Switzerland, and the United Kingdom. Currently, three of the companies are wholly or partially owned by foreign governments. Our review was conducted at the request of the former Chairman and Ranking Minority Member, Subcommittee on Oversight and Investigation, House Committee on Armed Services (now the House Committee on National Security). Our objective was to assess the structure of voting trusts, proxy agreements, and SSAs and their implementation in the prevention of unauthorized disclosure of classified and export-controlled information to foreign interests. We did not attempt to determine whether unauthorized access to classified or export-controlled data/technology actually occurred. Rather, we examined the controls established in the ISR, the draft NISPOM, and the agreements’ structures and the way they were implemented at each of 14 companies we selected to review. We discussed security issues involving foreign-owned defense contractors and information security with officials from the Office of the Deputy Assistant Secretary of Defense (Counterintelligence, Security Countermeasures and Spectrum Management); DIS; and information security officials from the Air Force, the Army, and the Navy. We also discussed the performance of Special Access and Sensitive Compartmented contracts by foreign-owned companies with an official from the office of the Assistant Deputy Under Secretary of Defense (Security Policy). 
To obtain information on the threat of foreign espionage against U.S. defense industries, we interviewed officials and reviewed documents from the Central Intelligence Agency (CIA), Defense Intelligence Agency (DIA), and Federal Bureau of Investigation (FBI). In selecting the 14 companies for our judgmental sample, we included five companies that were wholly or partially owned by foreign governments. We selected the nine additional foreign-owned firms on the basis of (1) the sensitivity of the information they held, (2) agreement types, (3) country of origin, and (4) geographic location. One company we reviewed operated under a voting trust, five operated under proxy agreements, and six operated under SSAs. In addition, one firm transitioned from an SSA to a proxy agreement during our review, and we found that another firm operated under a different control structure, a memorandum of agreement (MOA). Table 1.1 shows the countries of ownership and agreement type of the companies we reviewed. This judgmental sample reflects the distribution of agreement type and country of ownership of the 54 companies operating under voting trusts, proxy agreements, and SSAs. However, due to the small size of our sample and the nonrandom nature of its selection, the results of our review cannot be projected to the universe of all companies operating under these agreements. We were initially told that an aerospace company operated under an SSA, and we selected the company for our sample because some of its partial owners are owned by foreign governments. We subsequently learned that the company operated under a unique arrangement—an MOA. Because of the foreign government ownership component and the sensitivity of the information accessed by this aerospace company, we retained the company in our sample.
When we present statistics in our report on the number of companies operating under voting trusts, proxy agreements, and SSAs and the number of contracts they hold and the contracts’ value, this company is not included in those numbers. However, we include the company in the discussions of control structures and their implementation (see chs. 3 and 4). In those instances, we specifically refer to the MOA. We compared the agreements of the 14 companies to each other and to boilerplate agreements provided by DIS. We also examined the agreements’ provisions to determine if they met the requirements of the ISR, the regulation in force at the time. We examined the visitation approval procedures and standard practice procedures manuals at the companies we reviewed to determine how the companies controlled foreign visitors and their access to the cleared facilities. We also interviewed company management, security personnel, and the company trustees to determine how they implemented the agreements. To assess implementation of the agreements, we reviewed annual company implementation reports, board of directors minutes, defense security committee minutes, visitation logs, international telephone bills, and various internal company correspondence and memorandums. To assess trustee involvement, we interviewed trustees and reviewed visitation approvals, as well as trustee meeting minutes, which showed the frequency of meetings, individuals’ attendance records, and topics of discussion. We also discussed each company’s implementation of the agreements and its information security programs with the cognizant DIS regional management and inspectors and reviewed their inspection reports. During our review, we had limited access to certain information. Foreign-owned contractors were working on various contracts and programs classified as Special Access Programs or Sensitive Compartmented Information.
We were told by an official from the Office of the Assistant Deputy Under Secretary of Defense (Security Policy) that in some instances, it is not possible to acknowledge the existence of such contracts to individuals who are not specifically cleared for the program. As a result, we may not know of all foreign-owned firms involved in highly classified work. DOD provided written comments on a draft of this report. The complete text of those comments and our response is presented in appendix I. We performed our review from August 1992 through February 1995 in accordance with generally accepted government auditing standards. Some close U.S. allies actively seek to obtain classified and technical information from the United States through unauthorized means. Through its National Security Threat List program, the FBI National Security Division has determined that foreign intelligence activities directed at U.S. critical technologies pose a significant threat to national security. As we testified before the House Committee on the Judiciary in April 1992, sophisticated methods are used in espionage against U.S. companies. Unfortunately, the companies targeted by foreign intelligence agencies may not know—and may never know—that they have been targeted or compromised. “The risk in each of these situations is that foreign entities will exploit the relationship in ways that do not serve our overall national goals of preserving our technological advantages and curtailing proliferation. These goals generally include keeping certain nations from obtaining the technical capabilities to develop and produce advanced weapon systems and from acquiring the ability to counter advanced US weapon systems. In cases where U.S. national interests require the sharing of some of our capabilities with foreign governments, security safeguards must ensure that foreign disclosures do not go beyond their authorized scope.
Safeguards must also be tailored to new proliferation threats and applied effectively to the authorization of foreign investment in classified defense industry and the granting of access by foreign representatives to our classified facilities and information.” Contractors owned by companies and governments of these same allied countries are working on classified DOD contracts under the protection of voting trusts, proxy agreements, and SSAs. These companies perform on DOD contracts developing, producing, and maintaining very sensitive military systems, and some of them have access to the most sensitive categories of U.S. classified information. Contracts requiring access to classified information at the levels shown in table 2.1 have been awarded to foreign-owned U.S. defense contractors. The following are examples of some sensitive contract work being performed by the 14 foreign-owned U.S. companies we reviewed: development of computer software for planning target selection and aircraft routes in the event of a nuclear war (a Top Secret contract); maintenance of DOD’s Worldwide Military Command and Control System (WWMCCS; the contract was classified TS, SCI, and COMSEC because of the information the computer-driven communications system contains); production of signal intelligence gathering radio receivers for the U.S.; production of command destruct receivers for military missiles and National Aeronautics and Space Administration rockets (to destroy a rocket that goes off course); production of carbon/carbon composite Trident D-5 missile heat shields; and production of the flight controls for the B-2, the F-117, and the F-22. Some of the contracts these foreign-owned U.S. companies are working on are Special Access Programs. Due to the special access requirements of these contracts, the contractors could not tell us what type of work they were doing, what military system the work was for, or even the identity of the DOD customer.
Some of the contracts performed by companies we examined involve less sensitive technologies. For example, one company we visited had contracts requiring access to classified information because it cast valves for naval nuclear propulsion systems, and it needed classified test parameters for the valves. Another firm operating under an SSA is required to have a Secret-level clearance because it installs alarm systems in buildings that hold classified information. In addition to classified information, most of the 14 foreign-owned companies we reviewed possessed unclassified technical information and hardware items that are export-controlled by the State or Commerce Departments. DOD deemed many of these technologies to be militarily critical. Reports and briefings provided during 1993 by U.S. intelligence agencies showed a continuing economic espionage threat from certain U.S. allies. Eight of the 54 companies operating under voting trusts, proxy agreements, and SSAs and working on classified contracts are owned by interests from one of these countries. The following are intelligence agency threat assessments and examples illustrating this espionage. According to a U.S. intelligence agency, the government of Country A conducts the most aggressive espionage operation against the United States of any U.S. ally. Classified military information and sensitive military technologies are high-priority targets for the intelligence agencies of this country. Country A seeks this information for three reasons: (1) to help the technological development of its own defense industrial base, (2) to sell or trade the information with other countries for economic reasons, and (3) to sell or trade the information with other countries to develop political alliances and alternative sources of arms. According to a classified 1994 report produced by a U.S. government interagency working group on U.S.
critical technology companies, Country A routinely resorts to state-sponsored espionage using covert collection techniques to obtain sensitive U.S. economic information and technology. Agents of Country A collect a variety of classified and proprietary information through observation, elicitation, and theft. The following are intelligence agency examples of Country A information collection efforts: An espionage operation run by the intelligence organization responsible for collecting scientific and technological information for Country A paid a U.S. government employee to obtain U.S. classified military intelligence documents. Several citizens of Country A were caught in the United States stealing sensitive technology used in manufacturing artillery gun tubes. Agents of Country A allegedly stole design plans for a classified reconnaissance system from a U.S. company and gave them to a defense contractor from Country A. A company from Country A is suspected of surreptitiously monitoring a DOD telecommunications system to obtain classified information for Country A intelligence. Citizens of Country A were investigated for allegations of passing advanced aerospace design technology to unauthorized scientists and researchers. Country A is suspected of targeting U.S. avionics, missile telemetry and testing data, and aircraft communication systems for intelligence operations. It has been determined that Country A targeted specialized software that is used to store data in friendly aircraft warning systems. Country A has targeted information on advanced materials and coatings for collection. A Country A government agency allegedly obtained information regarding a chemical finish used on missile reentry vehicles from a U.S. person. According to intelligence agencies, in the 1960s, the government of Country B began an aggressive and massive espionage effort against the United States. The 1994 interagency report on U.S.
critical technology companies pointed out that recent international developments have increased foreign intelligence collection efforts against U.S. economic interests. The lessening of East-West tensions in the late 1980s and early 1990s enabled Country B intelligence services to allocate greater resources to collect sensitive U.S. economic information and technology. Methods used by Country B are updated versions of classic Cold War recruitment and technical operations. The Country B government organization that conducts these activities does not target U.S. national defense information such as war plans, but rather seeks U.S. technology. The motivation for these activities is the health of Country B’s defense industrial base. Country B considers it vital to its national security to be self-sufficient in manufacturing arms. Since domestic consumption will not support its defense industries, Country B must export arms. Country B seeks U.S. defense technologies to incorporate into domestically produced systems. By stealing the technology from the United States, Country B can have cutting-edge weapon systems without the cost of research and development. The cutting-edge technologies not only provide superior weapon systems for Country B’s own use, but also make these products more marketable for exports. It is believed that Country B espionage efforts against the U.S. defense industries will continue and may increase. Country B needs the cutting-edge technologies to compete with U.S. systems in the international arms market. The following are intelligence agency examples of Country B information collection efforts: In the late 1980s, Country B’s intelligence agency recruited agents at the European offices of three U.S. computer and electronics firms. The agents apparently were stealing unusually sensitive technical information for a struggling Country B company. This Country B company also owns a U.S. 
company operating under a proxy agreement and performing contracts for DOD classified as TS, SAP, SCI, and COMSEC. Country B companies and government officials have been investigated for suspected efforts to acquire advanced abrasive technology and stealth-related coatings. Country B representatives have been investigated for targeting software that performs high-speed, real-time computational analysis that can be used in a missile attack system. Information was obtained that Country B targeted a number of U.S. defense companies and their missile and satellite technologies for espionage efforts. Companies of Country B have made efforts, some successful, to acquire targeted companies. The motivation for Country C industrial espionage against the United States is much like that of Country B: Country C wants cutting-edge technologies to incorporate into weapon systems it produces. The technology would give Country C armed forces a quality weapon and would increase the weapon’s export market potential. The Country C government intelligence organization has assisted Country C industry in obtaining defense technologies, but not as actively as Country B intelligence has for its industry. One example of Country C government assistance occurred in the late 1980s, when a Country C firm wanted to enter Strategic Defense Initiative work. At that time, the Country C intelligence organization assisted this firm in obtaining applicable technology. The Country D government has no official foreign intelligence service. Private Country D companies are the intelligence gatherers. They have more of a presence throughout the world than the Country D government. However, according to the 1994 interagency report, the Country D government obtains much of the economic intelligence that Country D private-sector firms operating abroad collect for their own purposes. This occasionally includes classified foreign government documents and corporate proprietary data. 
Country D employees have been quite successful in developing and exploiting Americans who have access to classified and proprietary information. The following are examples of information collection efforts of Country D: Firms from Country D have been investigated for targeting advanced propulsion technologies, from slush-hydrogen fuel to torpedo target motors, and attempting to export these items through intermediaries and specialty shipping companies in violation of export restrictions. Individuals from Country D have been investigated for allegedly passing advanced aerospace design technology to unauthorized scientists and researchers. Electronics firms from Country D directed information-gathering efforts at competing U.S. firms in order to increase the market share of Country D in the semiconductor field. Intelligence community officials stated that they did not have indications that the intelligence service of Country E has targeted the United States or its defense industry for espionage efforts. However, according to the 1994 interagency report, in 1991 the intelligence service of this country was considering moving toward what it called “semi-overt” collection of foreign economic intelligence. At that time, Country E’s intelligence service reportedly planned to increase the number of its senior officers in Washington to improve its semi-overt collection—probably referring to more intense elicitation from government and business contacts. The main counterintelligence concern cited by one intelligence agency regarding Country E is not that its government may be targeting the United States with espionage efforts, but that any technology that does find its way into Country E will probably be diverted to countries to which the United States would not sell its defense technologies. The defense industry of this country is of particular concern in this regard. It was reported that information diversions from Country E have serious implications for U.S. 
national security. Large-scale losses of technology were discovered in the early 1990s. Primary responsibility for industrial security resides in a small staff of the government of Country E. It was reported that this limited staff often loses when its regulatory concerns clash with business interests. The intelligence agency concluded that the additional time needed to eradicate the diversion systems will consequently limit the degree of technological security available for several years. The question suggested by this situation is, if technology from a U.S. defense contractor owned by interests of Country E is transferred to Country E, will this U.S. defense technology then be diverted to countries to which the United States would not sell? Foreign ownership or control of U.S. firms performing classified contracts for DOD poses a special security risk. The risk includes unauthorized or inadvertent disclosure of classified information available to the U.S. firm. In addition, foreign owners could take action that would jeopardize the performance of classified contracts. To minimize the risks, the ISR and NISPOM require voting trusts and proxy agreements to insulate the foreign owners from the cleared U.S. defense firm or SSAs to limit foreign owners’ participation in the management of the cleared U.S. firm. The ISR also required visitation agreements to control visitation between foreign owners and their cleared U.S. firms. The new industrial security program manual does not address visitation control agreements or procedures. DOD eliminated separate visitation agreements in favor of visitation procedures in the security agreements themselves. In May 1992, a former Secretary of Defense testified before the House Committee on Armed Services that under proxy agreements and voting trusts, the foreign owners of U.S. 
companies working on classified contracts had “virtually no say except if somebody wants to sell the company or in very major decisions.” He indicated that for the purposes of the foreign parent company, proxy agreements and voting trusts are essentially “blind trusts.” Further, he testified that a number of companies were “functioning successfully” under SSAs. Of the three types of arrangements used to negate or reduce risks in majority foreign ownership cases, SSAs were the least restrictive. Accordingly, SSA firms pose a somewhat higher risk associated with classified work. The ISR and the NISPOM generally prohibit SSA firms from being involved in Top Secret and other highly sensitive contracts, but allow for exceptions if DOD determines they are in the national interest. SSA firms we reviewed were working on 47 contracts classified as TS, SCI, SAP, RD, and COMSEC. In addition, we observed that ISR-required visitation agreements permitted significant contact between the U.S. firms and the foreign owners. Unlike voting trusts and proxy agreements, which insulate foreign owners from the management of the cleared firm, SSAs allow foreign owners to appoint a representative to serve on the board of directors. Called an “inside director,” this individual represents the foreign owners and is often a foreign national. The inside director is to be counterbalanced by DOD-approved directors, called the “outside directors.” The principal function of the outside directors is to protect U.S. security interests. Inside directors cannot hold a majority of the votes on the board, but because of their connection to the foreign owners, their views about the company’s direction on certain defense contracts or product lines reflect those of the owners. Depending on the composition of the board, the inside director and the company officers on the board could possibly combine to outvote the outside directors.
In addition, unlike voting trusts and proxy agreements, the SSAs we examined allow the foreign owner to replace “any member of the [SSA company] Board of Directors for any reason.” DOD recently provided us with new boilerplate SSA language that will require DIS to approve the removal of a director. Foreign owners of SSA firms can also exercise significant influence over the U.S. companies they own in other ways. For example, at two SSA firms we examined, the foreign owners used export licenses to obtain unclassified technology from the U.S. subsidiary that was vital to the U.S. companies’ competitive positions. Officers of the U.S. companies stated that they did not want to share these technologies, but the foreign owners required them to do so. Subsequently, one of these U.S. companies faced its own technology in a competition with its foreign owner for a U.S. Army contract. Because of the additional risk previously mentioned, companies operating under SSAs are normally ineligible for contracts allowing access to TS, SAP, SCI, RD, and COMSEC information. However, during our review, 12 of the 33 SSA companies were working on at least 47 contracts requiring access to this highly classified information. Before June 1991, DOD reviewed an SSA firm to determine whether it would be in the national interest to allow the firm to compete for contracts classified TS, SCI, SAP, RD, or COMSEC. New guidance was issued in June 1991 requiring the responsible military service to make a national interest determination each time a highly classified contract was awarded to an SSA firm. We found only one contract-specific national interest determination had been written since the June 1991 guidance. According to DOD officials, the other 46 highly classified contracts performed by SSA companies predated June 1991 or were follow-on contracts to contracts awarded before June 1991. 
Since information on some contracts awarded to SSA companies is under special access restrictions, DOD officials may be authorized to conceal the contracts from people not specifically cleared for access to the program. We, therefore, could not determine with confidence if the requirement for contract-specific national interest determinations was carried out. One company performs on contracts classified as TS, SCI, SAP, RD, and COMSEC under an alternative arrangement called an MOA. The MOA (a unique agreement) was created in 1991 because the company has classified DOD contracts and, although foreign interests do not hold a majority of the stock, they own 49 percent of the company and have special rights to veto certain actions of the majority owners. Normally, under the ISR, minority foreign investment in a cleared U.S. defense contractor required only a resolution of the board of directors stating that the foreign interests will not require, nor be given, access to classified information. DOD did not consider the board resolution appropriate for this case, partially because of the board membership of the foreign owners and their veto rights over certain basic corporate decisions. The company board of directors consists of six representatives appointed by the U.S. owners and one representative for each of the four foreign minority interests. Any single foreign director can block any of 16 specified actions of the board of directors. These actions include the adoption of a company strategic plan or annual budget as well as the development of a new product that varies from the lines of business set forth in the strategic plan. In addition, any two foreign directors can block an additional 11 specified actions. These veto rights could give the foreign interests significantly more control and influence over the U.S. defense contractor in certain instances than would be permitted in an SSA. 
In 1991, DIS objected to an agreement less stringent than an SSA because of the veto rights of the foreign directors and because, unlike an SSA, an MOA does not require any DOD-approved outside members on the board of directors. However, the Office of the Under Secretary of Defense for Policy determined that the company would not be under foreign domination and that the MOA was a sufficient control. DOD reexamined the MOA during a subsequent (1992) foreign investment in the company and made some modifications. Although the MOA does not provide for outside members on the board, it does require DOD-approved outside members on a Defense Security Committee to oversee the protection of classified and export-controlled information. The first version of the MOA did not give the outside security committee members the right to attend any board of directors meetings. Under the revised (1992) version of the MOA, the outside security committee members still do not have general rights to attend board meetings; however, their attendance at board meetings is required if the foreign interests are to exercise their veto rights. Also, the first version of the MOA did not require any prior security committee approval for representatives of the foreign interests to visit the cleared U.S. defense contractor. The newer version requires prior approval when the visits concern performance on a classified contract. The ISR stated: “In every case where a voting trust agreement, proxy agreement, or special security agreement is employed to eliminate risks associated with foreign ownership, a visitation agreement shall be executed . . .” “The visitation agreement shall provide that, as a general rule, visits between the foreign stockholder and the cleared U.S.
firm are not authorized; however, as an exception to the general rule, the trustees, may approve such visits in connection with regular day-to-day business operations pertaining strictly to purely commercial products or services and not involving classified contracts.” The visitation agreements are to guard against foreign owners or their representatives obtaining access to classified information without a clearance and a need to know. At all 14 companies we reviewed, visitation agreements permitted the foreign owners and their representatives to visit regarding military and dual-use products and services. The visitation agreements permitted visits to the U.S. company (1) in association with classified contracts if the foreign interests had the appropriate security clearance and (2) under State or Commerce Department export licenses. The large number of business transactions between the U.S. defense contractors and their foreign owners granted representatives of the foreign owners frequent entry to the cleared U.S. facilities. Eight of the 14 firms we reviewed had contractual arrangements with their foreign owners that led to a high (often daily) degree of contact. In one case, the U.S. company sold and serviced equipment produced by the foreign firm, so the two firms had almost continual contact at the technician level to obtain repair parts and technical assistance. During a 3-month period in 1993, this company approved 167 extended visit authorizations. At one SSA firm we reviewed, 236 visits occurred between the U.S. firm and representatives of the foreign owners over a 1-year period, averaging about 7 days per visit. At a proxy company, there were 322 approved requests for contact with representatives of the owners during a 1-year period; 94 of the requests were blanket requests for multiple contacts over the subsequent 3-month period. Not all foreign-owned defense contractors had this degree of contact with representatives of their foreign owners. 
One SSA firm had only 44 visits with representatives of its foreign owners during a 1-year period. Some visitation agreements permitted long-term visits to the cleared U.S. companies by employees of the foreign owners. Five companies we reviewed had employees of the foreign owners working at the cleared U.S. facilities. In a number of these cases, they were technical and managerial staff working on military and dual-use systems and products under approved export licenses. One company covered by a proxy agreement had a foreign national technical manager from the foreign parent firm review the space and military technologies of the U.S. defense contractor to determine if there were opportunities for technical cooperation with the foreign parent firm. At another firm we reviewed, representatives of the foreign partners were permanently on site. At yet another company, a foreign national employee of the foreign parent company worked on a computer system for the B-2 bomber and had access to export-controlled information without the U.S. company obtaining the required export license. Post-visit contact reports are the primary means for DIS and the trustees to monitor the substance of contacts between the foreign-owned U.S. contractor and representatives of its foreign owners. Such records should be used to determine if the contact with representatives of the foreign owners was appropriate and in accordance with the ISR and the visitation agreement. Some visitation agreements do not require employees of the U.S. firm to document and report the substance of the discussions with employees of the foreign parent firm. At three of the firms we reviewed, the only records of contact between employees of the U.S. company and the foreign owners were copies of the forms approving the visits. However, at other foreign-owned U.S.
defense contractors, post-visit contact reports were available for DIS to review when it inspected the firms and when DIS held its annual agreement compliance review with the foreign-owned companies. The ISR, the NISPOM, and most of the visitation agreements we reviewed do not require telephonic contacts between the U.S. defense contractor and representatives of its foreign owners to be controlled and documented. One of the firms covered by a proxy agreement documented 1,912 telephonic contacts between the U.S. company and representatives of its foreign owners for a 1-year period. After examining telephone bills at other companies, we found that 1 SSA company had over 550 telephone calls to the country of the foreign owners in 1 month. Company officials said these calls were primarily to representatives of the foreign owners. In contrast, our review of telephone bills at another SSA company showed only 47 telephone calls to the country of the foreign owners during 1 month in 1993. If an individual intends to breach security, it would be easier to transfer classified or export-controlled information by telephone, facsimile, or computer modem than it would be in person. Documenting telephone contacts would not prevent such illegal activity, but might make it easier to detect. During our review, DIS also recognized this and asked companies to establish a procedure for documenting telephonic contacts with representatives of their foreign owners. We were initially told that the NISPOM section dealing with foreign ownership, control, and influence would replace the FOCI section of the ISR. The new manual does not address visitation control agreements or procedures to restrict visitation between the cleared U.S. defense contractor and representatives of its foreign owners. Instead, it appears to allow unlimited visitation. However, in its comments on our report, DOD stated that the ISR will be retained and revised to reflect the NISPOM.
DOD also said that the revised ISR will require visitation approval procedures, but instead of separate visitation agreements, these procedures will be incorporated into each voting trust, proxy agreement, and SSA. Under the ISR and the new NISPOM, majority foreign-owned facilities cleared to perform classified contracts must enter into agreements with DOD to negate, or at least reduce to an acceptable level, the security risks associated with foreign ownership, control, and influence. Voting trusts and proxy agreements are designed to insulate cleared U.S. defense firms from their foreign owners. SSAs limit the foreign owners’ participation in company management. None of these security arrangements is intended to deny U.S. defense contractors the opportunity to do business with their foreign owners. However, the frequent contact engendered by legitimate unclassified business transactions can heighten the risk of unauthorized access to classified information. Also, existing visitation agreements and procedures permit a high degree of contact. Often this contact is at the technical and engineering level, where U.S. classified information could most easily be compromised. The draft NISPOM does not address visitation controls, but DOD has stated that a visitation approval procedures section will be included in the revised ISR. At a few of the 14 companies we reviewed, DOD-approved trustees were actively involved in company management and security oversight. At most of the companies, however, the trustees did little to protect classified or export-controlled information from access by foreign owner representatives. At proxy agreement companies, we observed cases where foreign owners were exercising more control than the ISR allowed and foreign-owned U.S. defense firms whose independence was degraded because of their financial reliance on the foreign owners. We also observed that some DOD-approved trustees appeared to have conflicts of interest.
Finally, DIS did not tailor its inspections of these foreign-owned facilities to specifically address FOCI issues or the implementation of the control agreements, but has recently promulgated new inspection guidelines to address these issues. Some DOD-approved trustees were more actively involved in management and security oversight than others. For example, at some companies, the trustees retained, and did not delegate, their responsibility for approving all visits by representatives of the foreign owners as required in the visitation agreements. The more active trustees also reviewed post-visit contact reports and interviewed a sample of technical staff who met with the foreign owners’ representatives to ascertain the substance of their discussions, questioned potentially adverse business conditions caused by arrangements with the foreign parent, and attended business meetings at the company more often than quarterly. At most of the companies we reviewed, however, the trustees (or proxy holders or outside directors) did little to ensure that company management was not unduly influenced by the foreign owners or that the control structures in the security agreements were being properly implemented. Instead, they viewed their role as limited to ensuring that policies exist within the company to protect classified information. At six of the firms we reviewed, trustee monitoring of the companies’ security implementation and business operations ranged from limited to almost nonexistent. In only two of the firms did the trustees appear to be actively involved in company management and security oversight. The need for trustee oversight of the business management of foreign-owned companies was highlighted at one SSA firm we examined. At this company, the foreign owners exercised their SSA powers to replace two successive director/presidents of the U.S. company. The first claimed he was terminated because he attempted to enforce the SSA.
The second president contested his dismissal because the outside directors were not given prior notice of the owners’ intent to replace him. The owners stated that in both cases, poor business performance was the cause for termination and, in these cases, the outside directors agreed. Nevertheless, outside directors need to remain actively involved in monitoring the companies’ business management to ensure that foreign owners exercise these powers only for legitimate business reasons and not for reasons that could jeopardize classified information and contracts. Implementation and monitoring of the information security program were usually left to the facility security officer (FSO), an employee of the foreign-owned U.S. company. At the companies we reviewed, a variety of personnel served as FSO, including a general counsel, secretaries, and professional security officers. In any case, the FSO often performed the administrative functions of security and lacked the knowledge to determine the proper parameters for the substance of classified discussions, given a cleared foreign representative’s need to know. This limitation and the FSO’s potential vulnerability as an employee of the foreign-owned company pose a risk without active trustee involvement. Another potential problem associated with trustees relinquishing implementation and monitoring responsibilities to the FSO was illustrated at an SSA firm we reviewed. At the SSA firm, the FSO wanted to establish a new security procedure, but was overruled by the president of the foreign-owned U.S. defense company. In this instance, the FSO had enough confidence in the outside directors to go to them and complain. The outside directors agreed with the need for the new control and required its implementation. In this case, the outside directors led the officials of the foreign-owned firm to believe that the new security measure was an outside director initiative.
If the circumstances and individuals had been different, the FSO might have lacked the confidence to seek the assistance of the outside directors. At the foreign-owned companies we reviewed, trustees were paid between $1,500 and $75,000 a year. In return for this compensation, the usual trustee involvement was attendance at four meetings annually. Typically, one of the trustees is designated to approve requests for visits with representatives of the foreign owners. This additional duty involves occasionally receiving, reviewing, and transmitting approval requests by facsimile machine. The ISR requires that a trustee approve visitation requests. However, in most of the firms we reviewed, trustees only directly approved visits between senior management of the U.S. firm and the foreign parent firm. The FSO approved visits below this senior management level, including visits with the technical and engineering staff; the trustees only reviewed documentation of these visits during their quarterly trustee meetings, if at all. In addition, even where reports were required, most post-visit contact reports lacked the detail the trustees or DIS needed to determine what was discussed between the foreign-owned company and the owners’ representatives. Trustee inattention to contact at the technical level is of particular concern, since that is where most of the U.S. defense contractor’s technology is located, not in the board room where senior management officials are found. Trustees rarely visited or toured the foreign-owned company’s facility to observe the accessibility of classified or export-controlled information, except during prearranged tours at the time of their quarterly meetings. The trustees also rarely interviewed managerial and technical staff to verify the level and nature of their contact with employees of the foreign parent firm. Government officials suggested that trustees at two companies involve themselves in a higher degree of monitoring.
Some flatly refused and stated that they have held important positions in government and industry and feel that it is not their role to personally provide such detailed oversight. The ISR requires that proxy holders and trustees of voting trusts “shall assume full responsibility for the voting stock and for exercising all management prerogatives relating thereto” and that the foreign stockholders shall “continue solely in the status of beneficiaries.” However, as an example of minimal proxy involvement, at one proxy company the three proxy holders only met twice a year. Only one of the three proxy holders was on the company’s board of directors, and the board had not met in person for 4 years. All board action was by telephone, and the board’s role was limited to electing company officers. The proxy holders were minimally involved in selecting and approving these company officials. The parent firm selected the current chief executive officer (CEO) of the company and the proxy holders affirmed this selection after questioning the parent firm about the individual’s background. The FSO, rather than the proxy holders as required by the ISR, approved all visits to this firm by employees of the foreign parent. At the company we reviewed operating under an MOA, the Defense Security Committee consists of four company officials and the three outside members. These outside members visited the company only for the quarterly committee meetings. The president of the company, who is also the security committee chairman, set the meeting agenda and conducted the meetings. Further, his presentations to the outside members usually focused on current and future business activities rather than security matters. Any plant tours the outside members received were prearranged and concurrent with the quarterly meetings. There were no off-cycle visits to the company to inspect or monitor security operations.
To eliminate the risks associated with foreign control and influence over foreign-owned U.S. defense contractors, the ISR requires that voting trust and proxy agreements “unequivocally shall provide for the exercise of all prerogatives of ownership by the trustees with complete freedom to act independently without consultation with, interference by, or influence from foreign stockholders.” The ISR further provides that “the trustees shall assume full responsibility for the voting stock and for exercising all management prerogatives relating thereto in such a way as to ensure that the foreign stockholders, except for the approvals just enumerated, [sale, merger, dissolution of the company; encumbrance of stock; filing for bankruptcy] shall be insulated from the cleared facility and continue solely in the status of beneficiaries.” However, at one of the proxy firms we reviewed, the foreign owners acted in more than the status of beneficiaries. The proxy firm’s strategic plan and annual budget were regularly presented to the foreign owners for review. At least once the foreign parent firm rejected a strategic plan and indicated that it would continue to object until the plan specified increased collaboration between the proxy firm and the foreign parent firm. At another time, the foreign owners had employees of this U.S. firm represent them in an attempt to acquire another U.S. aerospace firm more than 10 times the size of the proxy firm. Although decisions on mergers are within the rights of the foreign owners, during this acquisition effort, officers and employees of the U.S. defense contractor were operating at the direction of the foreign owners. In this case, because the parent firm directed staff of the proxy firm, it clearly acted as more than a beneficiary, the role to which foreign owners are limited under the ISR. Another proxy firm has a distribution agreement with its foreign owners that restricts the proxy firm to marketing electronic equipment and services to the U.S. government.
In addition, the agreement allows the proxy firm to service only hardware that is used on classified systems. Although this distribution agreement was approved by DIS at the time of the foreign acquisition, it controls the strategic direction of the proxy firm. The proxy firm reported to DIS that it is important for the survival of the U.S. company to be able to pursue business opportunities that are currently denied by the distribution agreement. The ISR states that a company operating under a proxy agreement “shall be organized, structured, and financed so as to be capable of operating as a viable business entity independent from the foreign stockholders.” During our review, we saw examples of firms that depended on their foreign owners for financial support or had business arrangements with the foreign owners that degraded the independence of the proxy firm. The president of one company operating under a proxy agreement told us that his company was basically bankrupt. His company is financed by banks owned by the government of the country where the parent company is incorporated. The company’s foreign parent firm guarantees the loans, and two of the government banks are on the parent firm’s board of directors. The foreign owners paid several million dollars to the U.S. company to relocate one of its divisions. According to officials of the U.S. company, they could not otherwise have afforded such a move, nor could they have obtained bank loans on their own. Another proxy firm had loans from the foreign owners that grew to exceed the value of the proxy firm. One proxy holder said the company would probably have gone out of business without the loans. Even with the loans, the company’s financial position was precarious. It was financially weak, could not obtain independent financing, and was considerably burdened by making interest payments on its debt to the foreign owners.
During our review, a DIS official acknowledged that DIS should have addressed the risk imposed by this indebtedness. Under the ISR provisions, voting trustees and proxy holders “shall be completely disinterested individuals with no prior involvement with either the facility or the corporate body in which it is located, or the foreign interest.” At one of the companies we reviewed, a proxy holder was previously involved as a director of a joint venture with the foreign owners. These foreign owners later nominated this individual to be their proxy holder. He withheld the information about his prior involvement from DIS at the time he became a proxy holder. After DIS became aware of this relationship, it concluded that this individual was ineligible to be a proxy holder and should not continue in that role. Thereafter, the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence wrote to the company about irregularities in proxy agreement implementation, such as allowing the foreign owners prerogatives not permitted under the proxy agreement. However, he did not address the appearance of a conflict of interest, and the individual has remained as a proxy holder. This same proxy holder is also now the part-time CEO of the foreign-owned U.S. defense firm and received an annual compensation of approximately $272,000 (as compared to the $50,000 proxy holder stipend) for an average of 8 days’ work per month in his dual role of CEO and proxy holder. This appears to be a second conflict of interest: as CEO, his fiduciary duty and loyalty to the foreign-owned company take primacy; as proxy holder, his primary responsibility is to protect DOD’s information security interests. In addition, at this company, the conflict between the proxy holders’ responsibility to DOD and their perceived fiduciary responsibility was illustrated during a DIS investigation into possible violations of the proxy agreement.
Citing their fiduciary responsibility, the proxy holders refused to allow DIS investigators to interview employees without company supervision. The Assistant Secretary of Defense for Command, Control, Communications, and Intelligence found this action to be contrary to the firm’s contractual obligations under its security agreement with DOD. The company just discussed is not the only one where a proxy holder also holds the title of CEO. At another firm, the proxy holder’s salary as CEO is approximately $113,000 (as compared to the $22,000 proxy holder stipend). Again, there appears to be a conflict of interest between the CEO’s fiduciary duty and loyalty to the foreign-owned company and his responsibility to protect DOD’s information security interests. At another proxy firm, the lead proxy holder owns a consulting firm that has a contract with the foreign-owned U.S. company. In this case, there appears to be a conflict of interest because as proxy holder, his primary responsibility is to protect the information security interests of DOD, but as a consultant to the foreign-owned firm, it is in his interest to please the foreign-owned company. At another firm, the agreement requires that the outside members of the security committee be independent of the foreign investors and their shareholders. The French government owns 12.25 percent of this U.S. company. Even though the outside members of the security committee are to protect classified and export-controlled information from this foreign government, one outside member created the appearance of a conflict of interest by representing a French government-owned firm before DOD in its efforts to buy another cleared U.S. defense contractor. This outside member also created the appearance of a conflict of interest when his consulting firm became the Washington representative for a French government-owned firm in its export control matters with the State Department.
Finally, the ISR does not expressly require that outside directors serving under an SSA comply with the independence standards applicable to voting trustees and proxy holders. The reason for this omission is not clear. However, all of the SSAs we reviewed stated that individuals appointed as outside directors can have “no prior employment or contractual relationship” with the foreign owners. Since the outside directors perform the same function as voting trustees and proxy holders in ensuring the protection of classified information and the continued ability of the cleared U.S. company to perform on classified contracts, it seems reasonable that they should also be disinterested parties when named to the board and should remain free of other involvement with the foreign owners during their period of service. DIS inspectors told us that their inspections of foreign-owned U.S. defense contractors vary little from the type of facility security inspections they do at U.S.-owned facilities. Their inspections concentrated on such items as classified document storage, amount and usage of classified information, and the number of cleared personnel and their continuing need for clearances. During the time of our review, DIS developed new guidelines for inspections of foreign-owned firms by its industrial security staff to specifically address foreign ownership issues. They call for the inspectors to examine issues such as changes to the insulating agreement, business relationships between the U.S. company and its foreign owners, foreign owner involvement in the U.S. company’s strategic direction, the number and nature of contacts with representatives of the foreign owners, and the number of foreign staff working at the facility. These guidelines were promulgated in September 1994. DIS is beginning to implement the new inspection guidelines. 
According to DIS officials at the regional and field office levels, before they use the new guidelines, they must educate the inspection staff on foreign ownership issues as well as how the issues should be addressed during their inspections. They also said that implementing these new inspection procedures would probably double the length of an inspection at the foreign-owned facilities. Currently, DIS must inspect each cleared facility twice a year, but it is having difficulty maintaining this inspection schedule. Industrial security inspectors are responsible for around 70 cleared facilities, and inspections at some larger facilities take a number of days. Doubling the inspection time at the foreign-owned facilities under the new guidelines might require some realignment of DIS resources. According to DOD officials, DIS inspections will occur no more often than annually under the NISPOM.

We recommend that the Secretary of Defense develop and implement a plan to improve trustee oversight and involvement in the foreign-owned companies and to ensure the independence of foreign-owned U.S. defense contractors and their trustees from improper influence from the foreign owners. As part of this effort, the Secretary should make the following changes in the implementation of the existing security arrangements and under the National Industrial Security Program.

1. Visitation request approvals: The trustees should strictly adhere to the ISR visitation agreement provision that requires them to approve requests for visits between the U.S. defense contractor and representatives of its foreign owners. This duty should not be delegated to officers or employees of the foreign-owned firm.

2. Trustee monitoring: The trustees should be required to ensure that personnel of the foreign-owned firm document and report the substance of the discussions they hold with personnel of the foreign parent firm.
The trustees should review these reports and ensure that the information provided is sufficient to determine what information passed between the parties during the contact. The trustees should also select at least a sample of contacts and interview the participants of the foreign-owned firm to ensure that the post-contact reports accurately reflect what transpired.

3. Trustee inspections: To more directly involve trustees in information security monitoring, the trustees should annually supervise an information security inspection of each of the cleared facilities. The results of these inspections should be included in the annual report to DIS.

4. FSO supervision: To insulate the FSO from influence by the foreign-owned firm and its foreign owners, the trustees should be empowered and required to review and approve or disapprove the selection of the FSO and all decisions regarding the FSO’s pay and continued employment. The trustees should also supervise the FSO to ensure an acceptable level of job performance, since trustees are charged with monitoring information security at the U.S. defense contractor.

5. Financial independence: To monitor the financial independence of the foreign-owned firm, the annual report to DIS should include a statement on any financial support, loans, loan guarantees, or debt relief from or through the foreign owners or the government of the foreign owners that have occurred during the year.

6. Trustee independence: To help avoid conflicts of interest for the trustees, require them to certify at the time of their selection, and then annually, that they have no prior or current involvement with the foreign-owned firm or its foreign owners other than their trustee position. This certification should include a statement that they are not holding and will not hold positions within the foreign-owned company other than their trustee position.
It should be expressly stated that these independence standards apply equally to voting trustees, proxy holders, and outside directors of firms under SSAs.

7. Trustee duties: The selected trustees should be required to sign agreements acknowledging their responsibilities and the specific duties required to carry out those responsibilities, including those in numbers 1 through 4. The agreement should provide that DOD can require the resignation of any trustee if DOD determines that the trustee failed to perform any of these duties. This agreement should ensure that the trustees and the government clearly understand what is expected of the trustees to perform their security roles.

DOD stated that it generally agreed with the thrust of our recommendations in this report, but did not agree that the specific actions we recommended were necessary, given DOD efforts to address the issues involved. DOD said it had addressed these issues through education, advice, and encouragement of trustees to take the desired corrective actions. We and DOD have both seen instances in which this encouragement has been rejected. Because of the risk to information with national security implications, we believe that requiring, rather than encouraging, the trustees to improve security oversight would be more effective. Therefore, we continue to believe our recommendations are valid and should be implemented to reduce the security risks. DOD’s comments and our evaluation are presented in their entirety in appendix I. The following are GAO’s comments on the Department of Defense’s (DOD) letter dated April 14, 1995.
“As a general rule, visits between the Foreign Interest and the cleared Corporation are not authorized; however, the Proxy Holders may approve visits in connection with regular day-to-day business operations pertaining strictly to purely commercial products or services and not involving classified contracts or executive direction or managerial matters.” “As a general rule, visits between representatives of the Corporation and those of any Foreign Interest, are not authorized unless approved in advance by the designated Proxy Holder.” “Most agreements are silent on the authority of the DOD to terminate the arrangement or to dismiss a Proxy Holder, Trustee or outside director. While DIS is normally a party to Special Security Agreements, it is not a party to proxy or trust agreements and therefore lacks standing to intercede when appropriate.” While DIS is a party to SSAs, if faced with outside directors who are not performing their security duties, the only means for DIS to force corrective action would be to terminate the agreement, thereby causing the company to lose its clearance, and halting all the company’s work on classified contracts. Our recommendation is a more moderate way of removing a nonperforming trustee than revoking a company’s clearance and terminating its classified contracts. We modified our recommendation in recognition of DOD’s comment that the shareholder must remove a trustee director. James F. Wiggins Davi M. D’Agostino Peter J. Berry John W. Yaglenski Eric L. Hallberg Deena M. El-Attar Cornelius P. Williams Robert R. Tomcho The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. 
Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. | Pursuant to a congressional request, GAO reviewed security arrangements used to protect sensitive information from foreign-owned U.S. defense contractors that perform on classified Department of Defense (DOD) contracts. GAO found that: (1) security arrangements are intended to protect foreign-owned U.S. defense contractors from undue foreign control and to prevent foreign owners' access to classified information; (2) there are 54 foreign-owned U.S. defense contractors operating under security arrangements, such as voting trusts, proxy agreements, and special security agreements; (3) although special security agreement companies are not permitted access to the highest levels of classified information due to the risk of foreign control, DOD authorized such access to 12 of 33 special security agreement companies; (4) each foreign-owned U.S. defense contractor must have a visitation agreement with its parent company to protect against foreign owners' unauthorized access to classified information; (5) individuals contacted by the parent company are required to report on the technical discussions that took place under visitation agreements; (6) U.S. citizens are selected for the boards of directors of foreign-owned U.S. 
defense firms to protect against undue foreign control and unauthorized access to classified information; and (7) most trustees have limited oversight roles and do not actively check on the implementation of security policies or engage in management issues, and some appear to have conflicts of interest.
Charities are organizations established to address the needs of the poor or distressed and other social welfare issues. Federal, state, and private agencies and the American public monitor how well charities are meeting these needs. Although not all charities have a disaster relief focus, historically charities have adapted their work as needed to the immediate or longer-term needs of disaster survivors. In these disaster aid efforts, charities may cooperate with FEMA. Though charities and FEMA have a substantial role in providing disaster aid, people affected by disasters may also pursue other government or private sources of relief. Charities represent a substantial presence in American society. Internal Revenue Code Section 501(c) establishes 27 categories of tax-exempt organizations; the largest number of such organizations falls under Section 501(c)(3), which recognizes charitable organizations, among others. The term charitable, as defined in the regulations that implement Section 501(c)(3), includes assisting the poor, the distressed, or the underprivileged; advancing religion, education, or science; erecting or maintaining public buildings, monuments, or works; eliminating prejudice and discrimination; defending human and civil rights; or combating community deterioration and juvenile delinquency. An organization must apply for IRS recognition as a tax-exempt charity that strives to meet one or more of these purposes. In general, a charity serves these broad public purposes, rather than specific private interests. By 2000, IRS had recognized 1.35 million tax-exempt organizations under Section 501(c), of which 820,000 (60 percent) were charities. At the end of 1999, the assets of Section 501(c)(3) organizations approached $1.2 trillion and their annual revenues approached $720 billion. Charities pay no income taxes on contributions received, but they can be taxed on income generated from unrelated business activities. 
Federal agencies, state charity officials, other nonprofit organizations, and the general public may all participate in overseeing charitable operations to protect the public interest. At the federal level, IRS has primary responsibility for recognizing tax-exempt status and determining compliance with tax laws, such as those governing the use of charitable funds. Notwithstanding these powers, IRS is not generally responsible for overseeing how well a charity spends its funds or meets its charitable purpose. Despite the federal government’s significant indirect subsidy of charities through their tax-exempt status and the allowance of charitable deductions by individuals, it has a fairly limited role in monitoring charities, with states providing the primary oversight of charities through their attorneys general and/or charity offices. These officials maintain registries of charities and professional fundraisers, including financial reports of registrants. They also monitor the solicitation and administration of charitable assets. Attorneys general and state charity officials have extensive power to investigate charities’ compliance with state law and can correct noncompliance through the courts. Although local law enforcement agents, such as district attorneys, may assist the state with investigations of charities, they tend to focus on the prosecution of the criminal cases of individuals who defraud charities. Further oversight of charities’ efficiency and effectiveness is likely to be carried out by the private sector, including “charity watchdogs,” and the American public. Watchdogs such as the Better Business Bureau’s Wise Giving Alliance and publications such as The Chronicle of Philanthropy are the public’s primary sources for information on charitable organizations and fund-raising. The questions and concerns people bring to the attention of watchdogs and government officials are often the key motivators for initiating investigations.
Charities have historically played a role in the nation’s response to disasters. First, some charities, for example, the American Red Cross or the Salvation Army, are equipped to arrive at a disaster scene and provide mass care, including food, shelter, and clothing, and in some circumstances, emergency financial assistance to affected persons. Next, depending on the extent and nature of devastation to a community and charities’ typical services and capacities, some charities are best structured to provide longer-term assistance, such as job training or mental health counseling. Finally, new charities may form post-disaster to address the needs of all survivors or specific population groups. For example, after the September 11 attacks, charities were established to serve the surviving families of restaurant workers and firefighters who were killed. FEMA is the lead federal agency for responding to disasters and may link with charitable organizations to provide assistance. According to FEMA regulations, in the event of a presidentially declared disaster or emergency, such as September 11, FEMA is required to coordinate relief and assistance activities of federal, state, and local governments; the American Red Cross; the Salvation Army; and the Mennonite Disaster Service; as well as other voluntary relief organizations that agree to operate under FEMA’s direction. Although charities are expected to be among the first agencies to provide assistance to those affected, in the event of some natural disasters, FEMA may anticipate need and be the first to respond. FEMA can provide a range of assistance to individual disaster survivors. In a natural disaster, such as a hurricane or flood, the bulk of FEMA’s individual assistance program money tends to be given to individuals whose residences have been damaged. September 11 presented a different challenge for the agency: few people had damage to their homes, but many needed unemployment assistance and help paying their mortgage or rent.
Though FEMA and charities provide key resources to survivors of disasters, a range of additional aid may be available for those affected by the September 11 attacks. Federal sources of aid to individuals include Social Security, Medicaid, Disaster Unemployment, and Department of Justice benefits for fallen police officers and firefighters. In addition, the Congress has set up a Federal Victim Compensation Fund for individuals injured and families of those who died in these attacks. See appendix I for more information about this fund. From states, survivors may obtain State Crime Victim Compensation Board funds, unemployment insurance, or workers’ compensation. Some families may also be able to access private insurance or employer pensions. While it may be difficult to tally precisely the total amount of funds collected, 35 of the larger charities have raised almost $2.7 billion and distributed about 70 percent of the money. Distribution rates vary widely among the charities, in part, because some were established to provide immediate assistance while others were established to provide assistance over the longer term. Charities used the money they collected to provide cash and a broad range of services to people directly and indirectly affected, although questions about how best to use the funds as well as service delivery difficulties complicated charities’ responses. Thirty-five of the larger charities have raised almost $2.7 billion as of October 31, 2002, to aid the survivors of the terrorist attacks. (See table 1.) These include a range of organizations, including large, well-established organizations such as the American Red Cross and the Salvation Army and other organizations created specifically in response to September 11, such as the Twin Towers Fund. While the total amount raised may increase over time, many organizations are no longer actively collecting funds. 
For example, The September 11th Fund stopped soliciting donations in November 2001 and in January 2002 asked the public to stop sending contributions to the fund. The large number of charities collecting funds for September 11 complicates the efforts to determine a precise count of the total funds raised. The Metro New York Better Business Bureau Foundation has identified 470 September 11-related charities, and the IRS estimates that about 600 charities are involved in September 11-related fundraising. The IRS took steps to quickly grant tax-exempt status to about half of these 600 charities after September 11. While some of these new charities appear to be smaller, event-specific fundraising efforts, such as the Hike of Hope, others, like the Twin Towers Fund, which raised $205 million, became major charities. While any one charity will have information on its funding and services, the charitable sector as a whole generally does not have reporting mechanisms in place to track funds across entities or for any one event. Some tracking efforts are under way, however. For example, the Metro New York Better Business Bureau Foundation recently surveyed the 470 September 11-related charities it identified, and 270 responded to its request for fund information. Further complicating a precise tally of funds are the interfund transfers that occurred among charities. For example, the Americares Foundation raised $5.3 million in its Heroes Fund and transferred it to the Twin Towers Fund to be distributed. Likewise, the United Jewish Federation of New York distributed $5.4 million in grants it received from the New York Times 9/11 Neediest Fund and the United Jewish Communities of North America. The Metro New York Better Business Bureau Foundation estimates that more than $400 million of the charitable aid it is tracking represents duplicate listings of money raised by grant-making organizations and the direct service providers they are funding.
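The double counting described above can be sketched numerically. The following is an illustrative sketch with hypothetical charity names and amounts (not figures from this report): naively summing each organization's reported receipts counts an interfund grant twice, once at the grant-maker and again at the service provider, so a cleaner tally nets out known transfers.

```python
# Hypothetical figures for illustration only.
# Each entry: charity name -> dollars it reported raising (in millions).
reported_raised = {
    "Grantmaker Fund": 100.0,    # raised from the public
    "Service Provider A": 60.0,  # includes a 40.0 grant from Grantmaker Fund
    "Service Provider B": 30.0,  # includes a 10.0 grant from Grantmaker Fund
}

# Interfund transfers: (grantor, grantee, amount). Each one is reported
# by the grantor (as money raised) and again by the grantee (as a receipt).
transfers = [
    ("Grantmaker Fund", "Service Provider A", 40.0),
    ("Grantmaker Fund", "Service Provider B", 10.0),
]

def naive_total(raised):
    # Overstates the tally: transferred dollars are counted twice.
    return sum(raised.values())

def deduplicated_total(raised, transfers):
    # Net out each known transfer once to count each donated dollar once.
    return sum(raised.values()) - sum(amount for _, _, amount in transfers)

print(naive_total(reported_raised))                    # 190.0
print(deduplicated_total(reported_raised, transfers))  # 140.0
```

Under these assumed numbers, the naive sum overstates the true amount raised from the public by 50.0, which is the analogue of the more than $400 million in duplicate listings the Metro New York Better Business Bureau Foundation estimated.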
Moreover, an unknown number of corporations have sold and are still selling products for which some portions of the proceeds are to be donated to September 11 charities, a practice known as “cause-related marketing.” Some reports cite hundreds of products being sold in the name of September 11 charities; the extent to which these funds have already been forwarded to charities is not known. A more complete accounting of the number of September 11 charities and the amounts they raised might be possible when all charitable organizations have filed with IRS the required annual information form, called the IRS 990. Among other items, these tax-exempt 501(c)(3) organizations must report on their total revenues (including donations), expenses, grants and allocations, and the total dollars of specific assistance they provided to individuals. This form is due in the fifth month after the close of the organization’s taxable year. As IRS 990 forms for these charities become available, examination of them may yield more information; however, the way these data are reported may not necessarily allow a precise accounting of dollars raised for September 11. For example, pre-existing charities that served other purposes in addition to September 11 relief may not report funding data at the level of detail that would link spending to September 11 purposes. Of the almost $2.7 billion estimated collected by the 35 larger charities, about $1.8 billion, or 70 percent, has been reported distributed as of October 31, 2002. Fund distribution rates, however, vary widely from less than 1 percent to 100 percent, in part because of the differing goals and purposes of the charities. For example, some charities with high distribution rates like the New York Times 9/11 Neediest Fund or the United Way of the National Capital Area are primarily fundraisers that make grants to direct service providers such as the Children’s Aid Society and the Salvation Army, which provide immediate assistance to survivors.
Other charities, particularly those that will be providing scholarship assistance to survivors like the Citizens’ Scholarship Foundation, the Navy-Marine Corps Relief Society, the Army Emergency Relief, and Windows of Hope, have much lower distribution rates that reflect the longer-term missions of their charities. Figure 1 shows the amount of aid raised and distributed by charities. See appendix II for the amount of funds raised, distributed, and distribution rates for each of the 35 charities. Charities provided a wide range of assistance to the different categories of individuals affected, including the families of those killed, those indirectly affected through the loss of a job or displacement from their home, and the rescue workers and volunteers, as shown in table 2. A full accounting of the range of services provided is difficult to ascertain, as many large funders have provided grants to multiple service providers. For example, The September 11th Fund has provided grants to over 100 organizations, including direct service providers like Safe Horizon, which provide assistance to families and communities and to rescue and recovery efforts. Families of those killed on September 11 have received cash gifts from various charities to help them through the first year of the recovery process. McKinsey’s survey of nonuniformed World Trade Center families showed that 98 percent of families reported receiving cash assistance averaging $90,000 per family. Because charities were established specifically to assist the survivors of the firefighters and police killed in the attacks, those survivors will receive more cash assistance than survivors of the nonuniformed people killed. A Ford Foundation study reports that uniformed rescue workers funds have provided families of the Port Authority Police and NYC Police and Firefighters with cash benefits of $715,000, $905,000, and $938,000, respectively.
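The wide spread in distribution rates discussed above comes down to simple arithmetic: a charity's rate is dollars distributed divided by dollars raised. A minimal sketch, using hypothetical charities and amounts rather than the appendix II data, shows why an immediate-relief regranter and a scholarship fund can sit at opposite ends of the range:

```python
# Hypothetical charities and amounts (in $ millions), for illustration only.
charities = [
    ("Immediate-Relief Fund", 50.0, 49.5),  # regrants to providers quickly
    ("Scholarship Fund", 40.0, 0.3),        # pays out over many years
]

def distribution_rate(raised, distributed):
    # Fraction of collected funds reported as distributed; guard against
    # a charity that has not yet reported any funds raised.
    return distributed / raised if raised else 0.0

for name, raised, distributed in charities:
    rate = distribution_rate(raised, distributed)
    print(f"{name}: {rate:.0%} of ${raised}M distributed")
```

Both results are consistent with the report's observed range of less than 1 percent to 100 percent: the regranter is near 100 percent within the first year, while the scholarship fund's low rate reflects its long-term mission rather than inactivity.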
A change in IRS rules and subsequent legislation enhanced the ability of charities to distribute aid on a per capita basis, as some of the charities focused on the firefighters and police killed did, rather than on the basis of a more in-depth needs assessment. IRS rules governing the uses of charitable aid were changed for September 11 survivors. Recognizing the unique circumstances caused by this tragedy and in anticipation of congressional legislation that was subsequently passed, IRS relaxed the burden on charities—in the case of this disaster only—to show that the assistance provided was based on need. In November 2001, IRS issued guidance that authorized charities to make payments to September 11 victims and their families without a specific needs test, if made in good faith and using objective standards. Some charities and oversight agencies we spoke with said that this placed some charities under pressure to more quickly distribute their funds. It allowed others, such as the International Association of Fire Fighters, to distribute funds on a per capita basis, regardless of need, to the surviving families of those who perished, a practice that had not been permitted prior to the September 11 disasters. Questions about how aid should be distributed as well as problems identifying and serving thousands of people directly and indirectly affected complicated charities’ tasks as they moved to aid those affected by the attacks. Charities faced considerable debate on how their funds should be distributed—to whom, for what, and when? Some victims’ groups and charities believe the money should be in the form of cash grants, distributed as quickly as possible, and typically focused on families of those killed, believing that the survivors are in the best position to understand and deal with their individual needs.
Other charities and oversight organizations believe that needs are best met when the charitable funds take into account a broad range of needs, including those in the long term, and focus on services rather than cash grants. For example, Oklahoma City charities emphasized needed services rather than cash grants. While most of the September 11 funds have been distributed in the first year, some charities are planning to provide services over the longer term. The American Red Cross announced that it is setting aside $133 million to be spent over the next 3 to 5 years primarily in the areas of mental health and uncovered health care costs. The September 11th Fund announced that it will use its remaining $170 million over the next 5 years to fund services such as mental health counseling, employment assistance, health care, and legal and financial advice. In addition, the Survivors’ Fund, the largest fund set up exclusively to support the needs of survivors of the Pentagon attack, is focusing its services on the long-term needs of the survivors. Since the attacks, decisions made by the American Red Cross—by far the largest holder of funds for September 11 purposes—were the focus of much media and congressional scrutiny, raising concerns about its plans for funds raised. By the middle of November 2001, contributions to the American Red Cross’s Liberty Fund reached nearly $543 million. The American Red Cross had established the Liberty Fund to help people affected by the September 11 attacks, its aftermath, and other terrorist events that could occur in the near future. Although American Red Cross officials said that from early on its fund appeals used the organization’s traditional language, stating that funds raised would be used for “this and other disasters,” this practice was widely perceived in this case as a violation of donors’ intent.
In response to concerns about the organization’s use of funds, on November 14, 2001, the American Red Cross pledged that the entire Liberty Fund would be spent to care for those directly and indirectly affected by the September 11 attacks, their families, and the rescue workers. Fulfillment of donor intent is an important issue, and many charities we spoke with said that they were keeping their spending within the framework of what they believed donors wanted: to quickly meet the needs of those for whom aid is intended. Representatives from philanthropic oversight organizations said charities in general could have minimized some of the problems they faced by paying more attention to the public relations aspects of their work. This might have reduced adverse publicity when people expected one thing and charities did another. Problems these representatives cited include the following:

- Some charities made vague appeals for money, and the public did not understand what programs these funds might support.

- Victims and the needs of the survivors were too narrowly defined. Some charities communicated a simplistic definition of those needing help as only the survivors of those people who were killed or those who were injured in the terrorist attacks. However, in the September 11 disasters, thousands of others were displaced from their homes, lost their jobs, and needed counseling to cope with post-traumatic stress disorder.

- Some charities implied that all of the funds collected would go to direct assistance without any management and administrative cost. This created a misperception that services could be delivered without trained professionals, administrative back offices, support staff, or personnel to help ensure accountability in the use of the donated funds.

Charities told us that they had to make extensive efforts to identify the people who were killed and locate their survivors, as there were no uniform lists, and privacy issues affected the sharing of information.
For example, when the Robin Hood Foundation wanted to provide $10,000 cash gifts to the surviving families, it found it had to develop its own list of the people who were killed and contact information for their survivors. The foundation recruited volunteers to contact World Trade Center employers and reported having to sign 55 different confidentiality agreements with companies, airlines, and individuals, to ensure that Robin Hood Foundation would not share its list with other agencies. In the case of those killed and injured at the Pentagon, confidentiality was a concern as well. The Pentagon provided the Foundation with a list of beneficiary names for the checks but sent a representative to New York to put the checks in the envelopes and apply the address labels. Charities made many efforts to reach out to hard-to-serve clients, non-English speakers, and immigrants. For example, the New York Immigration Coalition received $800,000 from The September 11th Fund and money in other grants to provide legal assistance, establish immigrant help desks at each disaster center, and train charity workers on how to better reach immigrants. The NYC Department of Health reported that 20 percent of those killed in NYC were foreign-born, coming from 167 different countries. Charity officials said the Immigration and Naturalization Service facilitated their efforts to reach immigrants by announcing it would not pursue information on the immigration status of individuals. Also, some charities such as Windows of Hope were created to specifically serve low-income restaurant workers with language barriers. In spite of outreach efforts, representatives from the victims groups we spoke with said that survivors were not aware of all charitable services and assistance available.
A recent study of dislocated hospitality-industry workers in the Washington, D.C., region also reported that despite the efforts to meet the needs of these workers, many still struggled to connect with services. Workers interviewed for the study said a single source of information and referrals for emergency assistance, job placement assistance, or job training would have been helpful. In addition, some people we spoke with in NYC expressed concern that many indirectly affected survivors did not qualify for assistance because they lived outside the geographic area below Canal Street in Manhattan, which was initially targeted for aid by FEMA and many charities. After much public concern about the limited geographic range of FEMA’s eligibility regulations, in August 2002, the Congress mandated FEMA to expand its mortgage and rental assistance to employees working anywhere in Manhattan and to those who could track job loss or loss of income to September 11. FEMA also provides this assistance to those workers whose employers are not located in Manhattan, but who are economically dependent on a Manhattan firm, and anyone living in Manhattan, who commuted in and out of the island and who suffered financially because of post-September 11 disruptions. Charities and government oversight agencies have taken a number of steps to prevent fraud, and relatively few cases have been uncovered so far. For example, to minimize fraud by individuals, some charities required applicants to provide documentation certifying their needs and the relationship of their need to the disaster. Also, some charities conducted independent reviews of their applications and eligibility processes. State attorneys general and local district attorneys told us that although they had limited resources to dedicate to such efforts, they are actively responding to public concerns about charities. 
Officials from these government oversight agencies pursued investigation of fraud by individuals and charities; most of the few cases of fraud being prosecuted or investigated in New York relate to individuals who are charged with or have been convicted of falsely obtaining assistance. Different types of fraud can occur in the solicitation and delivery of charitable funds: fraud by individuals, charities, and businesses, as shown in table 3. Charity and oversight agency officials told us that they employed a number of methods to prevent this fraud, as also shown in table 3. Most charities we spoke with required applicants to provide documentation certifying identity, injury, death of a family member, or loss of job or home, and may have asked for proof of financial need, for example, paycheck stubs. To verify that they were adequately screening for fraud, some charities conducted independent reviews of their eligibility processes. State charity officials and local district attorneys typically relied heavily on complaints from the public and on the charities themselves to identify ineligible individuals or fraudulent charitable groups or solicitations. These officials also reached out to a number of professional groups, including making presentations to fund-raising associations and charity boards about state guidelines on charitable solicitation. Finally, they also issued educational press releases, suggesting that people should examine charities before they write checks. See appendix III for contact information for each state’s charity oversight agency. Charities, state attorneys general, and local district attorneys we spoke with said that they have found relatively few cases of fraud by charities or individuals. Charities like Safe Horizon told us that they were developing relationships with local law enforcement and had referred a number of suspicious cases to the police department.
Furthermore, charities’ internal audits identified additional potential cases of fraud. For example, the American Red Cross’s review identified 350 suspected cases of fraudulent claims on its Liberty Fund, representing less than 1 percent of distributed funds. State and local oversight officials told us that although they did not have additional resources available to address September 11-related fraud, they are actively pursuing any fraud identified. They reported that since September 11, they had found relatively few cases of fraud, either by charities or individuals. These attorneys general and state charity officials from the seven states that suffered high numbers of casualties from September 11 told us they are investigating a combined total of 17 suspected cases of fraudulent solicitation of funds. Local officials indicate that they have more reports of individual fraud than charity fraud. For example, the New York County District Attorney’s Office reported that as of October 15, 2002, it had arrested 84 people for individual fraud and 2 people for fraudulent solicitation of funds. Representatives of this district attorney estimated that about $1 million in aid has been fraudulently obtained. The following are examples of suspected individual fraud uncovered to date by the New York County District Attorney’s Office. One man staged his own death in the Trade Center, then, posing as his next of kin (a recently deceased brother), applied for and received over $272,000 from two charities. Another NYC man reported that his 13th child had accompanied him to a job interview at the World Trade Center and had perished in the attack. The investigation revealed that the child never existed, a fact confirmed by other family members. The man received $190,867 from two charities. A group of cafeteria employees in a building near the Trade Center were paid for 4 days of work when their building was closed post-disaster.
One employee applied for disaster-related income replacement for those 4 days (even though he had been paid) and received funds. This employee told his co-workers about his success in obtaining charitable aid under pretense, and 23 of his colleagues attempted to do the same. A man hired 13 homeless people to help him defraud charities. He supplied the homeless people with fraudulent documentation of job loss and financial need, then drove them to relief sites around the city, where they applied for and received a total of $108,905 from charities. In addition, the New York State Attorney General’s Office reported investigating approximately 20 additional cases of individual fraud, many of which are related to individuals who allegedly attempted to obtain false death certificates. While information is available on identified fraud cases, the total extent of fraud is not known and will be difficult for oversight agencies and charities to assess. First, detection of fraud by individuals could be challenging, despite checks being in place, as charities said they were overwhelmed by the volume of applications for assistance and had to hire new staff or volunteers to help them manage their relief efforts. The potential for fraud by individuals may have increased, as the new personnel may have been unfamiliar with the charities’ eligibility regulations and may have inappropriately distributed or denied funds. Second, fraud detection may be particularly problematic in areas such as cause-related marketing by businesses. For example, the executive director of the Twin Towers Fund told us he was unaware of a record company’s marketing campaign on the fund’s behalf, until he read about it in the newspaper. The charity had to contact the record company, then set up a contract to formalize the terms of the fundraising. Third, it may also be difficult to track fund-raising by groups using September 11 to solicit for other purposes.
In one state, oversight officials told us that an organization conducted a telemarketing drive promising that funds would be given to “firefighters, like those who died September 11,” but no funds went to the survivors of firefighters who died in the attacks. Oversight agencies said that these types of organizations tended to move very quickly in and out of geographic areas, making it difficult to find and prosecute them. Despite some early cooperation attempts, survivors had difficulty accessing charitable aid. The unprecedented scope and complexity of the September 11 disasters presented a number of challenges to charities in their attempts to provide seamless social services for those in need of assistance. Some months after the disaster, however, oversight agencies and large funders worked to establish a more coordinated approach at the September 11 attack sites. This included the formation of coordinating entities, the implementation of case management systems, and attempts to implement key coordination tools, such as client databases. Following the disasters, charitable organizations and FEMA took some immediate steps to help survivors get assistance, including checking in with other agencies. Charities moved quickly to collect funds, give grants to service providers, and establish 800 numbers and Web sites with aid information. FEMA headquarters contacted charities likely to be active in disaster relief to discuss how FEMA contacts would be of assistance. Some efforts at formal coordination include Family Assistance Centers and Disaster Assistance Service Centers, where some of the larger charities and government agencies set up booths to provide assistance to survivors and those economically affected by the disaster. The United Way of the National Capital Area held information-sharing meetings for Washington and Virginia service providers and the New York Community Trust did so as well. 
Despite these efforts, September 11 survivors generally believed that they had to navigate a maze of service providers in the early months, and both charities and those individuals who were more indirectly affected by the disaster (e.g., by job loss) were confused about what aid might be available. Survivors and charities told us that aid distribution was hindered by a number of factors. First, those seeking aid had to fill out a separate application and provide a unique set of documentation for each charity to which they applied. Second, in the early stages post-disaster, all survivors had to apply in person for charitable assistance, even if they had previously obtained aid from the organization. This became troublesome for the many survivors who did not live in metropolitan New York or Washington. Charities like Pennsylvania September 11th Assistance ended up paying for survivors’ travel to the Liberty Park Family Assistance Center in New Jersey. Third, over the course of the first few weeks after the disaster, many dimensions of coordination were limited by little information sharing between organizations helping survivors. For example, some charities said that they were not familiar with other organizations’ rules, especially FEMA’s. Furthermore, because of privacy laws, charities and FEMA did not share information about clients with each other; as a result, in early stages of service delivery, charities might have duplicated services to clients. Although ways to address some of these issues have been used in the past, the scope and complexity of the September 11 disasters presented a number of challenges to charities in their attempts to provide seamless social services for survivors of the disaster.
In the aftermath of the Oklahoma City bombing, charities and service providers worked together to create a database of aid recipients, provide each recipient a case manager, and participate in a long-term recovery committee to better coordinate aid, fostering a more integrated service delivery approach. The September 11 events differed in key ways that hindered a similar approach:

A much larger and more diverse number of actual and potential aid recipients. The 168 Oklahoma City victims who were killed were a more homogeneous population of federal government workers, while the World Trade Center disaster alone had 2,795 victims from a number of businesses and 167 countries. In addition, thousands more than in Oklahoma City were indirectly affected through loss of their jobs and homes.

Numerous governmental jurisdictions. The September 11 attacks occurred in three states, which involved multiple government entities at each site.

Larger numbers and multiple layers of funders and grantees. In addition to existing charities already involved in disaster relief services, hundreds of new charities emerged to provide aid to families of those killed.

Some months after the disaster, oversight agencies and large funders worked to establish a more coordinated approach at the September 11 attack sites. This approach included the formation of coordinating entities, the implementation of case management systems, and attempts to implement key coordination tools. Several coordination efforts emerged at the disaster sites. In NYC, the State Attorney General had encouraged charities to work together to ease access to aid, including use of a common application form and database. The 9/11 United Services Group (USG), a consortium of human service organizations and their affiliated service coordinators, was formed in December 2001 to foster a more coordinated approach to aid delivery.
(See appendix V for a list of USG organizations participating in USG service coordination.) Furthermore, in the spring of 2002, FEMA successfully established long-term recovery committees in New York and New Jersey for charities that had smaller September 11 funds than those of USG. In Virginia, the Survivors’ Fund set up a board to assess the unmet needs of survivors and persons who were economically displaced by the disasters. Members of this board include key area agencies, such as the United Way and FEMA, which have historically facilitated coordination in areas affected by disasters. As coordination efforts progressed, some charities continued to follow Oklahoma’s model by establishing case managers for individuals who lost family members in the attacks. Although all the charities were familiar with a case management model, cross-agency case management presented challenges, as agencies’ mission statements or regulations specified different qualifications and specializations of their social workers (e.g., Master’s degree required). Despite these challenges, USG’s service coordinator program involves the efforts of a number of charities across the city. If families need help, they can call the Safe Horizon hotline, where an operator assesses the client’s short- or long-term needs, geographic area, and denominational or ethnic preferences for service providers, and then connects the client with a 9/11 USG service coordinator. Coordinators are current staff of local charities and have been trained by USG to help survivors identify and access a broad range of services. They have access to a number of technology tools, including an automated centralized directory of benefits and services available to families and a community Web site that allows service coordinators to communicate with the entire service coordinator community.
Service coordinators, key charity managers, and the New York FEMA Voluntary Agency Liaison also meet weekly to discuss service provision issues. The Survivors’ Fund in Virginia also set up case managers but contracted with another agency to hire new social workers to provide case management services to the injured and families of those killed in the Pentagon attacks. Agencies began to develop client databases and a common application form for disaster relief aid. One key advantage of client databases is that they allow charities to track the services clients have already received, preventing duplication of services. Although many charities expressed concern that their clients would lose their anonymity by signing a confidentiality waiver, the 9/11 USG has established a database of September 11 services for its service coordinators, and a number of its member organizations are creating and using a confidential client information database. The Survivors’ Fund and United Way of the National Capital Area have also created a client database, which is primarily being used by these two agencies. A common application form would improve the aid delivery process by reducing the amount of documentation and forms clients have to provide to each agency. Work on a common application form is in progress in New York, but the form has not been established yet, as charities that have trained volunteers nationwide indicated that at this time, they are not interested in retraining all their volunteers on a new application form. Charities, government agencies, watchdog groups, and survivors’ organizations shared with us lessons that could improve the charitable aid process in disasters in the future. These lessons include easing access to aid, enhancing coordination among charities and between charities and FEMA, increasing attention to public education, and planning for future events. Some efforts are under way to address these issues.
However, the independence of charitable organizations, while one of their key strengths, will make implementation of these lessons learned dependent on close collaboration and agreement among those organizations. Charities, government agencies, watchdog groups, and survivors’ organizations shared with us the lessons they learned from the September 11 charitable aid process that could be incorporated into the nation’s strategies for responding to large-scale disasters in the future.

Easing access to aid for those eligible—Helping individuals in need find out what assistance is available, and easing their access to that assistance, would be facilitated if a central, accessible source of information on public and private assistance were made available to survivors. Access to assistance could be further facilitated if charities adopted a simplified, one-stop application process and a standard waiver of confidentiality that would allow survivors to get access to multiple charities and allow charities to share information on individuals served and avoid duplicative services. While the focus of such an effort would be to facilitate services to those in need, a one-stop application process could include a set of basic interview questions or steps designed to prevent fraud. Another way to facilitate eligible survivors receiving assistance is by offering each survivor a case manager, as was done in NYC and in Washington. Case managers can help to identify gaps in service and provide assistance over the long term.

Enhancing coordination among charities and between charities and FEMA—Private and public agencies could better assist those in need of aid by coordinating, collaborating, sharing information with each other, and understanding each other’s roles and responsibilities. This requires effective working relationships with frequent contacts.
Collaborative working relationships are essential building blocks of strategies that ease access to aid, such as a streamlined application process or the establishment of a database of families of those killed and injured to help charities identify service gaps and further collaboration.

Increasing attention to public education—Charities’ increased attention to public education could better inform the donor public on how their money will be spent and the role of charities in disasters. Controversies over donor intent could be minimized if charities took steps when collecting funds to more clearly specify the purposes of the funds raised, the different categories of people they plan to assist, the services they plan to provide, and how long that assistance will be provided, as that information becomes known. Charities can further ensure accountability by more fully informing the public about how their contributions are being used and providing comprehensive information on facets of their operation to the public. The September 11th Fund’s and the Robin Hood Foundation’s Web sites, for example, list updated information on grants, recipients, amounts, and purposes. Moreover, efforts such as those of the Metro New York Better Business Bureau to compile information across multiple organizations can help provide accountability for how funds are used. For future events, the Ford Foundation report on the philanthropic response to September 11 suggested that “the major philanthropies should consider designating a well-respected public figure who would provide daily media briefings on their responses.”

Planning for future events—Planning for the role of charities in future disasters could aid the recovery process for individuals and communities. While disasters, victims, and survivors can vary widely, it could be useful for charities to develop an assistance plan to inform the public and guide the charities’ fundraising efforts.
In addition, state and local efforts related to emergency preparedness could explicitly address the role of charities and charitable aid in future events. Future plans could also address accountability issues, including training for charitable aid workers and law enforcement officials about identifying potential fraud and handling referrals for investigations. While some of the lessons learned can be implemented at the individual charity level, most require a more collaborative response among charities, and some steps are under way to build collaborative responses. Key efforts include the following:

The National Voluntary Organizations Active in Disaster—This organization has 34 national member organizations, such as the American Red Cross, The Salvation Army, and Catholic Charities USA, 52 state and territorial organizations, and some local organizations. Established in 1970, its goal is to promote collaboration, while encouraging agencies to respond independently but cooperatively in disasters. Since September 11, 2001, this organization has initiated information-sharing meetings in NYC and Washington, D.C., and has discussed lessons learned at its annual meeting in March 2002. See appendix IV for a list of its members.

As part of its mission, the 9/11 United Services Group is planning to develop a blueprint for the coordinated delivery of social services and financial aid in future emergencies. Later this year, FEMA is facilitating a meeting between a committee of the National Voluntary Organizations Active in Disaster and the 9/11 United Services Group.

While some charitable organizations are taking steps to incorporate lessons learned, they face significant challenges. By its inherent nature, the charitable sector is composed of independent entities responsive to clients and donors; it is not under the direction of a unifying authority.
While in situations such as September 11 FEMA is required to coordinate activities of certain charitable organizations, as well as others that agree to such an arrangement, FEMA officials said that in exercising this authority for September 11 and other events, they work closely with charities as a facilitator, not as a leader or director. FEMA officials noted it is important to build and maintain trust with the charitable organizations and to be careful to give local leadership the opportunity to lead in disasters. An externally imposed effort to direct or manage charities, whether by FEMA or another entity, could have deleterious effects; a key strength of charities is their ability to react flexibly and independently in the event of disasters. Overall, charitable aid made a major contribution in the nation’s response to the September 11 attacks. Given the massive scale and unprecedented nature of the attacks, the charities responded under very difficult circumstances. Through the work of these charities, millions of people have been able to contribute to the recovery effort and provide assistance to those directly and indirectly affected by the attacks. While much has been accomplished by charities in this disaster, lessons or strategies have also been identified related to improving access to aid, enhancing coordination among charities and between charities and FEMA, increasing attention to public education, and planning for future events that could improve future responses in disasters. There are no easy answers as to how to incorporate strategies that may result in a more accessible and transparent service delivery system into any future disasters. Coordination and collaboration among charitable organizations are clearly essential elements of these strategies, and some organizations have taken steps in this direction. 
At this point in time, an appropriate role for the federal government is to facilitate these efforts through FEMA, the federal agency that already has relationships with many of the key organizations involved in disaster response. This will help to ensure that lessons learned from the September 11 attacks and their aftermath can be incorporated into the nation’s strategies for dealing with large-scale disasters like this in the future. At the same time, it will help to ensure that charities may remain independent and vital in their programs and priorities. We are recommending that the director of FEMA convene a working group of involved parties to take steps to implement strategies for future disasters, building upon the lessons identified in this report and by others to help create sustained efforts to address these issues. The working group should address these and other issues as deemed relevant: (1) the development and adoption of a common application form and confidentiality agreement; (2) the establishment of databases for those receiving aid in particular disasters; and (3) strategies for enhancing public education regarding charitable giving in general and for large-scale disasters in particular, including ways to enhance reporting on funds collected and expended. This working group could include FEMA, representatives of key charitable and voluntary organizations and foundations; public and private philanthropic oversight groups and agencies; and federal, state, and local emergency preparedness officials. In commenting on a draft of this report, FEMA said that the recommendation is a practical one that is likely to foster enhanced communication and coordination among charitable organizations, foundation leaders, and government emergency managers. 
While FEMA acknowledged the challenges of working with a number of independent entities, it added that a working group of involved parties, along with skillful leadership and active participation among members, is likely to lead to important improvements in coordination and ultimately better service to those affected by disasters. In addition, FEMA noted that a component of the existing National Voluntary Organizations Active in Disaster may serve as the basis upon which to build. FEMA’s full comments are presented in appendix VI. We also shared a draft of the report with the American Red Cross, the Salvation Army, The September 11th Fund, the 9/11 United Services Group, an official of the National Voluntary Organizations Active in Disaster, and officials in the New York State Attorney General’s Office in New York City and obtained their oral comments. They said the report was fair and balanced and provided technical comments, which we incorporated where appropriate. Regarding the recommendation, the American Red Cross expressed some concern over whether FEMA was the right party to convene the working group, stating that the group’s goals would be outside of FEMA’s mission and that it would, therefore, be inappropriate to ask that FEMA be responsible for ensuring the success of the work group. The American Red Cross also said that the goals of the work group would more properly fall under the purview of the nonprofit sector and that work has already started on some of these areas. In responding to this concern, we emphasize that our recommendation charges FEMA with convening a working group of involved parties but does not specify that FEMA play the leadership role or be charged with management or oversight of the group’s progress. We agree that the key to the success of a working group in this area will depend on the actions of the charitable and voluntary organizations involved.
We also acknowledge that some efforts are under way, including among the American Red Cross, Salvation Army, and the United Way, to address some of these issues. However, we continue to think that it is appropriate for FEMA to play a role in initiating meetings that will bring together involved parties. This will help to ensure that sustained attention is paid to these important issues and potentially result in improving the nation’s response to those in need in any future disasters. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of the report to other interested parties. We will also make copies available upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or Gale C. Harris, Assistant Director, at (202) 512-7235. Kevin Kumanga and Emily Leventhal also made key contributions to this report.

Who is eligible: Any individual who was physically injured or the families and beneficiaries of any individual who was killed as a result of the terrorist-related aircraft crashes of September 11, 2001.

Payments: The average award under the September 11th Victim Compensation Fund of 2001—before the statutorily required collateral offsets—is projected to be more than $1.8 million per claimant. Although it is difficult to determine the amount of collateral sources (e.g., life insurance) each claimant will have, the Special Master who oversees the fund believes the average payout after collateral sources will be approximately $1.5 million per claimant. Charitable aid received by families is not taken into account in determining award amounts.

Total estimated expenditures: Over $5 billion.

Applications: Filing deadline is December 2003.
All information as of October 31, 2002, unless noted in one of the table notes. The National Voluntary Organizations Active in Disaster has 34 national member organizations as well as 52 state and territorial Voluntary Organizations Active in Disaster.

Surveys suggest that as many as two-thirds of American households have donated money to charitable organizations to aid in the response to the September 11 disasters. To provide the public with information on the role of charitable aid in assisting those affected by the attacks, GAO was asked to report on the amount of donations charities raised and distributed, the accountability measures in place to prevent fraud by organizations and individuals, and lessons learned about how to best distribute charitable aid in similar situations. Although it may be difficult to precisely tally the total amount of funds raised in response to the September 11 attacks, 35 of the larger charities have reported raising an estimated $2.7 billion since September 11, 2001. About 70 percent of the money that has been collected by these 35 charities has been reported distributed to survivors or spent on disaster relief since September 11, 2001. Charities used the money they collected to provide direct cash assistance and a wide range of services to families of those killed, those more indirectly affected through loss of their job or residence, and to disaster relief workers. Some of the charities plan to use funds to provide services over the longer term, such as for scholarships, mental health counseling, and employment assistance. Charities and government oversight agencies have taken a number of steps to prevent fraud by individuals or organizations, and relatively few cases have been uncovered so far. However, the total extent of fraud is not known and will be difficult to assess, particularly in situations when organizations solicit funds on behalf of September 11 but use the funds for other purposes.

Overall, charitable aid made a major contribution in the nation's response to the September 11 attacks, despite very difficult circumstances. Through the work of charities, millions of people contributed to the recovery effort. At the same time, lessons have been learned that could improve future charitable responses in disasters, including easing access to aid, enhancing coordination among charities and between charities and the Federal Emergency Management Agency (FEMA), increasing attention to public education, and planning for future events. FEMA and some charitable organizations have taken some steps to address these issues. However, the independence of charitable organizations, while one of their key strengths, will make the implementation of these lessons dependent on close collaboration and agreement among charities involved in aiding in disasters.
VA manages a large health system for veterans, providing health care services to over 5 million beneficiaries. The cost of these services in fiscal year 2005 was over $30 billion. According to VA, its health care system now includes 157 medical centers, 862 ambulatory care and community-based outpatient clinics, and 134 nursing homes. VA health care facilities provide a broad spectrum of medical, surgical, and rehabilitative care. The management of VA’s facilities is decentralized to 21 regional networks referred to as Veterans Integrated Service Networks (networks). The Charleston facility is part of Network 7, or the Southeast Network, and the Denver facility is located in Network 19, or the Rocky Mountain Network. To meet its mission of serving the needs of the nation’s veterans, VA partners with medical universities and DOD. In 1946, VA established a program to enter into partnerships with medical universities, now referred to as academic affiliations, to provide high quality health care to America’s veterans and to train new health professionals. Today, VA maintains affiliations with 107 of the nation’s 126 medical schools. In addition to academic affiliation agreements, VA purchases clinical services from medical schools for the treatment of veterans. Similarly, in 1982, the VA and DOD Health Resources Sharing and Emergency Operations Act (Sharing Act) was enacted to provide for more efficient use of medical resources through greater interagency sharing and coordination. For example, the VA Medical Center in Louisville and the Ireland Army Community Hospital in Fort Knox, Kentucky, have engaged in sharing activities to provide services to beneficiaries that include primary care, acute care pharmacy, ambulatory, blood bank, intensive care, pathology and laboratory, audiology, podiatry, urology, internal medicine, and ophthalmology. 
Given the importance of these partnerships to VA’s ability to meet its mission, VA’s 2003-2008 strategic plan includes goals for sustaining partnerships with medical universities and sharing resources with DOD. VA’s Denver and Charleston medical facilities have long-standing affiliations with local medical universities. VA’s facility in Denver is affiliated with the University of Colorado’s School of Medicine—through the University of Colorado at Denver and Health Sciences Center and UCH—and VA’s facility in Charleston is affiliated with MUSC. These affiliations provide both VA facilities with the majority of VA’s medical residents who rotate through all VA clinical service areas. Both VA facilities also purchase a significant amount of medical services from their affiliates. Specifically, the Denver facility annually obtains $9 million worth of services from UCH, and the Charleston VA facility buys $13 million in services annually from MUSC. The medical services purchased are in such areas as gastroenterology, infectious disease, internal medicine, neurosurgery, anesthesia, pulmonary, and cardiovascular perfusion. In addition to these services, VA also has a medical research partnership with MUSC for a mutually supported biomedical research facility, the Thurmond Biomedical Research Center. Table 1 provides more detailed information about the VA facilities in Charleston and Denver and their medical affiliates. VA evaluated the joint venture proposals for its facilities in Denver and Charleston on an ad hoc basis because it lacks criteria at the departmental level to evaluate such proposals consistently. VA has decided against a joint facility in Denver, but it is still in the process of considering such a facility in Charleston. In both locations, multiple iterations of the joint venture proposals have been considered, and negotiations between VA and its medical affiliates have stretched over a number of years. 
Negotiations in both locations were hampered by limited communication and collaboration, a lack of top VA leadership support for the proposals, and no single VA point of contact for the medical affiliates. VA does not have criteria at the departmental level that could be used to evaluate joint venture proposals on a consistent basis. Consequently, VA officials identified factors for considering the specific joint venture proposals in Charleston and Denver. Some of the identified factors were consistent between the two locations, and others were site-specific, but it is not clear how any of the factors weighed in VA’s consideration of the proposals. In studies and correspondence regarding the joint venture proposal in Denver, VA officials identified several factors that they believed to be important in considering the joint venture proposal. In particular, in correspondence between VA and UCH in 2002, and again in 2004, the Secretary of VA identified four major considerations—(1) maintaining VA’s identity; (2) maintaining VA’s governance; (3) balancing and evaluating priorities within VA’s capital asset program, including the CARES process; and (4) securing funding. In 2002, a VA task force composed of headquarters, network, and facility officials examined the potential for a VA-UCH joint facility and identified additional factors critical to the decision-making process. These factors included maintaining VA’s commitment to providing health care to meet veterans’ unique needs and research programs, VA’s aging infrastructure, and the gap between health care demand and capacity and funding. Another consideration that arose through the course of negotiations was VA’s space requirements for a new facility and the associated acreage of land needed and available on the Fitzsimons campus. VA did not indicate how the factors identified in the studies or correspondence weighed in its decision making regarding the joint venture proposal.
Similarly, the VA-MUSC steering group identified a set of criteria to help identify and analyze the joint venture proposal for Charleston. As shown in table 2, these criteria include enhanced quality and service and financial viability. The steering group’s report did not indicate how or why these criteria were chosen, provide an explanation of each criterion, or indicate the relative importance of the criteria. While the steering group used the criteria in identifying and evaluating options for further consideration, it is not clear how, if at all, these criteria will be used by VA leadership in making a final decision on the joint venture proposal. In meetings with VA officials about the joint venture proposal in Charleston, officials identified other considerations that could influence the decision-making process, including the condition of the existing VA facility and the need to balance investment priorities across the region and nation. VA has decided against a fully-integrated facility with UCH in Denver. Negotiations between VA and the University of Colorado at Denver and Health Sciences Center and UCH stretched over a number of years, and a number of different options were considered. The lack of leadership buy-in and miscommunication about VA’s intentions regarding the future Denver facility prolonged negotiations and created an atmosphere of mistrust between the parties. Figure 1 provides a time line of key events in the negotiations between VA and UCH. In 1995, the University of Colorado decided to relocate its Health Sciences Center campus, including its affiliated UCH, from downtown Denver to the former Fitzsimons Army Medical Base located in nearby Aurora, Colorado, which was closed as part of DOD’s base realignment and closure process. UCH determined that its facility in downtown Denver lacked the space to accommodate its patient population and that there was little room for expansion.
The availability of land at the Fitzsimons site offered an opportunity for UCH to move and expand the size of its campus. When Fitzsimons closed, DOD turned a portion of the 577 acres the base occupied over to the U.S. Department of Education so that it could convey land to public educational institutions. The University of Colorado applied for and received 227 acres from the U.S. Department of Education, and the University leased about 55 acres to UCH for its new inpatient and outpatient pavilions. The majority of the land at Fitzsimons—about 332 acres—was purchased by FRA for $1.85 million. FRA plans to develop a biomedical research park on this land. The remaining land at Fitzsimons is owned by the City of Aurora, a private entity, and a nonprofit organization. In late 1999, VA officials at the facility and network level and UCH officials began to informally discuss the possibility of relocating VA’s Denver medical center to the Fitzsimons campus. UCH and VA officials were concerned that UCH’s move to Fitzsimons, about 6 miles from its downtown Denver location, would strain their affiliation because of the amount of time it would take doctors to travel between the facilities. The UCH president also suggested that colocating the UCH and VA medical center at Fitzsimons could achieve cost efficiencies through integrating inpatient activities, such as medical and surgical specialty labs, and sharing some patient treatment. In considering a possible joint venture, facility and network VA officials worked with UCH officials to examine options for moving VA’s medical center to the Fitzsimons campus as well as sharing services and facilities with UCH. In particular, these officials jointly funded a study to determine the feasibility and cost of different options, ranging from constructing free-standing facilities with limited sharing to jointly constructing and operating a new fully-integrated facility at Fitzsimons.
The study, completed in 2001, concluded that a fully integrated, or joint, facility was the most cost-effective option. A second study commissioned by VA’s Network 19 in 2002 also analyzed a range of options, including a joint VA-UCH facility; but this study did not recommend which option to pursue. These studies were shared with VA’s central office, veteran service organizations, and the Congress, and became the basis of the joint venture proposal and negotiations. The Secretary of VA established a task force to examine the joint venture proposal to integrate the Denver medical center and UCH on the Fitzsimons campus in July 2002. The task force was composed of VA officials at the departmental, network, and facility levels. In September 2002, the task force issued a draft report, which examined seven alternatives—ranging from maintaining the status quo to constructing a fully integrated facility with UCH. The task force’s report presented advantages and disadvantages of each alternative. It did not recommend which alternative to pursue. In September 2002, the president of UCH sent a letter to the Secretary of VA asking that VA make a decision within 1 year regarding moving the VA facility to the Fitzsimons campus. In October 2002, the Secretary responded that VA could not commit to a joint UCH-VA hospital within that time frame. The Secretary indicated that a number of important questions remained unanswered, including how the joint hospital would be governed. Furthermore, he noted that the proposal to relocate the Denver medical center to Fitzsimons had to be evaluated in the context of the CARES Commission report, which was not scheduled to be completed until the following year. The Secretary’s response effectively ended discussions about constructing and operating a fully-integrated facility with UCH. In January 2003, VA began developing a proposal for a joint VA-DOD facility on the Fitzsimons campus. 
Specifically, the proposed joint federal facility would house VA and DOD, and the two entities would share some medical services and equipment. The joint VA-DOD facility, which was referred to as the federal tower, would be built on UCH-leased land at Fitzsimons and would be connected to UCH’s inpatient pavilion by a 2-story clinical facility. (See fig. 2.) The clinical facility would house operating rooms, imaging, and pathology laboratories, among other things, all of which would be shared by VA, UCH, and DOD. With this concept in hand, VA, UCH, and DOD began discussions about the availability of land adjacent to the UCH inpatient pavilion for the federal tower. In August 2004, the UCH president estimated that 18 acres of land were available adjacent to the UCH facility for the federal tower. However, soon thereafter, a survey of the land indicated that approximately 12 acres were available for the federal tower once easements and setbacks were taken into account. In December 2004, in a letter to UCH, the Secretary stated that the approximately 12 acres would be insufficient to meet VA’s space requirements for a new medical center. Specifically, the Secretary stated that predesign planning for the new facility revealed that VA needed approximately 1.46 million square feet to meet the specialized needs of veterans and DOD patients. To accommodate these space requirements, VA’s architectural firm outlined three design options—ranging from a 6-story VA hospital on 38 acres to an 8- to 10-story VA hospital on 20 acres. Based on this analysis and other considerations, the Secretary concluded that VA needed about 38 acres on the Fitzsimons campus for the joint VA-DOD facility. This decision ended negotiations over building the federal tower on UCH-leased land and connecting it to UCH’s inpatient pavilion with a clinical facility. UCH subsequently decided to use the land adjacent to the inpatient pavilion for other purposes.
After land negotiations with UCH ended, VA officials began looking for a new location on the Fitzsimons site for a stand-alone VA medical center. The conference report accompanying VA’s appropriation act for fiscal year 2004 directed VA to continue efforts to “co-locate the Denver VA medical center with … at the Fitzsimons campus.” While there is no statutory requirement to locate the VA medical center at Fitzsimons, VA considers this language in the conference report to express the will of Congress and, as a result, has gone forward with efforts to purchase property from FRA for such a purpose. In July 2005, VA signed a memorandum of understanding with FRA to set forth the conditions under which VA and FRA will proceed with discussions that may lead to the purchase and conveyance of about 40 acres located on the southeast corner of the Fitzsimons campus. (See fig. 3.) According to FRA officials, this piece of land is currently owned by FRA and three other entities. According to a VA official, in February 2006, VA offered FRA $16.50 per square foot for the FRA-owned portion of the land. (VA is in the process of surveying the land to determine the total square footage.) The VA official responsible for the land negotiations at Fitzsimons told us that VA’s offer is valid for 6 months and that VA expects to finalize the purchase of the FRA-owned portion of the land by the end of this fiscal year. VA is currently negotiating with the other three land-holding entities about the purchase of their land. This official said he does not foresee any “show stoppers” in the negotiations with these three entities and that VA therefore expects to reach agreement with them in the coming months. Negotiations between VA and UCH on the different joint venture proposals were hampered by a lack of VA leadership buy-in and by miscommunication.
For instance, although VA officials at the facility and network levels worked with UCH officials in developing the joint facility proposal, the current network director told us that the Secretary was never fully supportive of this concept. Rather, according to the network director, the Secretary envisioned a stand-alone facility adjacent to the UCH complex. When VA decided to pursue a stand-alone facility, UCH officials said they felt as though they had been misled by VA officials, including the Secretary, about VA’s interest in a joint facility. Further, in 2004 correspondence to VA, the UCH president noted that a freestanding VA medical center on the Fitzsimons campus was never discussed. UCH officials also told us that at no time did UCH ever consider a freestanding facility for VA on its new campus because there would be limited opportunities for sharing capital and operating costs. In addition, there was miscommunication about the amount of land available for a federal tower and about VA’s space requirements. Specifically, UCH officials told VA officials that there were about 18 acres available for the federal tower; however, the survey revealed that only a little more than 12 acres were available. In addition, in December 2004, the Secretary of VA informed UCH that VA needed 1.46 million square feet for its new facility. According to UCH officials, these space requirements ran counter to estimates that had been discussed with VA facility and network level officials and, according to the UCH president in 2004, would result in a facility about 50 percent larger than the existing VA medical center in Denver. These events contributed to an atmosphere of mistrust between VA and UCH. VA has not made a decision regarding a joint venture with MUSC. Negotiations between VA and MUSC have stretched over a number of years and have been hampered by limited collaboration and communication among the parties.
VA’s Under Secretary for Health and the president of MUSC are currently considering the results of a recent report that identifies and analyzes options for sharing facilities and space in Charleston. Figure 4 provides a time line of key events in the negotiations between VA and MUSC. In November 2002, the president of MUSC sent a proposal to the Secretary of VA about partnering with MUSC in the construction and operation of a new medical center in phase II of MUSC’s construction project. Under MUSC’s proposal, VA would vacate its current facility and move to a new facility located on MUSC property. MUSC also indicated that sharing medical services would be a component of the joint venture. Although VA and MUSC currently share some services, the joint venture proposal, according to MUSC officials, would have increased the level of sharing of medical services and equipment, thereby creating cost savings for both VA and MUSC. To meet the needs of a growing and aging patient population, MUSC has undertaken a multiphase construction project to replace its aging medical campus. Construction on the first phase began in October 2004. Phase I includes the development of a 4-story diagnostic and treatment building and a 7-story patient hospitality tower, providing an additional 641,000 square feet in clinical and support space. Phase I also includes the construction of an atrium connecting the two buildings, a parking structure, and a central energy plant. Initial plans for phases II through V include diagnostic and treatment space and patient bed towers. According to MUSC officials, as of September 2005, approximately 24 months remained for the planning of phase II. As shown in figure 5, phases IV and V would be built on VA property. In particular, phase V would be built on the site of VA’s existing medical center. In response to MUSC’s proposal, VA formed an internal workgroup, composed primarily of officials from VA’s Network 7, to evaluate it.
The workgroup analyzed the feasibility and cost-effectiveness of the proposal and issued a report in March 2003, which outlined three other options available to VA: replacing the Charleston facility at its present location, replacing the Charleston facility on land presently occupied by the Naval Hospital in Charleston, or renovating the Charleston facility. The workgroup concluded that it would be more cost-effective to renovate the current Charleston facility than to replace it with a new facility. This conclusion was based, in part, on the cost estimates for constructing a new medical center. In April 2003, the Secretary of VA sent a response to the president of MUSC, which stated that if VA agreed to the joint venture, it would prefer to place the new facility in phase III—which is north of phase I—to provide better street access for veterans. (See fig. 6 for MUSC’s proposal and VA’s counterproposal.) In addition, the Secretary indicated that MUSC would need to provide a financial incentive for VA to participate in the joint venture. Specifically, MUSC would need to make up the difference between the estimated life-cycle costs of renovating the Charleston facility and building a new medical center—which VA estimated to be about $85 million—through negotiations or other means. The Secretary stated that if these conditions could not be met, VA would prefer to remain in its current facility. The MUSC president responded to VA’s counterproposal in an April 2003 letter to the Secretary of VA. In the letter, the MUSC president stated that MUSC was proceeding with phase I of the project and that the joint venture concept could be pursued during later phases of construction. The letter did not specifically address VA’s proposal to locate the new facility in phase III, or the suggestion that MUSC would need to provide some type of financial incentive for VA to participate in the joint venture. 
To move forward with phase I, the MUSC president stated that MUSC would like to focus on executing an enhanced-use lease (EUL) for Doughty Street. Although MUSC owns most of the property that will be used for phases I through III, Doughty Street is owned by VA and serves as an access road to the Charleston facility and parking lots. The planned facility for phase I would encompass Doughty Street. (See fig. 7.) Therefore, MUSC could not proceed with phase I—as originally planned—until MUSC secured the rights to Doughty Street. To help its medical affiliate move forward with construction, VA executed an EUL agreement with MUSC in May 2004 for use of the street. According to the terms of the EUL, MUSC will pay VA $342,000 for initial use of the street and $171,000 for each of the following 8 years. To facilitate negotiations, a congressional delegation visited Charleston on August 1, 2005, to meet with VA and MUSC officials and discuss the joint venture proposal. After this visit, VA and MUSC agreed to jointly examine key issues associated with the joint venture proposal. Specifically, VA and MUSC established the Collaborative Opportunities Steering Group (steering group). The steering group is composed of five members from VA, five members from MUSC, and a representative from DOD, which is also a stakeholder in the local health care market. The steering group chartered four workgroups: The governance workgroup examined ways of establishing organizational authority within a joint venture between VA and MUSC, including shared medical services. The clinical service integration workgroup identified medical services provided by VA and MUSC and opportunities to integrate or share these services. The legal workgroup reviewed federal and state authorities (or identified the lack thereof) and legal issues relating to a joint venture with shared medical services.
The finance workgroup provided cost estimates and analyses relating to a joint venture with shared medical services. The steering group and workgroups were intended to help VA and MUSC determine if the joint venture proposal would be mutually beneficial. On December 7, 2005, the steering group issued its final report to the Under Secretary for Health and to the president of MUSC. According to the report, the steering group concluded that the most advantageous options were those that provide a revenue stream for VA and provide MUSC access to new space without capital financing. Therefore, the group explored construction models that would benefit both organizations by taking advantage of VA’s access to capital financing and of MUSC’s revenue streams. As shown in table 3, the report identifies six planning models, ranging from constructing a new medical facility with space for VA and MUSC to sharing arrangements under which VA would maintain its existing facility. Four of the models—A, A-1, A-2, and B—include varying levels of shared space between VA and MUSC. These four models also call for VA to overbuild the facility—that is, build it bigger than VA needs—and lease the excess space to MUSC, thus providing VA with a revenue stream to offset some of the cost of construction. The amount of excess space built and leased by VA varies among the four models. Any option that involves VA building a new medical facility over-capacity for the purpose of leasing the underutilized space requires close scrutiny, since real estate leasing agreements are currently not part of VA’s mission. In addition, such options would also require specific congressional authorization and appropriation, since the costs of any of the planning models identified would exceed $7 million, the threshold for such action. The steering group’s December 2005 report does not recommend an option that VA should pursue.
Rather, the report outlines the perceived advantages and disadvantages, as well as the costs, of each option. (See app. I for the advantages, disadvantages, and costs of the different models.) However, the report does note that two options were rejected by steering group members. In particular, the finance workgroup rejected Model A-2—which included an oversized new VA medical center and separate buildings for administrative and clinical services—because, among other things, the construction of a separate building to house administrative services was not cost-effective. Additionally, MUSC deemed Model B—which included a replacement VA medical center with moderate excess space to lease to MUSC—not to be a viable option because it did not meet MUSC’s total bed replacement needs. Although the report identifies options that provide a revenue stream for VA, the report notes that there is not sufficient revenue or cost avoidance in any of the models for VA to achieve a 30-year payback on the construction investment. According to VA officials, the next step is for the Capital Asset Board of the Veterans Health Administration to make a recommendation regarding the options contained in the report. VA expects the Capital Asset Board’s recommendations by the end of April 2006. Prior to the summer of 2005, limited collaboration and communication generally characterized the negotiations between MUSC and VA over the joint venture proposal. In particular, before August 2005, VA and MUSC had not exchanged critical information that would help facilitate negotiations. For instance, MUSC did not clearly articulate to VA how replacing the Charleston facility, rather than renovating it, would improve the quality of health care services for veterans or benefit VA. MUSC officials had generally stated that sharing services and equipment would create efficiencies and avoid duplication, which would lead to cost savings. However, MUSC had not provided any analyses to support such claims.
Similarly, as required by law, VA studied the feasibility of coordinating its health care services with MUSC, pending construction of MUSC’s new medical center. This study was completed in June 2004. However, VA officials did not include MUSC officials in the development of the study, nor did they share a copy of the completed study with MUSC. VA also updated its cost analysis of the potential joint venture in the spring of 2005, but again, VA did not share the results with MUSC. Because MUSC was not included in the development of these analyses, there was no agreement between VA and MUSC on key inputs for the analyses, such as the specific price MUSC would charge VA for, or the nature of, the medical services that would be provided. As a result of the limited collaboration and communication, negotiations stalled—prior to August 2005, the last formal correspondence between VA and MUSC leadership on the joint venture occurred in April 2003. The joint venture proposal under consideration in Charleston and the one previously proposed in Denver raise a number of challenges for VA and its medical affiliates. These challenges—which were identified by VA, MUSC, or UCH officials as well as by previous studies prepared for or by VA, MUSC, or UCH—include addressing institutional changes for VA and institutional differences between VA and its medical affiliates, identifying legal issues and seeking legislative remedies, and balancing funding priorities. Although these challenges will be difficult to address, they are not insurmountable, as evidenced by the VA-MUSC steering group’s efforts to address some of them, as well as by VA’s past partnerships with some medical affiliates and DOD. Addressing institutional changes and differences: The joint ventures proposed in Charleston and Denver pose a series of institutional changes for VA and reveal a number of institutional differences between VA and its medical affiliates that would need to be reconciled.
Specifically, by jointly constructing and operating a hospital with a nonfederal health system, VA, an in-house health care service provider with other departmental priorities, would deviate from its current health care model. Although VA purchases significant amounts of medical services from its medical affiliates, the relationship between VA and its affiliates has centered on providing enhanced care for veterans as well as training medical school residents and conducting medical research. According to VA, altering this historical relationship to include jointly constructing and operating facilities would introduce legal, administrative, and management complexities that might require additional authorities. In addition, according to VA and some stakeholders, a joint facility could diminish VA’s identity by deviating from a VA medical facility that treats only veterans to one with a mixed-patient population served by providers from different health systems. Hence, if maintaining VA’s identity is important to VA leadership, steps would need to be taken to protect VA’s identity in a joint facility. Adding to the challenge of expanding affiliation relationships to include joint ventures involving major capital are inherent differences between VA and its medical affiliates—from their missions to their funding processes. For example, in addition to its mission of providing care for our nation’s veterans, VA is also responsible for supporting national, state, and local emergency management and serving as backup to DOD during war and other national emergencies. In addition, funding decisions for both VA and MUSC must go through several layers of review. VA’s major capital investments (over $7 million) must be evaluated at multiple levels within VA and approved by the Office of Management and Budget and by Congress, while such investments by MUSC must be approved by its board and, if requiring state funds, by the state legislature.
These differences would need to be considered in any joint venture between VA and a medical affiliate. Identifying legal issues and seeking legislative remedies: Joint venture proposals raise many complex legal issues. The specific legal issues raised depend on the type of joint venture proposed, but many involve real estate, construction, contracting, and employment. In Charleston, the legal workgroup identified VA’s and MUSC’s legal authorities, or lack thereof, on numerous issues relating to each option considered. The legal workgroup concluded that VA has the legal authority to pursue any of the six planning models identified but that specific considerations would arise for each model. For example, legislative authorization and appropriation are required for any major VA construction project over $7 million. In addition, while VA is authorized, under its EUL authority, to lease underutilized real property for up to 75 years, the authorization does not provide for building a new medical facility over-capacity for the purpose of leasing the underutilized space. Developing appropriate governance plans: A venture involving a jointly operated facility would require the parties to agree to a plan for governing it. Any governance plan would have to maintain VA’s direct authority over and accountability for the care of VA patients. In addition, if shared medical services were a component of a joint venture between the VA and an affiliate, the entities would need a mechanism to ensure that the interests of the patients served by both are protected today and in the future. For instance, VA might decide to purchase operating room services from its affiliate. If the sharing agreement were dissolved afterwards, it would be difficult for VA to resume the independent provision of these services. Therefore, a clear plan for governance would ensure that VA and its affiliate could continue to serve their patients’ health care needs as well as or better than before. 
To address possible governance issues in Charleston, the steering group recommended instituting a joint governance council that would include a nonaffiliated third party to oversee the sharing relationship in areas other than research and educational activities. The joint governance council’s decisions would be advisory in nature—and not legally binding—in order not to undermine the current authority of VA or MUSC. Balancing funding priorities: VA leadership must weigh joint venture opportunities against VA’s capital assets and health care service needs throughout the nation when making funding decisions and recommendations. VA operates a nationwide health care system for veterans, including 157 medical centers and over 800 clinics. According to VA, its capital requirements are significant given the amount of real property it owns and uses and the age and condition of most of its facilities. Further, in 2004, the Secretary of VA estimated that implementing CARES will require additional investments of approximately $1 billion per year for at least the next 5 years, with substantial infrastructure investments then continuing indefinitely. Balancing these competing capital requirements is made more difficult by the fiscal challenges facing the federal government. Given the size of the government’s projected deficit, VA, like other federal agencies, could face constrained budgets in the future, making funding of even high priority capital requirements challenging. Additional challenges are likely to be identified as VA continues to explore the proposed joint venture with MUSC or other possible joint ventures in the future. In particular, should VA decide to pursue a joint venture with MUSC or other medical affiliates in the future, it would likely face additional challenges during the implementation phase. 
For example, due to the inherent differences in the purposes for which VA’s and MUSC’s information management systems were designed, the systems would not be compatible. According to MUSC officials, VA’s and MUSC’s computerized patient record systems are different, and their billing systems are incompatible. Therefore, at least initially, the systems would not be integrated, and parallel systems would need to be implemented—which could result in added costs in terms of staff time and raise the potential for errors. Partnerships with other health providers are not new to VA. For instance, the Mike O’Callaghan Federal Hospital, an integrated federal hospital jointly constructed by VA and the Air Force in Las Vegas, Nevada, currently serves as a model of joint operation and shared medical services. However, joint ventures of this magnitude with DOD are limited. Further, VA has not entered into a joint venture with a medical affiliate of the magnitude proposed in Charleston or Denver. However, there are instances of significant capital ventures between VA and its affiliates involving high-priced medical equipment. For example, VA’s Western New York Healthcare System in Buffalo, New York, houses a Positron Emission Tomography (PET) scanner that was purchased by its affiliate. In exchange, VA purchases scans from its affiliate for veterans and provides operational and administrative staff to support the equipment. These past capital ventures are on a smaller scale than the joint ventures proposed in Charleston and Denver, but they could be somewhat instructive as VA considers current and future joint venture proposals and attempts to address the associated challenges. For example, in these past capital ventures, VA had to ensure that veterans received appropriate access to equipment and services, and VA accomplished this through the terms and conditions outlined in the contract.
In addition, VA had to address governance, legal, and information management challenges in establishing these capital-sharing arrangements. The difficulty of addressing such challenges, however, likely increases as the complexity and magnitude of the proposed joint venture grows. Because VA may explore the possibility of entering into partnerships with other medical affiliates in the future, the lessons learned from VA’s experiences in Charleston and Denver could be instructive. It is possible that more opportunities for similar joint ventures or sharing arrangements will present themselves in the coming years. In particular, our analysis of VA data on its major medical facilities indicates that 43 percent of these facilities, like the medical center in Denver, consist of buildings with an average age of over 50 years, although some have undergone extensive renovations over the years. Given the age of these facilities, many of them may need to be replaced or extensively renovated in the future. Additionally, disasters, such as Hurricane Katrina, could force unplanned renovations or replacements. As VA moves forward in making necessary renovations or replacements throughout the country, there could be opportunities for joint ventures with its medical affiliates. VA will have to determine if these opportunities are in the best interest of the federal government and our nation’s veterans. The lessons that emerged from our work in Charleston and Denver reflect how the absence of practices that we have emphasized in previous reports can hamper effective consideration of potential joint ventures. These reports examine leading practices for realigning federal agency infrastructure, collaboration among organizations, and organizational transformations. 
The lessons include establishing criteria to evaluate the joint venture proposal, obtaining leadership buy-in and support for the joint venture, ensuring extensive collaboration among stakeholders, and developing a strategy for effective and ongoing communications. One of the most important lessons from VA’s experiences in Denver and Charleston is that the absence of criteria at the departmental level to evaluate joint venture proposals can result in inconsistent evaluations, misunderstandings, and delays. The joint venture proposals for VA’s medical centers in Denver and Charleston presented VA with a new opportunity—that is, the proposals involved joint construction and service sharing on a scale beyond anything VA had experienced in partnering with its medical affiliates in the past. VA did not have criteria at the departmental level for evaluating and negotiating joint venture proposals, which led to inconsistent evaluations of the Denver and Charleston proposals. For instance, in Denver, VA facility and network officials worked collaboratively with UCH officials on the joint venture proposals, including jointly funding a study to assess the feasibility and cost of various options. In contrast, VA facility and network officials did not include MUSC officials in the development of the study that examined the feasibility of coordinating VA’s health care services with MUSC, nor did they share a copy of the completed study with MUSC. This contributed to the negotiations between VA and MUSC stalling for over 2 years. VA officials in Denver also told us that the lack of departmental criteria hampered negotiations, and noted that on the basis of their experience a common tool or process is needed to assess joint venture proposals so that they can be evaluated consistently. 
As we have emphasized in previous work on realigning federal infrastructure, a set of criteria for evaluating decisions regarding infrastructure enhances the transparency of these decisions and helps ensure that the decisions are made in a manner that is fair to all stakeholders and that is efficient and effective. Although we recognize that every joint venture is likely to be different, criteria would establish a framework for evaluating future joint venture proposals. In addition to identifying the factors VA would consider in evaluating proposals and indicating how these factors would be measured, the criteria would help ensure that proposals are evaluated consistently—regardless of location or officials involved. The criteria would also serve to communicate VA’s expectations for joint ventures. That is, they would identify what VA is looking for in potential joint ventures, such as improved medical care for veterans and reduced operating costs. By documenting and sharing these criteria with potential partners, VA would help ensure that its positions are understood from the outset and thus eliminate possible misunderstandings. The VA-MUSC steering group’s efforts to identify criteria to evaluate the Charleston proposal and the studies conducted in Denver could serve as starting points for the development of criteria. VA’s experiences in Denver and Charleston highlighted the fact that the absence of sustained communication with potential joint venture partners and stakeholders as well as within VA can be detrimental to negotiations. Breakdowns in communication occurred in both locations during key points of the negotiations and hindered progress. For example, in Charleston, there was limited communication between VA and MUSC for about 2 years; as a result, negotiations stalled. 
In addition, in both locations, a primary point of contact—either a single individual or a group—was not identified to represent VA’s position in negotiations with the medical universities. Rather, various VA officials at the facility, network, and departmental levels often maintained separate contacts with UCH and MUSC officials. As a result, according to MUSC and UCH officials, they received mixed signals as to VA’s intentions regarding the proposals. Similarly, MUSC and UCH also contacted and communicated with different VA officials at the facility, network, and departmental levels, which also led to confusion. In our previous work on organizational transformations, we have noted that creating an effective, ongoing communication strategy is essential to implementing significant organizational changes like the joint ventures proposed in Charleston and Denver. Such a strategy should entail communicating information early and often to help build an understanding of the purpose of the planned change and build trust among VA and its medical affiliates as well as stakeholders, such as employees and veterans, who could have concerns over such issues as the impact of a joint venture on patient care. The strategy should also encourage communication by facilitating an honest, two-way exchange with stakeholders and allowing for their feedback. A communications strategy can also help ensure that these groups receive a message that is consistent in tone and content. Sharing a consistent message with stakeholders helps reduce the perception that others are getting the “real” story when, in fact, all are receiving the same information. The strategy should also make it clear that it is essential to have a primary point of contact with the necessary authority to negotiate effectively with partners, make timely decisions, and move quickly to implement top leadership’s decisions regarding the joint venture.
Good communication is central to forming the effective internal and external partnerships that are vital to the success of transforming endeavors such as joint ventures. In Charleston, the steering group has taken steps to improve communication by establishing a plan for VA and MUSC to share information about the potential joint venture with stakeholders such as employees and veterans groups.

Another lesson that emerged from VA’s experience with the joint venture proposals for Denver and Charleston is that leadership buy-in and support are critical. The proposed joint venture in Denver did not come to fruition largely because VA leadership never fully supported the concept. In particular, when the joint venture was first proposed, UCH and VA officials at the network and facility levels worked extensively together on the proposal. Top-level VA management, however, was not involved in these efforts. Moreover, in response to UCH’s request for a 1-year time frame for a decision regarding a joint facility, in October 2002, the VA Secretary wrote that VA “cannot now commit to a joint University-VA hospital within the one-year timetable you propose. However, I feel strongly that we should not preclude a freestanding VA medical center at Fitzsimons in the future.” According to UCH officials, this decision was unexpected given that they had worked closely with VA facility officials on a possible joint venture. Certainly it is the VA Secretary’s prerogative to extend or withhold support for different proposals, and the Secretary must determine whether the proposals are in the best interest of veterans. However, VA’s experiences in Denver and Charleston indicate that without such support, negotiations for joint ventures will be hampered. Our previous work on organizational transformation indicates that support from top leadership is indispensable for fundamental change, such as a joint venture.
Top leadership’s clear and personal involvement in the transformation represents stability for both the organization’s employees and its external partners. Top leadership must set the direction, pace, and tone for the transformation. Likewise, when a transformation requires extensive collaboration with another organization, as would be the case with a joint venture, committed leadership at all levels is needed to overcome the many barriers to working across organizational boundaries. If VA decides to pursue a joint venture with MUSC in Charleston, or other similar projects with medical affiliates or other partners, success will hinge on the level of support the project receives from top VA management.

Another lesson that emerged from the experiences in Denver and Charleston is that a lack of, or limited, collaboration hampers negotiations. For example, in Charleston, VA and MUSC did not initially exchange or share critical information, such as the feasibility study, which contributed to the negotiations stalling from about 2003 to 2005. In addition, until the VA-MUSC steering group was formed in Charleston, there was limited collaboration between VA and its stakeholders. This heightened the stakeholders’ anxiety about the proposed joint venture and led to the spread of misinformation about it. In Denver, although VA officials from the facility and network level and UCH officials met frequently after UCH proposed the joint venture, VA officials with the necessary decision-making authority were not involved in these initial discussions. Consequently, when the Secretary of VA decided against a joint venture in Denver, UCH officials felt misled, which resulted in an atmosphere of mistrust between the entities. Our previous work on collaboration between organizations suggests several practices that VA might benefit from as it continues to consider a joint venture in Charleston as well as other such opportunities that may occur in the future.
These practices include ensuring the involvement of key stakeholders, defining and articulating a common outcome, establishing mutually reinforcing or joint strategies, identifying and addressing needs by leveraging resources, and agreeing on roles and responsibilities. The VA-MUSC steering group illustrates how some of these practices can be implemented. For example, the steering group was led by senior VA and MUSC officials and consisted of VA and MUSC staff who have knowledge in key areas (e.g., finance). In addition, the communications plan the VA-MUSC steering group established includes a presentation to use when communicating with stakeholders about the joint venture proposal.

To address future health care needs of veterans, VA’s challenge is to explore new ways to fulfill its mission of providing veterans with quality health care. The prospect of jointly constructing and operating medical facilities with medical affiliates presents an opportunity for VA to consider the feasibility of expanding its relationships with university medical school affiliates to include the sharing of medical services in an integrated hospital. This is just one of several ways VA could provide care to veterans. It is up to VA, working with its stakeholders, and Congress to determine if expanding VA’s relationship with medical affiliates to include joint ventures—of the scale proposed in Denver and Charleston—is in the best interest of the federal government and the nation’s veterans, as well as how such joint ventures fit within the context of the CARES framework. VA will be in a better position to consider future joint ventures if it learns from its experiences with the joint venture proposals in Denver and Charleston. Among these lessons is the importance of leadership support and extensive collaboration.
In addition, VA’s experiences in Denver and Charleston indicate that having a set of criteria at the departmental level would provide a clear basis for making decisions on joint venture proposals. Although each proposal will likely be somewhat unique, and should be evaluated on its own merits and circumstances, criteria provide a framework for future evaluations and negotiations. A set of criteria at the departmental level helps ensure that proposals are evaluated in a consistent fashion across the country as well as communicates VA’s expectations for joint ventures. Another important lesson is that a strategy for communicating with its medical affiliates and stakeholders, including veterans and employees, can help VA avoid the problems that hampered progress in negotiations over the Denver and Charleston joint venture proposals. A communications strategy helps build understanding and trust between VA and its medical affiliates and stakeholders as well as helps ensure that these groups receive a message that is consistent in tone and content. Establishing a set of evaluation criteria and a communications strategy are tangible steps VA could take to better position itself in considering future joint venture proposals.

To ensure that there is a clear basis for evaluating future joint venture proposals as well as to help ensure early and frequent communication between VA and its medical affiliates and stakeholders during negotiations, we recommend that the Secretary of VA take the following two actions:

- Identify criteria at the departmental level for evaluating joint venture proposals. In order to foster an atmosphere of collaboration, VA should share these criteria with potential joint venture partners.
- Develop a communications strategy for use in negotiating joint venture proposals.

We provided a draft of this report to VA for its review and comment. On April 10, 2006, VA’s audit liaison provided VA’s comments on the draft report via e-mail.
VA agreed with the report’s conclusions and recommendations. We also provided UCH and MUSC officials portions of the draft report that related to their joint venture proposals. UCH and MUSC officials provided technical clarifications to these portions of the draft report, which we incorporated where appropriate.

To address our objectives, we analyzed VA, UCH, and MUSC planning documents, presentations, and studies related to the joint venture proposals as well as correspondence between VA and these medical affiliates regarding the proposals. We also examined the recommendations of the CARES Commission and the Secretary’s CARES Decision report, VA’s 5-year capital plan (2005-2010), and federal statutes and accompanying reports. In addition, we interviewed officials from VA, DOD, MUSC, and UCH to obtain information on the history and status of the joint venture proposals as well as the challenges associated with implementing such proposals. We also interviewed local stakeholders, including officials from the Fitzsimons Redevelopment Authority in Aurora, Colorado, the mayors of Charleston and Aurora, and representatives from the VA employees’ unions in each location to obtain their perspectives and information on local capital asset planning and its impact. We also toured VA and MUSC facilities in Charleston and VA and UCH facilities in the Denver area. Finally, we synthesized information obtained from VA, MUSC, and UCH officials and reviewed our past work on organizational transformation and collaboration among organizations to identify lessons learned from VA’s experiences with joint venture proposals in Charleston and Denver. Although we examined the joint venture proposals for VA’s Denver and Charleston facilities and the associated studies and planning documents, we did not evaluate the merits of the proposals. We assessed the reliability of the information obtained from VA, MUSC, and UCH.
We concluded that the information was sufficiently reliable for our purposes. We are sending copies of this report to congressional committees with responsibilities for veteran issues; the Secretary of Veterans Affairs; and the Director, Office of Management and Budget. We also will make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact me on (202) 512-2834 or at goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report include Chris Bonham, Nikki Clowers, Daniel Hoy, Jennifer Kim, Edward Laughlin, Susan Michal-Smith, James Musselwhite, Jr., and Michael Tropauer.

- Model A: Construct a new, oversized VA medical center to replace all VA services; excess capacity is leased to MUSC. MUSC construction must meet VA security requirements, raising construction costs.
- Model A-1: Construct a new, oversized VA medical center to replace all VA services; excess capacity is leased to MUSC, and MUSC would construct an adjacent tower. Eliminates some VA lease revenue and negatively affects payback.
- Model A-2: Construct a new, oversized VA medical center to replace all VA services, with administrative and clinical services located in separate buildings; excess capacity is leased to MUSC. MUSC construction must meet VA security requirements, raising construction costs.
- Model B: Construct a new, slightly oversized VA medical center to replace all VA services; excess capacity is leased to MUSC. MUSC construction must meet VA security requirements, raising construction costs.
- Model C: Construct a new VA medical center, with no excess space available for leasing; additional sharing between VA and MUSC consists of shared high-tech equipment and contracts for services.
- Maintains greater VA and MUSC autonomy; avoids further investment into VA’s aging infrastructure.
- Model D: VA remains in its current facility, with renovations as appropriate; additional sharing between VA and MUSC consists of shared high-tech equipment and contracts for services.

Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005.
VA Health Care: Preliminary Information on the Joint Venture Proposal for VA's Charleston Facility. GAO-05-1041T. Washington, D.C.: September 26, 2005.
VA Health Care: Key Challenges to Aligning Capital Assets and Enhancing Veterans’ Care. GAO-05-429. Washington, D.C.: August 5, 2005.
Federal Real Property: Further Actions Needed to Address Long-standing and Complex Problems. GAO-05-848T. Washington, D.C.: June 22, 2005.
U.S. Postal Service: The Service’s Strategy for Realigning Its Mail Processing Infrastructure Lacks Clarity, Criteria, and Accountability. GAO-05-261. Washington, D.C.: April 8, 2005.
VA Health Care: Important Steps Taken to Enhance Veterans’ Care by Aligning Inpatient Services with Projected Needs. GAO-05-160. Washington, D.C.: March 2, 2005.
High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005.
Budget Issues: Agency Implementation of Capital Planning Principles Is Mixed. GAO-04-138. Washington, D.C.: January 16, 2004.
Federal Real Property: Vacant and Underutilized Properties at GSA, VA, and USPS. GAO-03-747. Washington, D.C.: August 19, 2003.
VA Health Care: Framework for Analyzing Capital Asset Realignment for Enhanced Services Decisions. GAO-03-1103R. Washington, D.C.: August 18, 2003.
Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations. GAO-03-669. Washington, D.C.: July 2, 2003.
VA Health Care: Improved Planning Needed for Management of Excess Real Property. GAO-03-326. Washington, D.C.: January 29, 2003.
Major Management Challenges and Program Risks: Department of Veterans Affairs. GAO-03-110. Washington, D.C.: January 2003.
High-Risk Series: Federal Real Property. GAO-03-122. Washington, D.C.: January 2003.
VA Health Care: VA Is Struggling to Address Asset Realignment Challenges. GAO/T-HEHS-00-88. Washington, D.C.: April 5, 2000.
VA Health Care: Improvements Needed in Capital Asset Planning and Budgeting. GAO/HEHS-99-145. Washington, D.C.: August 13, 1999.
VA Health Care: Challenges Facing VA in Developing an Asset Realignment Process. GAO/T-HEHS-99-173. Washington, D.C.: July 22, 1999.
VA Health Care: Capital Asset Planning and Budgeting Need Improvement. GAO/T-HEHS-99-83. Washington, D.C.: March 10, 1999.

The Department of Veterans Affairs (VA) maintains affiliations with medical schools, including the Medical University of South Carolina (MUSC) and the University of Colorado at Denver and Health Sciences Center and University of Colorado Hospital (UCH), to obtain enhanced medical care for veterans. As part of their plans for new medical campuses, both UCH and MUSC proposed jointly constructing and operating new medical facilities with VA in Denver and Charleston, respectively. This report discusses (1) how VA evaluated the joint venture proposals for Denver and Charleston and the status of these proposals, (2) the challenges these proposals pose for VA, and (3) the lessons VA can learn from its experiences in Charleston and Denver for future partnerships. VA evaluated the joint venture proposals for its medical facilities in Denver and Charleston using criteria developed specifically for each location, and while VA opted to build a stand-alone facility in Denver, it is still considering a joint venture in Charleston.
Because the proposals involved joint construction and service sharing on a scale beyond anything VA had experienced with its medical affiliates in the past, VA did not have criteria at the departmental level to evaluate the proposals on a consistent basis in both locations. In both locations, negotiations between VA and its medical affiliates stretched over a number of years, in part because they were hampered by limited collaboration and communication, among other things. While VA decided against a joint venture in Denver, it has made no decision on Charleston. A VA-MUSC steering group, formed last summer to study the joint venture proposal in Charleston, issued a report in December 2005 that outlined the advantages and disadvantages of different options. The joint ventures proposed in Denver and Charleston present a number of challenges to VA, including addressing institutional differences between VA and its medical affiliates, identifying legal issues and seeking legislative remedies, and balancing funding priorities. For example, capital expenditures for a joint venture would have to be considered in the context of other VA capital priorities. Although addressing these issues will be difficult, the VA-MUSC steering group's efforts could provide insight into how to tackle them. VA's experiences with joint venture proposals in Denver and Charleston offer several lessons for VA as it considers similar opportunities in the future. One of the most important lessons is that having criteria at the departmental level to evaluate joint venture proposals helps to improve the transparency of decisions concerning joint ventures and VA's ability to ensure that the decisions are made in a consistent manner across the country. Another key lesson is that having a strategy for communicating with stakeholders, such as employees and veterans, helps VA build understanding and trust among stakeholders. 
The following table identifies these and other lessons from VA's experiences in Denver and Charleston.
Manufacturer drug coupon programs reduce or eliminate out-of-pocket costs for specific drugs and are typically available to privately insured patients regardless of income. Drug manufacturers provide these discounts to patients through several mechanisms. For example, manufacturers may provide patients with debit cards to be activated at the point of sale. Alternatively, manufacturers may pay the patient’s coupon discount amount directly to a provider, who would reduce the patient’s out-of-pocket cost accordingly. Manufacturers inform patients and providers of coupon programs in a variety of ways, such as distributing promotional materials, operating program websites and patient hotlines, and sending field representatives to communicate program information to providers. The effect of coupon programs on patients can differ depending on whether programs are associated with single-source or multi-source drugs, and changes in patient behavior may in turn lead to increased drug sales by manufacturers. For single-source drugs, which are only available from one manufacturer and may not have lower-cost, pharmaceutically equivalent alternatives, such programs can help patients afford their medications and have been shown to improve patient adherence to specialty drug regimens. For multi-source drugs, which are available from more than one manufacturer, coupon programs may encourage patients to request, and providers to prescribe, more expensive drugs instead of generics and other lower-cost, pharmaceutically equivalent alternatives. These changes in patient behaviors could benefit drug manufacturers financially while potentially increasing costs for health insurers. Specifically, manufacturers gain revenue from the sale of drugs received by patients who might have quit a drug regimen or chosen a lower-cost alternative in the absence of a coupon program. 
Additionally, manufacturers may be able to charge higher prices to purchasers than the market could sustain without these programs. Although the use of drug coupon programs to induce or reward use of certain drugs is unlawful in federal health care programs such as Medicare, beneficiaries who cannot afford their medications may be eligible to obtain financial assistance from other sources. For example, Medicare beneficiaries may be able to receive medications or assistance with out-of-pocket costs from independent charity patient assistance programs. Medicare beneficiaries with low income may also be eligible to enroll in Medicaid, the joint federal-state program that finances health insurance coverage for certain categories of low-income and medically needy individuals. Medicare Part B covers drugs and biologicals that are generally administered by a physician or under a physician’s direct supervision, including drugs administered in a physician’s office or hospital outpatient department. Drugs covered under Part B include injectable drugs, oral cancer drugs if the same drug is available in injectable form, and drugs infused or inhaled through durable medical equipment. Medicare and its beneficiaries make payments for Part B drugs to providers, such as physicians and hospitals, which first purchase the drugs from manufacturers or other sellers. Medicare generally pays 80 percent of a set payment rate for a drug, while beneficiaries are responsible for the remaining 20 percent. For most Part B drugs, Medicare sets payment rates at a drug’s ASP plus an additional 6 percent. To set these rates, CMS collects quarterly data from drug manufacturers on the volume of sales and ASP for each drug. Sales data that manufacturers report must be net of all rebates, discounts, and other price concessions to purchasers, including physicians, hospitals, and wholesalers. 
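The payment arithmetic described above can be restated in a few lines of code. The following is a hypothetical sketch, not CMS's actual implementation; the function name and the $1,000 ASP figure are invented for illustration:

```python
# Hypothetical sketch of the Part B payment split described in the text:
# a payment rate of ASP plus 6 percent, with Medicare generally paying
# 80 percent and the beneficiary responsible for the remaining 20 percent.

def part_b_payment(asp, markup=0.06, medicare_share=0.80):
    """Return (payment_rate, medicare_pays, beneficiary_pays) for one unit."""
    rate = asp * (1 + markup)                 # most Part B drugs: ASP + 6%
    medicare_pays = rate * medicare_share     # Medicare pays 80% of the rate
    beneficiary_pays = rate - medicare_pays   # beneficiary coinsurance: 20%
    return rate, medicare_pays, beneficiary_pays

rate, medicare_pays, beneficiary_pays = part_b_payment(1000.00)
print(round(rate, 2), round(medicare_pays, 2), round(beneficiary_pays, 2))
# 1060.0 848.0 212.0
```

Because the provider's acquisition cost can differ from ASP, the 6 percent markup is what various studies cited later in this report identify as a potential incentive to prescribe more expensive drugs.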
Manufacturers are not required to report sales net of coupon discounts or other financial assistance provided by manufacturers directly to patients. CMS, as part of its ongoing efforts to evaluate Medicare’s methodology for setting Part B drug payment rates, recently issued a proposed rule to test alternatives to this payment method. Various studies have pointed out that Medicare’s current methodology for setting Part B drug payment rates as a fixed percentage above ASP may give providers a financial incentive to prescribe more expensive drugs. This is among the shortcomings of the current ASP-based payment method that CMS’s proposed payment model is designed to address. The first phase of the proposed payment model would change the payment rate for drugs paid based on ASP from ASP plus 6 percent to ASP plus 2.5 percent plus a flat fee. The second phase would implement value-based pricing strategies, such as varying prices based on drugs’ clinical effectiveness and decreasing beneficiary coinsurance for drugs deemed high in value.

In 2015, drug manufacturers offered coupon programs to privately insured patients for 29 of the 50 high-expenditure Medicare Part B drugs in our analysis. Coupon programs were typically open to privately insured patients regardless of income. Programs for 3 of the drugs required patients to have incomes below a certain amount, with maximum annual incomes of approximately $100,000. Coupon programs varied in the discount amounts that patients could receive in 2015. Most programs had a maximum annual discount, which ranged across programs from $400 to $42,000 per year. Until patients reached that maximum discount, they could pay as little as $0 to $50 per coupon use. These amounts could represent a small fraction of patients’ full out-of-pocket cost for a prescription.
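The per-use payment and annual maximum discount described above can be sketched as a simple function. This is a hypothetical model of coupon mechanics, assuming the manufacturer covers everything above a fixed patient payment until the annual cap is exhausted; the function name and all figures are invented:

```python
# Hypothetical model of a drug coupon: the patient pays a fixed amount per
# use and the manufacturer covers the rest of the out-of-pocket cost, up to
# whatever remains of the program's annual maximum discount.

def apply_coupon(oop_cost, copay, annual_cap, discounts_used_so_far):
    """Return (patient_pays, discount) for one prescription fill."""
    remaining_cap = max(annual_cap - discounts_used_so_far, 0.0)
    # Discount covers everything above the copay, limited by the remaining cap.
    discount = min(max(oop_cost - copay, 0.0), remaining_cap)
    return oop_cost - discount, discount

# Invented example: $600 out-of-pocket cost, $35 per-use payment, $8,000 cap.
print(apply_coupon(600.0, 35.0, 8000.0, 0.0))     # (35.0, 565.0)
# Once the cap is nearly exhausted, the patient pays more:
print(apply_coupon(600.0, 35.0, 8000.0, 7900.0))  # (500.0, 100.0)
```

Actual programs differ in how discounts are delivered (debit cards at the point of sale, payments to providers), but the cap-and-copay arithmetic above reflects the structure the report describes.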
For example, privately insured patients using the drug Yervoy were required to pay an estimated $571, on average, per prescription without the drug’s coupon program, compared to paying $25 under the coupon program. (See table 1 for examples of drug coupon programs and app. III for more information on out-of-pocket costs for drugs with coupon programs.) Factors that can affect privately insured patients’ use of coupons include patient out-of-pocket cost requirements and the extent of manufacturer outreach to patients and providers. For example, whether patients are required to pay out-of-pocket costs for a drug can affect whether patients use coupon programs, as patients without such costs do not need these programs. On average, across drugs in our analysis with coupon programs in 2013, we estimated that 50 percent of privately insured patients did not have out-of-pocket costs, and this percentage ranged from 8 to 76 percent, depending on the drug. The amount of patient out-of-pocket costs can also affect the discount amount patients receive, because coupon discounts are directly related to patients’ out-of-pocket costs. Other factors, including manufacturer outreach to providers and patients, could also explain variation in coupon program use. For example, some manufacturers told us that they reach out to patients directly regarding coupon programs, while others told us that they communicate directly only with providers. With respect to patients’ use of available drug coupon programs, we determined that 21 of the 50 high-expenditure Part B drugs had coupon programs in 2013, and these drugs accounted for 50 percent of Part B spending paid based on ASP. We were able to obtain data on coupon discounts from manufacturers for 18 of these drugs. An estimated 19 percent of the 509,000 privately insured patients who used these 18 drugs in 2013 also used a coupon program.
The percentage of these patients who used a coupon program ranged from 1 to over 90 percent, depending on the drug, with coupon programs for all but 2 drugs being used by less than 40 percent of patients. Coupon discounts reported by manufacturers of the 18 drugs totaled $205 million in 2013. Individual patients who used coupon programs for these drugs received an average annual discount of $2,051. This discount ranged from $1,000 to over $7,000 per year for 13 of the 18 drugs and was $800 or less for the remaining 5 drugs. Medicare’s market-based methodology for setting Part B drug payment rates may be less suitable for drugs with coupon programs than for other Part B drugs that are paid based on ASP. Because ASP does not account for coupon discounts between manufacturers and patients, the ultimate consumers of these drugs, the ASP for drugs with coupons exceeds the effective market price a manufacturer receives for a drug purchase. For example, in figure 1, the ASP reported by the manufacturer was $1,000; however, the effective market price the manufacturer received for the drug—net of the coupon discount the manufacturer provided to the patient—was actually $300 less, or $700. We estimated that, for the 18 drugs for which we obtained coupon discount data in 2013, ASP exceeded the effective market price by an average of 0.7 percent. Medicare spending for these 18 drugs could have been $69 million lower if ASP had been equal to the effective market price manufacturers received. ASP exceeded the effective market price for some drugs by much more than the 0.7 percent average, which suggests that the ASP-based payment method may be even less suitable for these drugs. For example, for 5 of the 18 drugs, ASP exceeded the effective market price by an estimated 2.7 percent, on average, and ranged from 1.4 to 7.8 percent depending on the drug (see fig. 2). 
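The figure's arithmetic can be restated in a few lines. This sketch uses the example numbers from the text (a $1,000 reported ASP and a $300 coupon discount) plus an invented spending figure; it is a single-purchase simplification of the gap, not GAO's or CMS's actual computation:

```python
# Illustrative gap between reported ASP and the effective market price once
# a coupon discount paid to the patient is netted out (single purchase).

asp = 1000.0             # ASP reported by the manufacturer (example from text)
coupon_discount = 300.0  # discount the manufacturer provided to the patient
effective_price = asp - coupon_discount              # 700.0

# Fraction by which ASP could fall if it equaled the effective market price.
decrease_pct = (asp - effective_price) / asp * 100   # 30 percent here
print(round(effective_price, 2), round(decrease_pct, 1))  # 700.0 30.0

# Spending based on ASP would have been lower by the same fraction
# (hypothetical spending figure):
spending = 5_000_000.0
print(round(spending * decrease_pct / 100, 2))  # 1500000.0
```

In the report's actual estimates the gap is far smaller (0.7 percent on average across the 18 drugs), because coupon discounts are spread over many purchases, most of which involve no coupon at all.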
Part B spending for these 5 drugs combined could have been an estimated $50 million lower if ASP equaled the effective market price. The drugs for which ASP exceeded the effective market price by the highest percentage either had high rates of coupon use relative to other drugs, a high average annual discount, or both. For example, the drug in figure 2 with the highest percentage (Drug A) had the highest average annual discount per patient ($7,100) and the second highest percentage of patients who used a coupon (53 percent). (For more detail on the data and methodology for these estimates, see app. II.) Upward trends in the use of coupon programs suggest that drug coupons could have an even greater effect in the future on the suitability of Medicare’s methodology for setting Part B drug payment rates. A recent study found that coupon use more than doubled between 2011 and 2014. Several manufacturers we interviewed told us that the number of patients using coupon programs and the discount amounts that patients receive have increased over time. In addition, to the extent that drug prices continue to increase and translate into higher out-of-pocket costs for privately insured patients, this could increase patients’ use of drug coupons and the discount amounts they receive. CMS currently lacks data on coupon discounts, which are necessary for evaluating the implications of coupon programs for Medicare’s Part B payment rate methodology. CMS lacks the authority to collect data from drug manufacturers on coupon discounts to patients because the authority to collect information relating to ASP is based on manufacturer sales to purchasers. In addition, these data are proprietary and are not readily available from other sources. 
Standards for internal control in the federal government require agencies to have access to quality information to achieve their objectives, which for CMS entails having the information necessary to evaluate the implications coupon programs may have for Medicare’s methodology for setting Part B drug payment rates. Without data on coupon discounts, CMS lacks information that could inform its ongoing efforts to evaluate alternatives to this payment rate methodology. The high spending on Part B drugs based on ASP—approximately $20 billion in 2013—underscores the need to ensure that Medicare pays appropriately for these drugs. Various studies have noted previously that payments to providers under the current ASP-based payment methodology could lead providers to prescribe more costly drugs. Our findings in this report indicate that the shortcomings of this payment system go beyond problems with the incentives associated with payments to providers. In particular, even if Medicare Part B drug payments accurately reimburse providers’ costs and do not introduce inappropriate incentives, Medicare still may be paying more than necessary for drugs with coupon programs because the ASP for these drugs exceeds the effective market price that manufacturers ultimately received. Furthermore, upward trends in coupon program use and drug prices suggest that Medicare’s Part B drug payment rate methodology could become less suitable over time for drugs with coupon programs. These trends emphasize the need for regular monitoring of the implications that coupon programs may have for this methodology as CMS works to propose an alternative payment system. However, the agency lacks the authority to collect data on coupon discounts and therefore lacks important information that could inform its ongoing efforts to design and evaluate alternative approaches. 
To determine the suitability of Medicare’s Part B drug payment rate methodology for drugs with coupon programs, Congress should consider granting CMS the authority to collect data from drug manufacturers on coupon discounts for Part B drugs paid based on ASP and requiring the agency to periodically collect these data and report on the implications that coupon programs may have for this methodology. We provided a draft of this product to HHS. HHS provided us with technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Health and Human Services, and the Administrator of the Centers for Medicare & Medicaid Services. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
[Appendix I table: the 50 highest-expenditure Medicare Part B drugs paid based on ASP, listed by brand name, drug description, Healthcare Common Procedure Coding System (HCPCS) code, and 2013 Medicare expenditures in millions of dollars. Footnotes indicate that one entry also includes Flebogamma 10% DIF, Flebogamma 5% DIF, Gamunex, Gamunex-C, and Octagam, and another also includes Kogenate FS BIO-SET, Recombinate, ReFacto, and Xyntha.]

This appendix describes the data and methods we used in our study. To identify coupon programs associated with high-expenditure Medicare Part B drugs, we used 2013 Medicare claims data—the most recent full year of data available at the time we began our analysis (2015)—to develop a list of the 50 highest-expenditure Part B drugs paid based on the average sales price (ASP) methodology. We identified drugs based on their Healthcare Common Procedure Coding System (HCPCS) codes. Each HCPCS code refers to one or more brand or generic products, which are identified by their national drug codes (NDC).
The drugs we identified for our analysis had multiple HCPCS codes if the codes shared one or more NDCs for products that were pharmaceutically equivalent, defined by the Food and Drug Administration as those with the same active ingredient(s), dosage form, route of administration, and strength or concentration. Our final list of the 50 highest-expenditure drugs accounted for 85 percent of Part B spending in 2013 for drugs paid based on ASP. (For the complete list of these 50 drugs, see app. I.) We identified which of the 50 high-expenditure Part B drugs had coupon programs, either at the time of our analysis (2015) or in 2013, based on information from manufacturers and their websites. If we were unable to identify a coupon program for a drug and did not receive information from its manufacturer, we recorded that the drug did not have a coupon program. Some drugs in our analysis comprised multiple NDCs. As a result, some drugs we analyzed had a coupon program for one product but did not have programs for other products, while other drugs had multiple coupon programs. To describe the extent to which privately insured patients used coupon programs, we obtained data from Truven Health Analytics’ MarketScan® Commercial Claims and Encounters Database on the estimated number of privately insured patients nationally who used drugs with coupon programs in 2013 (to correspond with the year of available Medicare claims data) and the out-of-pocket costs these patients incurred. We also obtained data for 2013 from drug manufacturers on coupon use— specifically, the number of patients who used each program and the average annual coupon discount provided. We calculated the percentage of patients taking a drug who used a coupon program by dividing the number of patients who used a coupon program by the estimated total number of patients who used the drug. 
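The per-drug calculation just described can be sketched as follows; the patient counts below are hypothetical, not figures from this analysis:

```python
# Hypothetical counts for one drug (invented for illustration).
coupon_users = 1_200      # patients who used the drug's coupon program in 2013
total_patients = 8_000    # estimated privately insured patients who used the drug

# Percentage of patients taking the drug who used a coupon program:
# coupon users divided by the estimated total number of patients.
coupon_use_pct = 100 * coupon_users / total_patients
print(f"{coupon_use_pct:.0f}% of patients used a coupon")  # prints "15% of patients used a coupon"
```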
To calculate the average of this percentage across all drugs in our analysis for which we obtained data on coupon use, we weighted each drug’s percentage by the total number of patients who used the drug. We calculated the total amount of coupon discounts provided in 2013 for each drug by multiplying the number of patients who used the program by the average discount provided. To calculate the average annual coupon discount across all drugs for which we obtained data on coupon use, we weighted the average annual discount for each drug by the number of patients who used the coupon program. In addition to data on coupon use, we collected information from manufacturers on the mechanisms through which manufacturers provide coupon discounts to patients and on the ways in which manufacturers inform patients and providers about drug coupon programs. To examine the suitability of the ASP-based payment rate methodology for drugs with coupon programs, we estimated the effective market price that manufacturers received for each drug in 2013: total sales net of manufacturer price concessions to purchasers (as defined by ASP), minus the total coupon discounts provided, divided by the number of units sold. The ASP for a drug in 2013 is equal to the average of the quarterly ASPs in 2013 reported to CMS by drug manufacturers, weighted by the units of the drug sold in a given quarter. We then calculated the percentage by which ASP exceeded the effective market price in 2013 for each drug and across all drugs with coupon discount data in our analysis. To calculate this percentage across all drugs in the analysis, we weighted the percentage for each drug based on the drug’s Medicare spending from July 2013 through June 2014, which is the time period during which changes in ASP in 2013 would take effect. Finally, to estimate what Medicare spending from July 2013 through June 2014 could have been if a drug’s ASP accounted for coupon discounts, we multiplied the drug’s actual Medicare spending during this time by the percentage by which ASP in 2013 could have decreased if it had equaled the effective market price.
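As a minimal sketch of these pricing calculations, with all figures invented for illustration, and treating the effective market price as total sales net of price concessions to purchasers minus coupon discounts, divided by units sold:

```python
# Hypothetical figures for one drug (not actual data from this report).

# Annual ASP: average of the quarterly ASPs reported to CMS,
# weighted by units of the drug sold in each quarter.
quarterly_asp   = [199.0, 200.0, 201.0, 200.0]        # dollars per unit
quarterly_units = [120_000, 125_000, 130_000, 125_000]
units_sold = sum(quarterly_units)
asp = sum(p * u for p, u in zip(quarterly_asp, quarterly_units)) / units_sold

# Effective market price: total sales net of manufacturer price
# concessions to purchasers (ASP's definition), further reduced by
# coupon discounts provided to patients.
net_sales = asp * units_sold
coupon_discounts = 700_000.0
effective_price = (net_sales - coupon_discounts) / units_sold

# Percentage by which ASP exceeded the effective market price.
excess_pct = 100 * (asp - effective_price) / effective_price

# Estimated spending had ASP equaled the effective market price:
# actual spending reduced by the percentage ASP could have decreased.
actual_spending = 50_000_000.0
decrease_pct = 100 * (asp - effective_price) / asp
estimated_spending = actual_spending * (1 - decrease_pct / 100)
estimated_savings = actual_spending - estimated_spending
```

Because the annual ASP is weighted by units sold, high-volume quarters count more; the July 2013 through June 2014 spending window used in the report reflects the lag between when manufacturers report ASP data and when those data take effect in payment rates.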
We then calculated the difference between this spending estimate and actual Medicare spending for the same time period for each drug and across all drugs in our analysis with coupon discount data.

[Appendix table: average annual out-of-pocket cost among patients with out-of-pocket costs (in 2015 dollars), by drug brand name(s): Actemra; Advate, Helixate FS, Kogenate FS, and various other brands; Alimta; Eligard, Lupron Depot, Lupron Depot-PED; Faslodex; Flebogamma, Gammaked, Gammaplex, and various other brands; and Gammagard Liquid ($1,965). Values are based on 2013 data from Truven Health Analytics and have been adjusted for inflation to 2015 dollars using the Consumer Price Index for All Urban Consumers.]

In addition to the contact named above, William Black (Assistant Director), Ramsey Asaly, Namita Bhatia-Sabharwal, George Bogart, Muriel Brown, William A. Crafton, Kelsey Kennedy, Dan Lee, Maria Maguire, Lauren Metayer, and Beth Morrison made key contributions to this report.

Related GAO Products:
Medicare Part B: CMS Should Take Additional Steps to Verify Accuracy of Data Used to Set Payment Rates for Drugs. GAO-16-594. Washington, D.C.: July 1, 2016.
Medicare Part B: Expenditures for New Drugs Concentrated among a Few Drugs, and Most Were Costly for Beneficiaries. GAO-16-12. Washington, D.C.: October 23, 2015.
Medicare: Information on Highest-Expenditure Part B Drugs. GAO-13-739T. Washington, D.C.: June 28, 2013.
Medicare: High-Expenditure Part B Drugs. GAO-13-46R. Washington, D.C.: October 12, 2012.
Medicare Part B Drugs: CMS Data Source for Setting Payments Is Practical but Concerns Remain. GAO-06-971T. Washington, D.C.: July 13, 2006.

Use of drug coupons in the private sector has increased in recent years. GAO was asked to study coupon programs for drugs covered by Medicare Part B, including any implications for Part B spending.
This report (1) identifies coupon programs associated with high-expenditure Part B drugs and describes the extent to which privately insured patients use coupons and (2) examines, for drugs with coupon programs, the suitability of the Part B drug payment rate methodology.

GAO identified high-expenditure Part B drugs using 2013 Medicare claims data—the latest available at the time of the analysis—and collected information from manufacturers on coupon program characteristics in 2015. GAO also analyzed coupon use and patient costs for drugs using 2013 data from manufacturers and private insurers; estimated how Part B spending could have differed if ASP had accounted for coupon discounts in 2013; reviewed federal laws and regulations; and interviewed CMS officials.

In 2015, manufacturers of 29 of the 50 high-expenditure Medicare Part B drugs GAO analyzed offered coupon programs, which reduce the costs patients incur for specific drugs. Part B drugs are typically administered by a physician. Coupon programs are prohibited in the Medicare program but are generally available to privately insured patients. GAO obtained data on coupon discounts for 18 drugs. GAO estimated that 19 percent of privately insured patients who received these drugs used coupons in 2013, but coupon use varied widely depending on the drug—from 1 percent to over 90 percent.

Medicare's methodology for setting Part B payment rates to providers may be less suitable for drugs with coupon programs than for drugs without them. The methodology for most Part B drugs is based on the average sales price (ASP), which is defined by law as the amount physicians and other purchasers pay manufacturers for the drug, net of discounts and rebates to those purchasers. Medicare and its beneficiaries spent $20 billion on Part B drugs paid based on ASP in 2013.
As ASP does not account for coupon discounts to patients, the discounts reduce the effective market price that manufacturers receive for drugs with coupon programs. GAO estimated that, for the 18 drugs for which it obtained coupon discount data, ASP exceeded the effective market price by 0.7 percent in 2013. Part B spending for these drugs could have been an estimated $69 million lower if ASP equaled the effective market price. ASP exceeded the effective market price by more than 1.0 percent for 5 of the 18 drugs, suggesting that the ASP-based methodology may be even less suitable for these drugs. Upward trends in coupon program use and drug prices suggest that these programs could cause the methodology for setting Part B drug payment rates to become less suitable over time for drugs with coupon programs. However, the Centers for Medicare & Medicaid Services (CMS) lacks the authority to collect coupon discount data from manufacturers and thus lacks important information that could inform its ongoing efforts to evaluate alternatives to this methodology.

To determine the suitability of the Part B drug payment rate methodology for drugs with coupon programs, Congress should consider (1) granting CMS authority to collect data from drug manufacturers on coupon discounts for Part B drugs paid based on ASP; and (2) requiring CMS to periodically collect these data and report on the implications of coupon programs for this methodology. The Department of Health and Human Services provided technical comments on a draft of this report, which GAO incorporated as appropriate.
Before a drug can be marketed in the United States, its sponsor must demonstrate to FDA that the drug is safe and effective for its intended use. Because no drug is absolutely safe—there is always some risk of an adverse reaction—FDA approves a drug for marketing when the agency judges that its known benefits outweigh its known risks. After a drug is on the market, FDA continues to assess its risks and benefits. FDA reviews reports of adverse drug reactions (adverse events) related to the drug and information from studies about the drug, including clinical trials and studies following the use of drugs in ongoing medical care (observational studies), conducted by the drug’s sponsor, FDA, or other researchers. If FDA has information that a drug on the market may pose a significant health risk to consumers, it weighs the effect of the adverse events against the benefit of the drug to determine what actions, if any, are warranted. This decision-making process is complex and encompasses many factors, such as the medical importance and utility of the drug, the drug’s extent of usage, the severity of the disease being treated, the drug’s efficacy in treating this disease, and the availability of other drugs to treat the same disorder. CDER, the largest of FDA’s five centers, is the organizational entity within FDA that oversees the review of marketing applications for new drugs and the postmarket monitoring of drugs once they are marketed. Within CDER there are several key offices involved in activities related to postmarket drug safety. OND is the largest of the offices with fiscal year 2005 expenditures of $110.6 million and 715 staff. In fiscal year 2005, more than half of OND’s expenditures, or $57.2 million, came from PDUFA funds. OND’s staff evaluate new drugs for efficacy and safety to decide if a drug should be approved for marketing. 
OND also makes decisions about actions to take when there are postmarket safety issues with a drug (for example, revising the label to include adverse event information or having FDA withdraw approval for marketing). For safety questions, OND interacts with several FDA offices and divisions, but primarily with ODS. ODS is currently located within the Office of Pharmacoepidemiology and Statistical Science (OPaSS), which is organizationally parallel to OND and also contains the Office of Biostatistics. ODS is a much smaller office than OND, with fiscal year 2005 expenditures of $26.9 million and 106 staff. In fiscal year 2005, $7.6 million of ODS’s expenditures were from PDUFA funds. ODS staff evaluate and monitor drug risks and promote the safe use of drugs. While ODS is involved in both premarket and postmarket drug safety issues, its primary focus is on postmarket safety. An important part of the drug approval and postmarket monitoring process is the advice FDA receives from 16 human-drug-related scientific advisory committees, composed of experts and consumer representatives from outside FDA. Considered by FDA as important in helping the agency accomplish its mission and maintaining public trust, these advisory committees provide expert advice to the agency on a range of issues, including safety. The committees are largely organized according to specialized medical areas or conditions such as cardiovascular disease, gastrointestinal conditions, or oncology. In 2002, FDA established the Drug Safety and Risk Management Advisory Committee (DSaRM), 1 of the 16 human-drug-related scientific advisory committees, to specifically advise FDA on drug safety and risk management issues. The committee is composed of individuals from outside FDA with experience in the areas of medication errors, risk communication, risk perception, risk management, clinical trial methodology, evidence-based medicine, biometrics, and pharmacoepidemiology. 
Since it was established, DSaRM has met nine times, with four of those meetings held jointly with another drug-related scientific advisory committee. DSaRM members have also been asked to participate in other scientific advisory committees when safety issues were discussed. ODS sets the agenda for DSaRM meetings, whereas OND sets the agenda for the other scientific advisory committee meetings. Figure 1 describes the offices and external advisory committees involved in postmarket drug safety at FDA. In terms of postmarket drug safety surveillance, FDA has the authority to require that drug sponsors report adverse events to FDA with different reporting schedules based on the seriousness of the event and whether the event has been previously identified and is included in the drug’s label. Sponsors must report serious, unlabeled adverse events to FDA within 15 days of learning about them. Sponsors are required to report other adverse events quarterly for 3 years, then annually thereafter in the form of periodic adverse event reports. In addition, health care providers and patients can voluntarily submit adverse event reports to FDA through its MedWatch program. Adverse event reports become part of FDA’s computerized database known as the Adverse Event Reporting System (AERS). FDA has the authority to withdraw the approval of a drug on the market for safety-related and other reasons, although it rarely does so. Since 2000 there have been 10 drug withdrawals for safety reasons, and in all of these cases the drug’s sponsor voluntarily removed the drug from the market. FDA does not have explicit authority to require that drug sponsors take other safety actions; however, when FDA identifies a potential problem, sponsors generally negotiate with FDA to develop a mutually agreeable remedy to avoid other regulatory action. 
For example, if FDA determines that an approved drug may produce adverse events not previously identified, FDA and the sponsor may negotiate on revised labeling for the drug, and then FDA may issue an accompanying Public Health Advisory for patients and health care providers that describes the safety information. FDA may also request that the sponsor restrict the distribution of the drug in order to minimize a significant risk associated with the drug. FDA has limited authority to require that sponsors conduct postmarket safety studies; it may impose such a requirement in two situations during the premarket phase of drug development and in one situation during the postmarket phase. In the two premarket situations, FDA has the authority to require that sponsors commit to conducting postmarketing studies as a condition of approval. First, FDA’s program for accelerated approval of new drugs for serious or life-threatening illnesses (referred to as “subpart H drugs”) allows FDA to more quickly approve drugs showing meaningful therapeutic benefit with the caveat that the sponsor will conduct or finish studies after the drug is marketed. Such drugs may be made available to the public sooner but with less complete safety information than the normal review process requires for approval. Second, in cases where human efficacy studies of a drug may not be ethical or feasible, FDA may rely on animal studies alone to approve the use of a drug and require postmarket studies as a condition of approval when studies on humans become feasible and ethical. For example, FDA approved a drug in 2003 that is used as a treatment for patients who have been exposed to a chemical nerve agent called Soman. Evidence of the effectiveness of this drug was obtained from animals alone because it is unethical to perform such studies in humans.
In either situation, FDA may withdraw approval of these drugs if, for example, postmarket clinical studies fail to verify clinical benefits, the sponsor fails to perform postmarketing studies with due diligence, or postmarketing restrictions (for example, restricted distribution) are inadequate to assure safe use of the drug. Finally, under certain conditions, after a drug is approved, FDA can also require that drug sponsors conduct postmarket studies of marketed drugs when such studies are needed to provide adequate labeling to ensure the safe and effective use of these drugs in children. Two distinct FDA offices are involved in postmarket drug safety activities. While there is some overlap in their activities, they have different organizational characteristics and perspectives on postmarket drug safety. OND is involved in postmarket drug safety activities as one aspect of its larger responsibility to review new drug applications, and it has the decision-making responsibility for postmarket drug safety. ODS has a primary focus on postmarket drug safety and provides consultation to OND. ODS has been reorganized several times over the years, and there has been an absence of stable leadership. FDA’s postmarket drug safety decision-making process is complex, involving iterative interactions between OND and ODS. Since OND is responsible for approving or disapproving drug applications, its staff are involved in safety activities throughout the life cycle of a drug (that is, premarket and postmarket), and it has the ultimate responsibility to take regulatory action concerning the postmarket safety of drugs. OND is organized into six offices that evaluate drugs and drug products, and within these offices are 17 review divisions organized by medical specialty (for example, oncology or dermatology). OND’s staff includes physicians, pharmacologists, toxicologists, and microbiologists. The key decision makers in OND—division directors and office directors—are physicians. 
In general, OND staff take a clinical perspective in their work. According to the Director of the office, OND’s medical staff have expertise in medical specialties as well as drug regulation, which he said gave them the ability to integrate issues related to the disease, available therapy, effectiveness of the drug, and relative safety. He also told us that OND staff are focused on meeting patient needs and providing health care practitioners and patients with a range of drugs for treatment of a specific disease or condition. Finally, an important characteristic of OND’s organization is that OND’s work and its pace are driven in part by PDUFA goals to complete its review of drug applications within certain time frames. FDA estimates that 51 percent of OND’s work time is devoted to drug safety, either premarket or postmarket. In the drug development or premarket phase, OND staff review safety and efficacy data from sponsors’ animal studies and human clinical trials to decide whether or not to approve a drug. In some cases OND identifies safety concerns at the time of approval that it believes can be managed, for example, by educating patients and providers or restricting distribution to certain populations. In these cases, OND works with ODS and the sponsor to develop a risk management plan to outline these strategies. OND may also request, or in cases where FDA has the authority, require that a sponsor conduct a postmarketing study as a condition of approval. After a drug is on the market, OND receives information about safety issues related to a drug’s use and takes appropriate regulatory action. OND receives information about safety issues in several ways. First, OND staff receive notification of adverse event reports for drugs to which they are assigned and they review the periodic adverse event reports that are submitted by drug sponsors. 
Second, OND staff review safety information that is submitted to FDA when a sponsor seeks approval for a new use or formulation of a drug, and monitor completion of postmarket studies. OND also partners with ODS and other CDER offices for information and analysis to help it make postmarket drug safety decisions. When considering postmarket drug safety issues, OND staff use evidence found in clinical trials. For example, one OND manager told us that OND staff typically review adverse event data related to a drug, obtain a consult from ODS, and then review any clinical trial data. Then, if necessary, OND makes a decision about what action should be taken, which may include negotiating with a sponsor to change a drug’s label, restricting its distribution, or proposing to withdraw the drug’s approval. ODS serves primarily as a consultant to OND and has an overall goal of reducing preventable deaths and injuries associated with the use of drugs with a primary focus on postmarket drug safety. ODS also provides consultation to OND on premarket safety issues, including risk management issues. Although FDA’s postmarket drug safety office has been reorganized several times over the years, the consultant role of the office has remained consistent. ODS was formed in 2002 when FDA combined the Office of Postmarketing Drug Risk Assessment with the MedWatch program (from the Office of Training and Communications) and with patient labeling and risk communication functions (from the Division of Drug Marketing, Advertising, and Communications). ODS was established within the new Office of Pharmacoepidemiology and Statistical Science (OPaSS). OPaSS was made equivalent to OND within the CDER organizational structure. ODS is composed of a small management team and three divisions. According to the ODS Director, the management team consists of the director, deputy director, an associate director for regulatory affairs, and an associate director for science and medicine. 
ODS’s three divisions are: the Division of Drug Risk Evaluation (DDRE), the Division of Surveillance, Research, and Communication Support, and the Division of Medication Errors and Technical Support. The Division of Surveillance, Research, and Communication Support is involved with the acquisition and analysis of data related to drug safety. This division also reviews consumer-oriented materials for content and patient-friendly language, such as medication guides, which are dispensed with drugs that have serious safety concerns. This division also disseminates safety information to the medical community and general public through the MedWatch Web site. The Division of Medication Errors and Technical Support is responsible for conducting premarketing reviews of all proprietary names and labels of drugs in order to minimize medication errors due to similar names or confusion related to the labeling and packaging of drugs. This division also provides postmarketing review and analysis of medication errors. ODS’s DDRE is the primary unit responsible for postmarket drug safety surveillance. Its staff of 47 include safety evaluators, who are generally pharmacists, and epidemiologists, with many having either a Ph.D. in epidemiology or an M.D. with epidemiologic training. The division’s safety evaluators are assigned to cover specific groups or classes of marketed drugs. They primarily review reports of individual adverse events from AERS in order to detect safety signals. The division’s epidemiologists work collaboratively with the safety evaluators, using population-level data to analyze potential safety signals and put them into context. They also review the published literature and conduct research through the use of contracts and other agreements with researchers outside of government, health care utilization databases, and surveillance systems. Finally, safety evaluators and epidemiologists interact with international colleagues on drug safety issues. 
ODS operates primarily in a consultant capacity to OND and does not have any independent decision-making responsibility. When there is a safety concern, ODS staff conduct an analysis and produce a written report for OND called a consult. Safety consults conducted by DDRE staff include analyses of adverse event reports and assessments of postmarket study designs and risk management plans. In fiscal year 2004, DDRE completed approximately 600 safety consults. A majority of DDRE’s consults are requested by OND. In fiscal year 2004, 71 percent of DDRE’s consults were requested by OND; 22 percent were requested by other sources; and 7 percent were self-initiated by DDRE. Over time, the proportion of DDRE-initiated consults has declined while the proportion of OND-requested consults has increased. In general, ODS staff take a population-based perspective in their work, which ODS staff we spoke with contrasted with the clinical perspective of OND. They look at how a drug is being used in the general population and its side effects, and they base their safety analyses on adverse event reports, observational studies, and other population-based data sources. ODS staff do not typically use clinical trial data for their safety analyses and conclusions. In their postmarket work, ODS staff also do not operate under PDUFA drug review goals and therefore do not face the same kinds of deadlines that OND staff face. Furthermore, ODS staff have sometimes taken an academic research approach to safety work, for example, publishing case reports about adverse events or safety analyses in peer-reviewed journals. There has been high turnover of ODS directors—there have been eight different directors of the office and its various predecessors—in the past 10 years. Four of the directors have been “acting” directors, not permanent ones. From February to September 2002 and again from October 2003 to January 2005, the Director of OPaSS also served as the Acting Director of ODS.
The Director of CDER, as well as staff within and outside of ODS, told us that the lack of consistent leadership of ODS has had a negative effect on the work and morale of staff. One ODS staff member told us that since drug safety issues often take a fair amount of time to resolve, it is important to have consistency in leadership so that the leaders are knowledgeable of ongoing issues. In October 2005 FDA appointed a permanent director of ODS from within the organization, the first permanent director since October 2003. The decision-making process for postmarket drug safety is complex, involving input from a variety of FDA staff and organizational units and information sources, but the central focus of the process is the iterative interaction between OND and ODS. As we have described, ODS safety consults can be initiated within ODS or requested by OND, but typically OND requests them. OND often requests an analysis because of information it receives from the drug’s sponsor about a safety concern. ODS safety evaluators then search AERS for all relevant cases and develop a summary of individual cases from the reports. The safety evaluators assess the cases to determine whether the adverse events are drug-related and whether there are any common trends or risk factors. ODS epidemiologists sometimes collaborate with the safety evaluators by estimating how frequently an adverse event occurs among the population exposed to a particular drug, and they compare this estimate with how frequently the same event occurs in a population not treated by the drug. The epidemiologists also might use information from observational studies and drug use analyses to analyze the safety issue. When completed, ODS staff summarize their analysis in a written consult. The ODS division director of the staff who worked on the consult typically reviews the consult and either signs it, indicating agreement, or writes a memorandum explaining what part he or she disagrees with and why. 
According to FDA officials, OND staff within the review divisions usually decide what regulatory action should occur, if any, by considering the results of the safety analysis in the context of other factors such as the availability of other similar drugs and the severity of the condition the drug is designed to treat. Several CDER staff whom we interviewed, including OND and ODS staff, told us that most of the time there is agreement within FDA about what safety actions should be taken. At other times, however, OND and ODS disagree about whether the postmarket data are adequate to establish the existence of a safety problem or support a recommended regulatory action. In those cases, sometimes OND requests additional analyses by ODS and sometimes there is involvement from other FDA organizations. In some cases, OND seeks the advice of FDA’s scientific advisory committees, including DSaRM, for decisions about postmarket drug safety. The recommendations of the advisory committees do not bind the agency to any decision. According to FDA officials, if a decision is made by OND that a safety action is warranted, then OND staff generally work with the drug’s sponsor to implement it. There was sometimes a lack of consensus in our drug case studies, and we observed that ODS often performed a series of related analyses about the same safety concerns for OND over a significant period of time. As an illustration of this iterative decision-making process, OND requested in 2002 that ODS analyze cases of serious skin reactions associated with the pain reliever Bextra after the drug’s sponsor had communicated with OND about this potential risk. ODS staff searched the AERS database and found several related cases for review. They estimated the occurrence of reported cases of serious skin reactions among Bextra users by using the cases and drug utilization data.
On the basis of their analysis, ODS recommended that Bextra’s label be updated to include this risk, and OND followed the recommendation by working with the sponsor to update the label in 2002. Between 2002 and 2004, ODS staff conducted five other analyses of the occurrence of serious skin reactions associated with Bextra, including two that were requested by OND. In March 2004, ODS staff recommended that Bextra carry a boxed warning about its risks of serious skin reactions. The ODS staff based their recommendation on their finding that Bextra’s risk for serious skin reactions was 8 to 13 times higher than that for other similar drugs and 20 times higher than the incidence rate in the population. The ODS Division Directors who reviewed the analysis and recommendation agreed, but the OND review division responsible for Bextra did not initially agree. About 5 months later, the OND review division decided a boxed warning was warranted, after ODS performed another analysis requested by OND, comparing Bextra’s risk with several other similar drugs, including Mobic. ODS found no reported cases of serious skin reactions associated with Mobic. In 2005, a joint meeting of FDA’s Arthritis Advisory Committee and DSaRM was held to discuss the postmarket safety of several anti-inflammatory drugs including Bextra, with a focus on their cardiovascular risks. The committees recommended, after presentations by FDA staff and others, that Bextra should remain on the market. A few months later, FDA asked the sponsor to withdraw the drug from the market because, in part, its risk for serious skin reactions appeared to be greater than for other similar anti-inflammatory drugs. FDA’s postmarket drug safety decision-making process has been limited by a lack of clarity, insufficient oversight by management, and data constraints. We observed that there is a lack of established criteria for determining what safety actions to take and when. 
Aspects of ODS’s role in the process are unclear, including its role in participating in scientific advisory committee meetings organized by OND. A lack of communication between ODS and OND’s review divisions and limited oversight of postmarket drug safety issues by ODS management have hindered the decision-making process. FDA relies primarily on three types of data sources—adverse event reports, clinical trial studies, and observational studies—in its postmarket decision making. Each data source has weaknesses, however. FDA also faces constraints in requiring certain studies and obtaining data. While acknowledging the complexity of the postmarket drug safety decision-making process, we observed in our interviews with OND and ODS staff and in our case studies that the process lacked clarity about how drug safety decisions are made and about the role of ODS. If FDA had established criteria for certain postmarket drug safety decisions, then some of the disagreements we observed in our case studies could possibly have been resolved more quickly. For example, in the case of Bextra, as described earlier, ODS and OND staff disagreed about whether the degree of risk warranted a boxed warning, the most serious warning placed in the labeling of a prescription medication. As another example, there were differing opinions over whether to take stronger actions against Propulsid, the nighttime heartburn medication associated with cardiovascular side effects, or simply to modify the label. Between 1995 and 1997, Propulsid’s label had been modified, including the addition of a boxed warning, to warn consumers and professionals about the cardiovascular side effects of the drug. In June 1997 a task force within FDA, including OND and ODS staff, was convened to further evaluate the efficacy and safety of Propulsid.
FDA staff, including task force members, later met to discuss several regulatory options, including proposing further label modifications, presenting the agency’s concerns to an advisory committee, and proposing to withdraw approval of Propulsid. According to a former OND manager, as a result of this meeting, FDA decided to seek further label modifications. Some staff, from both OND and ODS, however, supported stronger actions at this time, including proceeding with proposing a withdrawal of approval. According to several FDA officials, in the absence of established criteria, decisions about safety actions are often based on the case-by-case judgments of the individuals reviewing the data. Our observations are consistent with previous FDA reviews. In 2000, two internal CDER reports based on interviews that FDA conducted with staff indicated that an absence of established criteria for determining what safety actions to take, and when, posed a challenge for making postmarket drug safety decisions. The reports recognized the need to establish criteria to help guide such decisions. In a review of the safety issues concerning Propulsid, CDER staff recommended that a standardized approach to postmarket drug safety issues be established, by addressing various issues such as how to determine when to incorporate safety issues into labeling and when stronger actions should supersede further labeling changes. According to the report, several staff noted frustration with the numerous changes made to Propulsid’s label that were mostly ineffective in reducing the number of cardiovascular adverse events. Similarly, after the diabetes drug Rezulin was removed from the market in 2000 because of its risk for liver toxicity, a CDER report focused on Rezulin also recommended that a consistent approach to postmarket drug safety be developed, including what regulatory actions should occur to address postmarket drug safety concerns, and when they should occur. 
In addition to a lack of criteria for safety actions, we observed a lack of clarity related to ODS’s recommendations. In practice, ODS often makes written recommendations about safety actions to OND, but there is some confusion over this role, according to several ODS managers, and there is no policy that explicitly states whether ODS’s role includes this responsibility. The case of Arava illustrates this confusion. In 2002, the OND review division responsible for Arava, a drug used to treat rheumatoid arthritis, requested that ODS review postmarket data for cases of serious liver toxicity associated with its use. The ODS staff who worked on this analysis recommended that Arava be withdrawn from the market because they concluded that the risk for serious liver toxicity exceeded its benefits. The OND Division Director responsible for Arava felt that ODS should not have included a recommendation in its consult, arguing that making such recommendations was the responsibility of OND, not ODS. Some of the confusion may be the result of ODS’s evolving role in postmarket drug safety. A current and a former ODS manager told us that in the past, ODS’s safety consults were technical documents summarizing adverse events with minimal data analysis and few recommendations. Over time the consults have become more detailed with sophisticated data analyses and more recommendations about what safety action is needed (for example, label change, medication guide, drug withdrawal). ODS’s role in scientific advisory committee meetings is also unclear. According to the OND Director, OND is responsible for setting the agenda for the advisory committee meetings, with the exception of DSaRM. This includes who is to present and what issues will be discussed by the advisory committees. For the advisory committees (other than DSaRM) it is unclear when ODS staff will participate.
While ODS staff have presented their postmarket drug safety analyses during some advisory committee meetings, our case study of Arava, and another case involving antidepressant drugs, provide examples of the exclusion of ODS staff. For example, in March 2003, the Arthritis Advisory Committee met to review the efficacy of Arava, and its safety in the context of all available drugs to treat rheumatoid arthritis. The OND review division responsible for Arava presented its own analysis of postmarket drug safety data at the meeting, but did not allow the ODS staff—who had recommended that Arava be removed from the market—to present their analysis because it felt that ODS’s review did not have scientific merit. Specifically, the OND review division felt that some of the cases in the ODS review did not meet the definition of acute liver failure, the safety issue on which the review was focused. The OND division also believed that in some of the cases ODS staff inappropriately concluded that liver failure resulted from exposure to Arava. After the meeting, ODS epidemiologists and safety evaluators asked the ODS and OPaSS Directors to clarify ODS’s role involving postmarket drug safety issues, including its role at advisory committee meetings. According to an FDA official, there was no written response to this request. As another example of ODS’s unclear role in scientific advisory committees, in February 2004 an ODS epidemiologist was not allowed to present his analysis of safety data at a joint meeting of the Psychopharmacologic Drugs Advisory Committee and the Pediatric Subcommittee of the Anti-Infective Drugs Advisory Committee that was held to discuss reports of suicidal thoughts and actions in children with major depressive disorder during clinical trials for various antidepressant drugs. 
According to statements by FDA officials at a congressional hearing, OND believed that the ODS staff member’s analysis, which showed a relationship between the use of antidepressants and suicidal thoughts and behaviors in children, was too preliminary to be presented in detail. The analysis was based on pediatric clinical trial data that FDA requested from the sponsors of several antidepressant drugs. FDA had asked the sponsors to identify suicide-related events using specific methods, and then ODS was asked to analyze all of the submitted data. OND later decided that the sponsors may have been inconsistent in their classification approaches and asked outside experts to perform additional reviews of all the cases by rating whether particular events could be classified as suicidal. The staff member who performed the ODS review, however, believed that the available data were sufficient to conclude that there was a relationship between the use of antidepressants and suicidal thoughts and behaviors in pediatric patients and to recommend further safety actions. In his consult, the ODS staff member also concluded that while additional analyses would yield valuable information, they would also take several more months to complete. In light of this delay, he recommended an interim plan to discourage the use of all but one antidepressant in the treatment of pediatric major depressive disorders. In December 2004, ODS epidemiologists communicated to the CDER Director their position that ODS’s role should include the responsibility of presenting all relevant ODS data at advisory committee meetings. According to an FDA official, there was no written response to this request. However, in our interviews, the Directors of CDER and OND told us that in retrospect they felt it was a mistake for FDA to have restricted the ODS epidemiologist from presenting his safety information at the meeting.
Several ODS managers that we interviewed told us that there is also a lack of clarity regarding the role of the epidemiologist in postmarket drug safety work. Although ODS’s epidemiologists have some defined responsibilities, there appears to be some confusion about the scope of their activities and a lack of understanding on the part of OND about their role and capabilities. A prior review of postmarket drug safety identified similar issues. For example, in that review some epidemiologists indicated that they should be able to maintain an independent approach to their research and its publication. However, some OND review division directors indicated that the work of the epidemiologists should be considered within the context of CDER’s overall regulatory mission. Further, the epidemiologists’ research conclusions do not necessarily reflect the conclusions of FDA but may be perceived as such by the medical community. ODS managers indicated that a current challenge for FDA is to determine how it should use its epidemiologists and what their work products should be. According to the current ODS Director, efforts are needed to help OND better understand what epidemiologists can do. The epidemiologists themselves have asked for greater clarity about their role and a stronger voice in decision making. A lack of communication between ODS and OND’s review divisions and limited oversight of postmarket drug safety issues by ODS management have also hindered the decision-making process. The frequency and extent of communication between ODS and OND’s divisions on postmarket drug safety vary. ODS and OND staff often described their relationship with each other as generally collaborative, with effective communication. But both ODS and OND staff said sometimes there were communication problems, and this has been an ongoing concern.
For example, according to some current and former ODS staff, OND does not always adequately communicate the key question or point of interest to ODS when it requests a consult, and as ODS works on the consult there is sometimes little interaction between the two offices. After a consult is completed and sent to OND, ODS staff reported that OND sometimes does not respond in a timely manner or at all. Several ODS staff characterized this as consults falling into a “black hole” or “abyss.” OND’s Director told us that OND staff probably do not “close the loop” in responding to ODS’s consults, which includes explaining why certain ODS recommendations are not followed. In some cases CDER managers and OND staff criticized the methods used in ODS consults and told us that the consults were too lengthy and academic. ODS management has not effectively overseen postmarket drug safety issues, and as a result, it is unclear how FDA can know that important safety concerns have been addressed and resolved in a timely manner. According to a former ODS Director, the small size of ODS’s management team has presented a challenge for effective oversight of postmarket drug safety issues. Another problem is the lack of systematic information on drug safety issues. According to the ODS Director, ODS currently maintains a database of consults that can provide certain types of information such as the total count, the types of consults that ODS staff conducted, and the ODS staff that wrote the consults. But it does not include information about whether ODS staff have made recommendations for safety actions and how the safety issues were handled and resolved, including whether recommended safety actions were implemented by OND. For example, ODS was unable to provide us with a summary of the recommendations for safety actions that its staff made in 2004 because it was not tracking such information. 
Data constraints—such as weaknesses in data sources and limitations in requiring certain studies and obtaining data—contribute to FDA’s difficulty in making postmarket drug safety decisions. OND and ODS use three different sources of data to make postmarket drug safety decisions. They include adverse event reports, clinical trial studies, and observational studies. While data from each source have weaknesses that contribute to the difficulty in making postmarket drug safety decisions, evidence from more than one source can help inform the postmarket decision-making process. The availability of these data sources is constrained, however, because of FDA’s limited authority to require drug sponsors to conduct postmarket studies and because of its limited resources. While decisions about postmarket drug safety are often based on adverse event reports, FDA cannot establish the true frequency of adverse events in the population with AERS data. The inability to calculate the true frequency makes it hard to establish the magnitude of a safety problem, and it makes comparisons of risks across similar drugs difficult. In addition, it can be difficult to attribute adverse events to particular drugs when there is a relatively high incidence rate in the population for the medical condition. For example, ODS staff analyzed adverse event reports of serious cardiovascular events among users of the anti-inflammatory drug Vioxx in a 2001 consult. However, because Vioxx was used to treat arthritis, which occurs more frequently among older adults, and because of the relatively high rate of cardiovascular events among the elderly, ODS staff concluded that the postmarket data available at that time were not sufficient to establish that Vioxx was causally related to serious cardiovascular adverse events. With AERS data it is also difficult to attribute adverse events to the use of particular drugs because the AERS reports may be confounded by other factors, such as other drug exposures.
For example, one AERS report described a patient who developed cardiac arrest after he was given the drug hyaluronidase with two local anesthetics in preparation for cataract surgery. Because local anesthetics can lead to cardiac events, the ODS safety evaluator who reviewed this case concluded that the causal role of hyaluronidase alone could not be established. FDA may also use data from clinical trials and observational studies to support postmarket drug safety decisions, but each source has weaknesses that constrain the usefulness of the data provided. Clinical trials, in particular randomized clinical trials, are considered the “gold standard” for assessing evidence about efficacy and safety because they are the strongest method by which one can determine whether new drugs work. However, clinical trials also have weaknesses. Clinical trials typically have too few enrolled patients to detect serious adverse events associated with a drug that occur relatively infrequently in the population being studied. They are usually carried out on homogeneous populations of patients that often do not reflect the types of patients who will actually take the drugs, including those who have other medical problems or take other medications. In addition, clinical trials are often too short in duration to identify adverse events that may occur only after long use of the drug. This is particularly important for drugs used to treat chronic conditions where patients are taking the medications for the long term. Observational studies, which use data obtained from population-based sources, can provide FDA with information about the population effect and risk associated with the use of a particular drug. Because they are not controlled experiments, however, there is the possibility that the results can be biased or confounded by other factors.
Despite the weaknesses of clinical trials and observational studies, evidence from both types of studies helps inform FDA’s postmarket drug safety decision-making process. For example, clinical trials conducted by drug sponsors for their own purposes sometimes provide information for FDA’s evaluation of postmarket drug safety issues. For instance, drug sponsors sometimes conduct clinical trials for drugs already marketed in order to seek approval for a new or expanded use. These studies may also be conducted to support claims about the additional benefits of a drug, and their results sometimes reveal safety information about a marketed drug. For example, to support the addition of a claim for the lower risk of gastrointestinal outcomes (such as ulcers and bleeding), Vioxx’s sponsor conducted a clinical trial that found a greater number of heart attacks in patients taking Vioxx than in those taking another anti-inflammatory drug, naproxen. This safety information was later added to Vioxx’s labeling. In addition to relying on sponsors, ODS partners with researchers outside of FDA to conduct postmarket observational studies through cooperative agreements and contracts. For example, several cooperative agreements supported a study of Propulsid using population-based databases from two managed care organizations and one state Medicaid program, before and after warnings on contraindications were added to the drug’s label in 1998. The cooperative agreement researchers, who included ODS staff, measured the prevalence of contraindicated use of Propulsid, and found that a 1998 labeling change warning about the contraindication did not significantly decrease the percentage of users who should not have been prescribed this drug. FDA’s access to postmarket clinical trial and observational data, however, is limited by its authority and available resources.
As described previously, FDA does not have broad authority to require that a drug sponsor conduct an observational study or clinical trial for the purpose of investigating a specific postmarket safety concern. One senior FDA official and several outside drug safety experts told us that FDA needs greater authority to require such studies. Long-term clinical trials may be needed to answer safety questions about risks associated with the long-term use of drugs, such as those that are widely used to treat chronic conditions. For example, during a February 2005 scientific advisory committee meeting, some FDA staff and members of the Arthritis Advisory Committee and DSaRM indicated that there was a need for better information on the long-term use of anti-inflammatory drugs and discussed how a long-term trial might be designed to study the cardiovascular risks associated with the use of these drugs. As another example, FDA approved Protopic and Elidel, both eczema creams, in December 2000 and December 2001, respectively. Since their approval, FDA has received reports of lymphoma and skin cancer in children and adults treated with these creams. In March 2005, FDA announced that it would require label changes for the creams, including a boxed warning about the potential cancer risk. An ODS epidemiologist told us that FDA has been trying for several years to get the sponsor to do long-term studies of these drugs, but that it has been difficult to negotiate. In the absence of specific authority, FDA often relies on drug sponsors voluntarily agreeing to conduct such postmarket studies. But the postmarket studies that drug sponsors agree to conduct have not consistently been completed. For example, one study estimated that the completion rate of postmarket studies, including those that sponsors have voluntarily agreed to conduct, rose from 17 percent in the mid-1980s to 24 percent between 1991 and 2003.
FDA has little leverage, such as the ability to impose administrative penalties, to ensure that these studies are carried out. In terms of resource limitations, several FDA staff (including CDER managers) and outside drug safety experts told us that in the past ODS has not had enough resources for cooperative agreements to support its postmarket drug surveillance program. Annual funding for this program was less than $1 million from fiscal year 2002 through fiscal year 2005. In October 2005 FDA awarded four contracts to replace the cooperative agreements, and FDA announced that these contracts would allow FDA to more quickly access population-level data and a wider range of data sources. The total amount of the contracts, which cover 2005 through 2010, is about $5.4 million, which averages about $1.1 million per year, a slight increase from fiscal year 2005 funding. The new contracts will provide access to data from a variety of health care settings including health maintenance organizations, preferred provider organizations, and state Medicaid programs. According to an FDA official, FDA does not conduct its own clinical trials because of the high cost associated with carrying out such studies and because FDA does not have the infrastructure needed to conduct them. It was recently estimated that clinical trials designed to study long-term drug safety could cost between $3 million and $7 million per trial. The estimated cost of just one such trial would exceed the amount FDA has currently allocated ($1.1 million) for its contracts with researchers outside of FDA. FDA has undertaken several initiatives to improve the postmarket drug safety decision-making process, but these are unlikely to address all the gaps. FDA’s newly created Drug Safety Oversight Board (DSB) may help provide oversight of important, high-level safety decisions, but it does not address the need for systematic tracking of ongoing safety issues.
Other initiatives, such as FDA’s draft policy on major postmarket drug safety decisions and communication initiatives may help improve the clarity and effectiveness of the process, but they have not been fully implemented. FDA’s dispute resolution processes to help resolve disagreements over safety decisions have not been used and may not be viewed as sufficiently independent. FDA is taking steps to identify additional data sources for postmarket drug safety studies, and expects to use additional funds for this purpose, but FDA still faces data constraints. FDA’s DSB, created in the spring of 2005, may help provide oversight of important, high-level safety decisions within CDER; however, there is still a need for systematic tracking of ongoing safety issues. FDA established the DSB to help provide independent oversight and advice to the CDER Director on the management of important safety issues. The DSB reports directly to the head of CDER and consists primarily of FDA officials from within CDER and other FDA centers. According to an FDA policy document, the DSB includes 11 voting members from CDER, with 3 representatives from ODS and 3 from OND. Currently the OND and ODS Directors are voting members. It also includes representatives from other federal agencies. DSB members who conducted the primary preapproval review of the drug or who were involved with a drug’s approval or postmarket safety review will not be allowed to vote on issues concerning that drug. As of February 2006, the DSB was meeting regularly and an FDA official told us that it is expected to meet monthly. The meetings are not open to the public, but FDA posts abbreviated summaries of the meeting minutes on its Web site. According to an FDA policy document, the DSB will identify, track, and oversee the management of important drug safety issues. 
Important drug safety issues include serious side effects identified after a drug’s approval that have the potential to significantly alter the drug’s benefit-to-risk analysis or significantly affect physicians’ prescribing decisions. According to an FDA official, ODS and OND submit monthly reports of safety issues for discussion by the DSB to be used in setting the agenda for the meetings. In addition, at any time individuals within and outside of FDA can submit issues to be considered by contacting a DSB member or the executive director. The FDA official said that the DSB will not be involved in the ongoing process of postmarket surveillance and decision making about drug safety issues, but rather will be involved with ensuring that broader safety issues—such as ongoing delays in changing a label—are effectively resolved. The DSB may also develop standards for certain kinds of safety-related actions, such as when a drug warrants a boxed warning or a medication guide. The FDA official acknowledged that safety-related decisions are still based on individual judgments and lack consistency. The DSB has plans to form subcommittees to look at policy development in this and other areas. The DSB may help provide high-level oversight of safety issues, but it does not address the problem of the lack of systematic tracking of safety issues and their resolution. Information about the resolution of safety issues identified by ODS staff is still available neither to ODS management nor to the DSB. FDA’s draft policy on major postmarket drug safety decision making and other process and organizational initiatives may make the process clearer and more effective, but these efforts have not been fully implemented. Several years ago, FDA drafted a policy entitled “Process for Decision-Making Regarding Major Postmarketing Safety-Related Actions” that could help improve the decision-making process, but as of February 2006, this policy has not been finalized or implemented.
The draft policy was designed to ensure that all major postmarket safety recommendations, such as the market withdrawal of a drug, would be discussed by involved CDER managers, starting at the division level. The draft policy states that CDER staff, including ODS staff, are to write a detailed memorandum describing their recommendation for a major safety action. If the immediate supervisor disagrees, he or she prepares a memorandum explaining the nature of the differences, and then the division director prepares a memorandum indicating how the issue should be resolved. In some cases the supervisor and division director may be the same person. A Division Consensus Meeting is to be convened for every recommendation regardless of whether there is initial agreement between the staff member making the recommendation and the supervisor and division director. The process stops at the division level if a decision is reached that a major safety action is not needed. Otherwise, the recommendation is discussed at higher levels of management in CDER. An Office Action Meeting would then be held to recommend a course of action to the CDER director, although it is possible that there still could be disagreement at the office level. A final meeting, called the Decisional Meeting, would then be held to decide a course of action, and would include the CDER director as well as office- and division-level staff. It is not clear how the new DSB will be integrated into the draft policy on major postmarket drug safety decision making, and FDA officials told us they are still trying to determine how to do this. Other initiatives may improve the decision-making process, but these efforts have not been fully implemented. For example, ODS has established a Process Improvement Team to assess the safety consult process, including how OND asks questions about postmarket safety concerns and how ODS should answer the questions.
OND has established a similar team to assess the overall process for reviewing postmarket safety information, including the consult process. Both teams plan to make recommendations; for example, the OND representative chairing the OND team told us the OND team plans to recommend which office (OND or ODS) should have responsibility for certain postmarket tasks, such as reviewing periodic adverse event reports. According to the OND chair, the OND team expects to finalize its recommendations by the end of March 2006. According to the ODS Director, the ODS team’s work was still in progress as of January 2006 and would not be completed for about 6 months. In February 2006, ODS established a new Process Improvement Team to identify best practices for safety evaluators and standardize their work (for example, the review of adverse event reports). The ODS Director estimated that the work of this team would be completed in 3 to 4 months. FDA officials told us that they have proposed reorganizing CDER to dissolve OPaSS and have the director of ODS report to the CDER director. FDA plans to implement this reorganization in May 2006. In the meantime, ODS has taken some other steps to improve communication and oversight of safety issues. According to the ODS Director, the DDRE Director recently instituted regular meetings between the safety evaluators in his division and the OND review divisions in order to discuss drug safety issues, including ongoing consults, issues that DDRE staff have not yet provided consultation on, and how safety issues have been resolved. According to the DDRE Director, over half of OND’s review divisions have participated in these regular meetings to date. The Director of ODS also acknowledged that ODS needs to have a better way to track safety issues as they are emerging. He told us that ODS is developing a tracking system that is currently being tested and is expected to become operational in 2006.
The Director also said he had plans to build up the immediate office of ODS by adding an associate director of operations and staff responsible for working on relationships with other federal agencies (for example, National Institutes of Health) and contractors. He has decided to hold regular meetings with the ODS deputy director and division directors for the specific purpose of discussing the status of drug safety problems. Despite the efforts that FDA has made to improve its postmarket drug safety decision-making process, the role of ODS in advisory committee meetings (other than DSaRM) has not been clarified. The role of ODS in scientific advisory committee meetings is not discussed in the draft policy on major postmarket drug safety decisions or in other policy documents. In addition, according to the ODS Director, the role of epidemiologists in ODS requires further clarification. A Process Improvement Team that was formed to address this issue was suspended, and the ODS Director said that other ways to approach this issue are being evaluated. As of February 2006, the DSB and a pilot program had not been used to help resolve organizational and individual disagreements that occur within CDER over safety decisions, and they may not be viewed as sufficiently independent. According to an FDA policy document, the DSB will resolve organizational disputes over approaches to drug safety. According to an FDA official, as of February 2006, however, the DSB had not handled any such formal disputes. An FDA official told us that, as an example, ODS might believe that a drug should come off the market but OND does not agree, and resolving this matter could be handled by the DSB.
Although DSB members who were involved with a drug product’s approval or safety review will be recused from the DSB’s decision-making process concerning that drug, the current DSB membership includes CDER managers who oversee the drug approval and safety review processes, which may limit the ability of the DSB to provide neutral, independent advice in the handling of organizational disputes. In addition, decisions made by the DSB will serve as recommendations to the CDER director, who is the final decision maker. This reporting chain may further limit the independence of the DSB since the CDER director manages the overall drug approval and safety review processes. In addition to the DSB, a pilot program for dispute resolution procedures had not been used by CDER staff as of February 2006. In November 2004 FDA implemented a pilot program for dispute resolution that is designed to let individual CDER staff have their views heard when they disagree with a decision that could have a significant negative effect on public health, such as a proposed safety action or the failure to take a safety action. Any CDER employee can initiate the process, but the CDER ombudsman, in consultation with the CDER director, determines whether a dispute warrants formal review. If the CDER director and ombudsman decide to proceed, the CDER director would establish a panel of three or four members, one of whom the CDER employee initiating the process would nominate. The panel would review the case and make a recommendation to the CDER director, who would then decide how the dispute should be resolved. Like the DSB, the pilot program also does not offer employees an independent forum for resolving disputes. The CDER director decides whether the process should be initiated, appoints the chair of the panel, and is the final adjudicator. FDA is taking steps to identify additional data sources that it may obtain with its current authority and resources.
In fiscal year 2006, FDA expects to use $10 million for this purpose, consistent with direction in the Conference Report accompanying FDA’s fiscal year 2006 appropriation. The Conference Report specified that a $10 million increase over the prior year was provided for drug safety activities, including $5 million for ODS and $5 million for drug safety activities within CDER. The conferees intended for the increases to be used for FDA’s highest-priority drug safety needs that were not funded in fiscal year 2005, such as acquiring access to additional databases beyond those that will be accessed through its new contracts. The ODS Director told us that ODS plans to use the $5 million to hire staff, specifically safety evaluators and technical support staff. The other $5 million is to be used for postmarket drug safety work throughout CDER; plans for that funding had not been finalized as of February 2006. The Director of ODS said that, given the high cost of planning and conducting observational studies, only one or two studies can be funded each year. According to the ODS Director, FDA has started to work with the Centers for Medicare & Medicaid Services to obtain access to data on Medicare beneficiaries’ experience with prescription drugs covered under the new prescription drug benefit, which began in 2006. This data source may provide information about drug utilization for a very large population of Medicare recipients and can potentially be linked to claims data, providing information about patients’ medical outcomes. According to the ODS Director, a team of ODS staff has been working with the Centers for Medicare & Medicaid Services to determine what data elements ODS would seek to access; however, it is uncertain how useful the data will be because there are potential data reliability issues. For example, it is unclear whether ODS will be able to do medical chart reviews to verify medical outcomes.
Additionally, in April 2005 FDA requested information from other organizations about their active surveillance programs in the United States for identifying serious adverse events. In its request, FDA noted that it was seeking information related to these programs because active surveillance would strengthen and complement the tools it currently has to monitor postmarket drug safety. As an example, FDA noted interest in learning about systems that can identify specific acute outcomes for which a drug is frequently considered as a potential cause, such as acute liver failure and serious skin reactions. According to the ODS Director, a working group within ODS is currently evaluating the responses to the request for information; however, it is unlikely that FDA will fund any of these active surveillance systems in 2006 because the agency needs to ensure that such systems can identify drug safety concerns earlier than other data sources before it invests in them. The working group’s review of the request for information was still ongoing as of March 2006. Postmarket drug safety decision making at FDA is a complex process that sometimes results in disagreements, as observed in our case studies. Scientific disagreements may be expected in a large regulatory agency, especially given the different professional orientations of the key players, OND and ODS, and the inherent limitations of the available data. However, because of the potential public health consequences of FDA’s decisions about postmarket drug safety issues, it is important to come to a decision quickly. In our review, we observed opportunities for improving the clarity and oversight of the process and strengthening the information used for decision making.
FDA has recently made some important organizational and policy changes, but more could be done to improve management oversight of postmarket drug safety issues, to improve the dispute resolution process, and to strengthen the collaboration between OND and ODS. In order to address the serious limitations of the data, FDA will need to continue its efforts to develop useful observational studies and to access and use additional healthcare databases. However, even if FDA is successful in expanding its data sources for postmarket drug safety surveillance, it would still benefit from information from long-term clinical trials of certain drugs and the additional authority to require that these studies be carried out. To improve the decision-making process for postmarket drug safety, the Congress should consider expanding FDA’s authority to require drug sponsors to conduct postmarket studies, such as clinical trials or observational studies, as needed, to collect additional data on drug safety concerns. To improve the postmarket drug safety decision-making process, we recommend that the Commissioner of FDA take the following four actions: establish a mechanism for systematically tracking ODS’s recommendations and subsequent safety actions; with input from the DSB and the Process Improvement Teams, revise and implement the draft policy on major postmarket drug safety decisions; improve CDER’s dispute resolution process by revising the pilot program to increase its independence; and clarify ODS’s role in FDA’s scientific advisory committee meetings involving postmarket drug safety issues. FDA reviewed a draft of this report and provided comments, which are reprinted in appendix V. FDA also provided technical comments, which we incorporated as appropriate. FDA commented that our conclusions were reasonable and consistent with actions that it has already begun or planned. FDA did not comment on our recommendations. 
In addition, FDA made six comments about specific aspects of our draft report. First, concerning our description of the complexity of the postmarket decision-making process, FDA stated that the draft report implied the process is too complex and that FDA should not be criticized for its difficult task of weighing the risks and benefits associated with drugs with the data available to the agency. We agree with FDA that postmarket drug safety issues are inherently complex. For that reason, we believe that FDA needs to have greater clarity about how decisions are made and to establish more effective oversight of the decision-making process. Furthermore, we believe that our report fairly characterizes the limitations of the data that FDA relies on in this complex process. Because of the data limitations, we believe that FDA needs greater authority to access certain kinds of postmarket safety data. Second, FDA noted that factors other than PDUFA goals influence OND’s work and its pace. FDA also stated that ODS plays a role in certain premarket safety activities and that PDUFA goals also apply to these activities. We clarified these points in the report. Third, FDA stated that referring to ODS as a consultant to OND understates the role of ODS in drug safety and that CDER considers ODS and OND to be equal partners in the identification and timely resolution of drug safety issues. As we stated in the draft report, we found that the central focus of the process is the iterative interaction between OND and ODS. Nonetheless, ODS does not have any independent decision-making responsibility while OND has the ultimate responsibility to make decisions about regulatory actions concerning the postmarket safety of drugs. Further, both OND and ODS refer to ODS reports on drug safety as consults. For these reasons, we believe that our description of ODS as a consultant to OND is accurate. 
Fourth, FDA agreed with our statements about the role of the DSB and indicated that the DSB has reviewed current mechanisms for identifying safety issues and discussed ways to enhance the tracking of those issues. Fifth, FDA commented that our examples of ODS staff being excluded from advisory committee meetings imply that such disagreements occur frequently. FDA stated that this is not the case, and that OND and ODS work cooperatively in the vast majority of cases. However, our work demonstrates a need for further clarification of ODS’s role. Finally, FDA commented that our case study chronology for Arava was incomplete because it did not describe two meetings. We provided additional clarification in the report about the meetings in the chronology for Arava. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to others who are interested and make copies available to others who request them. If you or your staffs have any questions about this report, please contact me at (202) 512-7119 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Arava was approved for marketing in 1998. Arava is indicated in adults for the treatment of active rheumatoid arthritis to reduce the signs and symptoms of the disease, slow down damage to joints, and improve physical function. Arava has been associated with cases of serious liver injury, some of which have been fatal. In this case, the Office of Drug Safety (ODS) identified a serious safety signal—hepatic failure and fatal hepatitis—associated with Arava in March 2001. A citizen’s petition in 2002 spurred further inquiry into the issue. 
An ODS analysis of adverse event reports concluded that Arava was associated with a substantial increased risk of liver failure and recommended removal from the market, but the Office of New Drugs (OND) disagreed. OND established an internal panel of senior staff and hired outside consultants to further review the reports of liver failure, and both the panel and outside consultants concluded that in most cases Arava was not causally related to liver failure. In 2003 a Food and Drug Administration (FDA) advisory committee meeting was held to discuss Arava, and ODS staff were not allowed to present their analysis. FDA approved revised labeling of Arava in 2003 that strengthened the drug’s warnings, and it remained on the market as of February 2006. FDA approved Arava for marketing. At approval there was a known risk of liver toxicity (hepatotoxicity); in clinical trials Arava was associated with elevated liver enzymes in a significant number of patients. This information was included in the original label. During routine surveillance of incoming adverse event reports, an ODS safety evaluator had identified 11 cases of hepatic failure and fatal hepatitis associated with the use of Arava. The safety evaluator recommended that Arava’s label mention more extensive liver damage, such as liver-related fatalities. The ODS Division Director who reviewed the consult concurred with the findings and recommendation, but the OND Division of Anti-Inflammatory, Analgesic, and Ophthalmic Drug Products did not. OND did not agree with the findings or recommendation because officials were uncertain about the causal relationship between Arava and liver damage in the case reports and they believed that the current label was adequate for communicating risk about hepatotoxicity. Public Citizen, a national nonprofit public interest organization, filed a petition requesting that FDA immediately remove Arava from the U.S. market.
Public Citizen said that a significantly higher number of serious adverse events, including fatal liver toxicity, had been associated with Arava, compared with another drug used to treat patients with rheumatoid arthritis. In response to the petition, OND requested that ODS review postmarket data for serious hepatic events and liver failure since the approval of Arava. ODS and OND staff met to discuss ODS’s preliminary work in response to the Public Citizen request. ODS’s preliminary review concluded that Arava was associated with a substantially increased risk for acute liver failure and recommended removal from the market. OND disagreed with the review. Because of the disagreements about causality, OND established a panel of senior-level Center for Drug Evaluation and Research (CDER) staff, which included managers from OND and ODS. The panel met twice to review U.S. postmarket reports of 16 cases of acute liver failure and to vote on the probability that Arava caused the liver injury. The majority of panel members voted that Arava was likely to be causally related to liver failure in only 2 of the cases. ODS staff finalized their review on Arava and sent the consult to OND. The report included the recommendation to remove Arava from the market because the authors believed that the risks of Arava greatly exceeded its benefits and because the available risk management strategies (for example, label changes and periodic liver enzyme monitoring) had been shown to be ineffective in minimizing risk for other drugs. The ODS Division Director who reviewed the consult concurred with the findings and recommendation. The ODS Director and the Office of Pharmacoepidemiology and Statistical Science (OPaSS) Director also reviewed the consult. Both disagreed with the findings and recommendation. At the request of OND, an ODS safety evaluator reviewed adverse event reports of liver injury associated with Arava from outside the United States. 
The ODS safety evaluator, who did not work on the prior analysis of the U.S. cases, analyzed 13 cases of liver failure and concluded that there was a possible association between the use of Arava and the development of liver failure. The safety evaluator also concluded that these findings were consistent with the earlier ODS findings in the 16 U.S. liver failure cases. The ODS Division Director who reviewed the consult concurred with the findings. Because of the disagreement on Arava’s safety, OND had hired outside consultants, including two hepatologists, to further review Arava’s safety profile. The hepatology consultants completed their analysis, which included a review of the U.S. reports of acute liver failure, by mid-December 2002. They identified no definite cases of Arava-induced liver failure, but found some cases to be possibly related to Arava. FDA’s Arthritis Advisory Committee met to review Arava’s benefit-to-risk profile and ways to improve risk management, and to discuss whether Arava should be approved for a claim of improvement in physical function. OND presented its own analysis of the postmarket safety data, and did not allow ODS staff to present their analysis. A former OND manager told us that OND believed that the ODS analysis did not have scientific merit. FDA’s Advisory Committee voted unanimously that Arava’s benefits in rheumatoid arthritis outweighed its potential risks and that its risks were no greater than other similar drugs. The committee also voted that Arava should be approved for a claim of improvement in physical function. ODS’s epidemiologists and safety evaluators submitted a letter to the ODS and OPaSS Directors, expressing their concerns with the Arthritis Advisory Committee meeting.
They recommended that ODS staff should present postmarket safety data at advisory committee meetings and that there should be a policy that defines the role of ODS at all advisory committee meetings involving postmarket safety issues. CDER’s Director and Deputy Director sent a memo about ODS’s November 2002 consult to the ODS Director, an ODS Division Director, and the OPaSS Director. The memo criticized the quality of ODS’s consult and stated that ODS had analyzed postmarket data on Arava with a “bias toward concluding that the risk is as large as possible.” The memo also included the general expectations for an ODS consult. For example, it stated that consults should include a summary of the strengths and weaknesses of the analytic approach used to evaluate postmarket data. FDA approved revised labeling of Arava to support the claim of improved physical function. The revised labeling also stated that rare cases of severe liver injury, including cases with fatal outcomes, had been reported in Arava users. OND decided that although the liver toxicity risk was very rare, the accumulated evidence provided support for strengthening the warnings on the label. OND asked the sponsor to submit liver-related adverse events within 15 days rather than annually, on the basis of an ODS request. The sponsor issued a Dear Healthcare Professional letter explaining the labeling changes approved in June 2003. Information was added to Arava’s label about the use of Arava in pediatric populations, including instances of liver-related adverse reactions from pediatric study reports. FDA sent a letter to Public Citizen denying its request to remove Arava from the U.S. market. Baycol was approved for marketing in 1997. Baycol is a member of the class of drugs known as statins that lower cholesterol levels in the body. Baycol was associated with rhabdomyolysis, a severe adverse reaction involving the breakdown of muscle fibers, which can lead to death. 
In this case, the Office of Drug Safety (ODS) and the Office of New Drugs (OND) agreed from the outset (spring 2001) that adverse event reports received for high-dose Baycol were alarming. At the request of OND, ODS conducted an analysis that verified the increased safety risk associated with Baycol, but it did not make specific recommendations for action. Shortly thereafter, OND and ODS met with the sponsor and the Food and Drug Administration (FDA) communicated to the sponsor that it was considering withdrawing the high-dose Baycol from the market. In August 2001 the sponsor voluntarily withdrew all doses of Baycol. FDA approved Baycol for marketing (doses up to 0.3 mg). The original label stated that rhabdomyolysis had been reported with the use of other statins. FDA approved a change in the warnings section of Baycol’s label to indicate that rare cases of rhabdomyolysis had been reported with Baycol and other drugs in the class. FDA also approved adding a new subsection—postmarketing adverse event reports (including rhabdomyolysis)—to the label. FDA approved the 0.4 mg dose of Baycol. FDA approved a change in Baycol’s label, requested by the sponsor, to include a contraindication with gemfibrozil (a member of a class of drugs called fibrates, which also lower cholesterol). The combined use of Baycol and gemfibrozil was contraindicated because of the risk for rhabdomyolysis. The sponsor issued a Dear Healthcare Professional letter shortly thereafter, explaining the labeling changes. At the request of OND’s Division of Endocrine and Metabolic Drug Products, ODS completed a postmarketing safety review of rhabdomyolysis resulting from the combined use of statins and fibrates. OND requested the review because sponsors of other statins (not Baycol) were seeking over-the-counter status for their drugs. 
ODS safety evaluators and an epidemiologist analyzed reports from the Adverse Event Reporting System (AERS) and calculated reporting rates of rhabdomyolysis for Baycol and other statins when taken alone, and in combination with gemfibrozil. The reporting rate for Baycol combined with gemfibrozil was higher than that of other statins combined with gemfibrozil. But the reporting rate for Baycol alone was only slightly higher compared with the other statins. On the basis of their findings and the severity of rhabdomyolysis as a clinical diagnosis, the ODS staff recommended that the statins not be granted over-the-counter designation. The ODS Division Director who reviewed the consult concurred. In agreement with ODS’s position, OND decided to discuss with the sponsor sending stronger messages to healthcare professionals about the adverse reaction. FDA approved the 0.8 mg dose of Baycol. FDA approved the addition of a patient package insert for Baycol. An ODS safety evaluator contacted the OND medical officer responsible for Baycol about reports of fatal rhabdomyolysis associated with Baycol, especially at the 0.8 mg dose, since ODS’s last consult in 2000. The medical officer agreed the data were alarming and asked for more analysis. At about the same time, the sponsor notified OND about a dose-related occurrence of adverse events. FDA approved several revisions to labeling for Baycol, including an emphasis that the correct starting dose of Baycol should be 0.4 mg because of the increased risk of rhabdomyolysis at higher doses. The sponsor issued a Dear Healthcare Professional letter explaining the changes. OND and ODS staff met with the sponsor to discuss concerns over the safety of Baycol. An ODS epidemiologist presented an analysis of fatal cases of rhabdomyolysis associated with the 0.8 mg dose of Baycol compared with Lipitor, another statin, and compared with the 0.4 mg dose of Baycol. 
ODS found that the risk of fatal rhabdomyolysis was higher for Baycol than for Lipitor. ODS also found that the risk appeared to be dose-related, with twice as many of the fatalities among patients taking the highest daily dose—0.8 mg—of Baycol (without concomitant gemfibrozil) compared with the lower dose—0.4 mg. At the meeting, FDA communicated to the sponsor that it was considering several safety actions to address its concerns about Baycol, including the withdrawal of the 0.8 mg dose, and a boxed warning with information about not exceeding a dosage of 0.4 mg daily and a contraindication with gemfibrozil. OND and ODS staff met with the sponsor again to discuss their ongoing concerns over the safety of Baycol, particularly concerns about the risk of rhabdomyolysis at higher doses or in combination with gemfibrozil. The sponsor proposed to (1) voluntarily withdraw the 0.8 mg dose in the United States, (2) add a boxed warning on the label about not exceeding a dose of 0.4 mg daily, and (3) add a boxed warning on the label for contraindicated use of Baycol and gemfibrozil. FDA asked the sponsor for a comprehensive analysis of the 0.4 mg dose. A week later, FDA announced that the sponsor voluntarily withdrew all doses of Baycol from the U.S. market and the sponsor issued a Dear Healthcare Professional letter explaining its decision. Bextra was approved for marketing in 2001. Bextra was part of the class of drugs known as the COX-2 selective nonsteroidal anti-inflammatory drugs (NSAID). Bextra was approved to relieve the symptoms of osteoarthritis and rheumatoid arthritis in adults, and to relieve painful menstrual cycles. Bextra was associated with serious, potentially fatal skin reactions, including Stevens-Johnson Syndrome and toxic epidermal necrolysis. Bextra was also later associated with an increased risk of serious cardiovascular events, similar to the other approved COX-2 drugs.
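Several of ODS's analyses in these case studies rest on comparisons of reporting rates, that is, the number of adverse event reports divided by an estimate of drug exposure such as prescriptions dispensed. As a minimal sketch of such a comparison, using entirely hypothetical counts rather than FDA data:

```python
# Hypothetical illustration of a reporting-rate comparison of the kind
# ODS performed; all figures below are invented, not FDA or sponsor data.

def reporting_rate(case_reports, prescriptions, per=1_000_000):
    """Adverse event reports per `per` prescriptions dispensed."""
    return case_reports / prescriptions * per

# Invented counts: (reports of a serious adverse event, prescriptions dispensed).
drugs = {
    "drug_a": (50, 2_000_000),   # e.g., the drug under review
    "drug_b": (30, 12_000_000),  # e.g., a comparator in the same class
}

rates = {name: reporting_rate(r, rx) for name, (r, rx) in drugs.items()}
ratio = rates["drug_a"] / rates["drug_b"]  # drug_a's rate relative to drug_b's

print(rates)
print(ratio)
```

Because adverse events are underreported and exposure denominators are estimates, such rates support signal detection and relative comparisons rather than true incidence, consistent with the data limitations the report describes.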
In this case, after the Office of Drug Safety (ODS) did an analysis of serious skin reactions associated with Bextra in 2002, Bextra’s label was modified. ODS continued to do a series of analyses of adverse events associated with Bextra from 2003 to 2004, recommending in 2004 that there be a boxed warning, the most serious warning, on the label, but the Office of New Drugs (OND) disagreed. OND changed its position after ODS did a comparison, at OND’s request, of Bextra’s rate of serious skin reactions with the reporting rates of other similar drugs. A boxed warning was added to Bextra’s label in late 2004. In February 2005, two scientific advisory committees that met primarily to discuss the cardiovascular risks associated with the COX-2 NSAIDs voted that Bextra’s overall risk-to-benefit profile supported continued marketing. But a few months later the Food and Drug Administration (FDA) came to a different conclusion and announced that the overall risk-to-benefit profile of Bextra was not favorable, and as a result requested that it be withdrawn from the market, which it was in April 2005. FDA approved Bextra for marketing. The sponsor had identified the occurrence of serious skin reactions, proposed adding information about this risk to the label, and proposed issuing a Dear Healthcare Professional letter. At the request of OND’s Division of Anti-Inflammatory, Analgesic, and Ophthalmic Drug Products, ODS staff reviewed reports of serious skin reactions in the Adverse Event Reporting System (AERS) for Bextra. They compared Bextra’s reporting rate of serious skin reactions with rates for Vioxx and Celebrex (other COX-2 NSAIDs), and the incidence in the general population. The ODS staff agreed that the label should be changed and that a Dear Healthcare Professional letter should be issued because the rates for Bextra were higher than those for Vioxx, Celebrex, and the general population.
The ODS Division Director who reviewed the consult and OND concurred with the findings. FDA announced an updated label describing the risk for serious skin reactions associated with Bextra and that Bextra was contraindicated in patients with histories of allergic reactions to sulfa, a substance that Bextra contains. The sponsor issued a Dear Healthcare Professional letter explaining the updated label. The Division of Pediatrics and Therapeutics had asked ODS for a recommendation on whether Bextra should be studied in pediatric populations for the treatment of acute pain, as proposed by the sponsor. ODS staff recommended that Bextra not be studied in pediatric populations because of its risk of serious skin reactions in the adult population. In addition, ODS staff analyzed data from the National Center for Health Statistics and found that serious skin reactions generally occur more commonly in children than adults. The ODS Acting Division Director who reviewed the consult agreed with the analysis and recommendation, as did the Division of Pediatrics and Therapeutics. However, OND disagreed with the recommendation and supported the study of Bextra in pediatric populations because staff in OND felt this drug could have value in certain pediatric populations, such as patients who cannot tolerate other NSAIDs. Ultimately, Bextra was not studied in children in part because, according to a former OND manager, OND deferred to ODS’s judgment on this recommendation. ODS staff updated their original analysis and concluded that the reporting rates for serious skin reactions associated with Bextra remained markedly elevated above the incidence in the general population and above the rates for Celebrex and Vioxx. ODS staff recommended adding another skin reaction to the warnings in the label and the ODS Acting Division Director who reviewed the consult concurred.
Although OND did not respond to the consult, a former OND manager told us that it would not have been important to add this skin reaction to the label since the label already included the most severe forms of skin reactions. ODS staff updated their assessment of the risks of serious skin reactions associated with Bextra, on the basis of additional AERS reports, and commented on a risk management plan submitted by the sponsor. They recommended to OND several stronger safety actions, including a boxed warning and a medication guide, because the risk remained elevated compared with the incidence in the general population and relative to Celebrex and Vioxx (for example, 13-fold relative to Vioxx). The ODS staff stated that very little was known about the risk factors for serious skin reactions, making them difficult to avoid. In addition, they recommended that OND consider the clinical circumstances in which Bextra had a favorable benefit-to-risk profile relative to other treatment alternatives. Two ODS Division Directors who reviewed the consult concurred, but OND did not agree that Bextra needed stronger safety actions at this time. Bextra’s label was changed to include the statement that fatalities due to serious skin reaction had been reported. At the request of OND, ODS staff compared Bextra’s reporting rate of serious skin reactions with an antibiotic drug’s reporting rate because both Bextra and the antibiotic contained sulfa and both drugs were contraindicated in patients with known allergies to sulfa. ODS staff compared the reporting rates, but indicated in their consult that it was inappropriate to compare an antibiotic that had been marketed for more than 30 years and was used for acute, potentially life-threatening illnesses with a recently marketed pain reliever that was generally used for a chronic, non-life-threatening illness. The ODS Division Director who reviewed the consult concurred.
However, the OND medical officer involved in the case maintained it was an appropriate comparison. ODS staff found a higher reporting rate for serious skin reactions associated with Bextra when compared with the rate for the antibiotic drug. At the request of OND, ODS staff compared Bextra’s rate of serious skin reactions with the reporting rates of Celebrex, Vioxx, and Mobic, anti-inflammatory drugs that are used to treat arthritis. ODS staff concluded that Bextra’s reporting rate continued to be elevated compared with the other drugs, including Mobic, which had no reported cases of serious skin reactions. As a result of this analysis, and the reports of death (at least four deaths have been associated with Bextra), OND asked Bextra’s sponsor for a boxed warning about this risk, an action that OND previously had not supported. The sponsor issued a Dear Healthcare Professional letter summarizing the serious skin reactions associated with Bextra and stated that it had proposed an updated label to FDA to expand previous warnings about the skin reactions. FDA announced that Bextra would carry a boxed warning for serious skin reactions. The sponsor also issued a Dear Healthcare Professional letter explaining these changes. A joint meeting of FDA’s Arthritis Advisory Committee and the Drug Safety and Risk Management Advisory Committee was held. The meeting was focused primarily on the cardiovascular risks of the COX-2 selective NSAIDs, including Bextra. The advisory committees voted (17 yes, 13 no, 2 abstentions) that Bextra’s overall risk-to-benefit profile supported continued marketing. After reviewing information from multiple sources, which included specific votes and recommendations that the advisory committees made in February 2005, FDA announced its conclusion that Bextra’s overall risk-to-benefit profile was not favorable and, as a result, requested that the sponsor voluntarily withdraw Bextra from the market.
FDA concluded that in addition to its cardiovascular risk (similar to the other COX-2 drugs), Bextra already carried a boxed warning for serious skin reactions. While the other COX-2 drugs also had a risk for these serious skin reactions, the reporting rate appeared to be greater for Bextra. In addition, the occurrence of the skin reactions was unpredictable, for example, occurring after both short- and long-term use, making attempts to manage this risk difficult. Also, there were no data supporting a unique therapeutic benefit for Bextra over other available NSAIDs, which could have offset the increased risk of serious skin reactions. The sponsor agreed to withdraw the drug in the United States. Propulsid was approved for marketing in 1993. Propulsid was indicated for use in adults for the symptomatic relief of nighttime heartburn due to gastroesophageal reflux disease. Propulsid was associated with serious cardiac arrhythmias, including reports of death, and most of these adverse events occurred in patients who were taking other medications or suffering from underlying conditions known to increase the risk of cardiac arrhythmia. In this case there was general agreement about the safety concern between the Office of New Drugs (OND) and the Office of Drug Safety (ODS), but differing opinions within the Food and Drug Administration (FDA) over what safety actions should be taken regarding the drug. In 1997 FDA decided to continue to work with the sponsor to make changes to the drug’s label, which included a boxed warning, but some staff felt stronger actions were needed. An FDA-supported study later found that the boxed warning did not significantly deter use of the drug with contraindicated drugs or medical conditions. During this case, a task force within FDA was formed to help evaluate Propulsid’s safety and efficacy, and ODS staff conducted numerous analyses and made multiple recommendations for stronger safety actions, including a market withdrawal. 
The sponsor voluntarily removed the drug from the market in 2000. Propulsid is currently available through a limited-access program to ensure that only certain patients receive the medication. FDA approved Propulsid for marketing in tablet form. The sponsor submitted information to the Center for Drug Evaluation and Research (CDER) about reports of cardiac arrhythmias associated with the use of Propulsid. Subsequently, an ODS safety evaluator identified and reviewed 12 reports of torsade de pointes in FDA’s MedWatch Spontaneous Reporting System (SRS) and identified potential risk factors, including cardiac history and the concomitant use of several other drugs. OND’s Division of Gastrointestinal and Coagulation Drug Products agreed with ODS that this was a safety concern. Propulsid’s label was revised to state that it was contraindicated with certain other drugs which, when taken with Propulsid, can increase the concentration of Propulsid and lead to arrhythmias. A clinical study conducted by the sponsor provided this evidence. The label was also revised to include information about other risk factors, including a history of cardiac disease. The sponsor issued a Dear Healthcare Professional letter with similar information. FDA approved Propulsid for marketing in liquid form. A boxed warning was added to Propulsid’s label, specifying its contraindication with other drugs. The boxed warning also included the statement that some of the reported adverse events had resulted in death. The sponsor issued a Dear Healthcare Professional letter in October with similar information. An ODS epidemiologist identified and analyzed 46 adverse event reports of patients who developed serious cardiac arrhythmias while using Propulsid, from July 1993 through early October 1995, and concluded that many patients who developed arrhythmias had histories of cardiac and renal conditions. 
Most patients who developed arrhythmias were not taking contraindicated medications; as a result, the epidemiologist concluded that Propulsid may itself cause arrhythmias. The epidemiologist recommended that risk factors, such as histories of significant cardiac and renal disease, should be displayed in the label’s warning with the same emphasis as the contraindicated drugs. The ODS Division Director concurred with the consult. At the request of OND, an ODS safety evaluator searched SRS for all adverse event reports associated with Propulsid in children aged 19 years and younger. Although Propulsid was not approved for use in children, it had been prescribed to children (for example, in newborn infants for feeding problems such as reflux). Six children were reported to have had cardiac arrhythmias with the use of Propulsid and several other children had other cardiovascular events. The safety evaluator also reported that the estimated usage of Propulsid in children was increasing steadily. FDA rejected the sponsor’s application for a pediatric indication for Propulsid. OND established a task force within FDA to evaluate the safety and efficacy of Propulsid. The task force included members from OND and ODS. At its initial meeting, the task force decided to gather information from several sources, including the reviews done by ODS, in order to accurately assess the safety of Propulsid. As agreed in the June 1997 Propulsid task force meeting, an ODS epidemiologist reviewed adverse event reports of Propulsid users with serious arrhythmias. The epidemiologist found that in about half of the cases, patients had taken contraindicated drugs with Propulsid and that a high proportion of the remaining cases had medical problems that may have predisposed them to arrhythmias. 
The epidemiologist recommended that the risk factors, such as predisposing medical problems, should be displayed in the label’s warning with the same emphasis as the contraindicated drugs and that the recommended dosage should not be exceeded. The ODS Division Director who reviewed the consult concurred. The task force on Propulsid met for the second time. The group discussed information that was gathered on the safety of Propulsid. An ODS epidemiologist summarized her August 1997 consult, including her recommendation that predisposing medical problems should be displayed in the label’s warnings with the same emphasis as the contraindicated drugs and that the recommended dosage should not be exceeded. She also noted that Propulsid was primarily being prescribed for off-label use. Other relevant studies were discussed, including a clinical trial in which 3 of 32 healthy elderly volunteers had abnormal electrocardiogram results after exposure to Propulsid alone. An ODS safety evaluator reported that there were additional cases of serious cardiovascular adverse events among children who were prescribed Propulsid. FDA approved a rapidly disintegrating tablet form of Propulsid for marketing. The task force on Propulsid met and decided to seek further input from a CDER-wide group about pursuing the following regulatory actions: adding the risk for cardiac arrhythmias with the use of Propulsid alone (for example, without taking contraindicated drugs) to the label; holding an advisory committee meeting; and withdrawing approval of all Propulsid formulations. OND’s Division of Gastrointestinal and Coagulation Drug Products consulted another OND division that was responsible for the drug Seldane to find out what information would be required to withdraw the approval of a drug, since FDA had initiated proceedings to withdraw its approval of Seldane in 1996 for a similar cardiovascular side effect.
That division recommended that data be gathered to support the assertion that Propulsid was still being coprescribed with contraindicated drugs despite the boxed warning and Dear Healthcare Professional letters. At the request of OND, an ODS epidemiologist evaluated the sponsor’s epidemiological study on risk of serious cardiac arrhythmias among Propulsid users. In this study the researchers concluded that serious cardiac arrhythmias were not associated with Propulsid. The ODS epidemiologist outlined several major limitations with the study, including the potential for the misclassification of arrhythmia in patients not diagnosed by an electrocardiogram. A meeting was held in CDER to discuss FDA’s regulatory options for Propulsid. This meeting included some senior-level managers in CDER and an FDA attorney. The OND medical officer responsible for Propulsid presented his concerns, including his conclusion that Propulsid should be removed from the market. Proceeding with a withdrawal from the market was discussed at the meeting. FDA continued to work with the sponsor to change Propulsid’s label. Some staff believed that stronger safety actions were needed. An ODS epidemiologist summarized reports of 186 patients who developed serious cardiac disorders and arrhythmias (including deaths) with and without contraindicated drugs from July 1993 through early May 1998. The ODS epidemiologist recommended to OND that the boxed warning should state that serious arrhythmias had occurred in Propulsid users who had not been taking contraindicated drugs, and that an accompanying Dear Healthcare Professional letter should be issued. The ODS epidemiologist also recommended that Propulsid’s labeling should state that the safety and effectiveness of Propulsid had not been demonstrated in pediatric patients for any indication. 
FDA announced revisions to the boxed warning that strengthened its warnings and precautions, and the sponsor issued a Dear Healthcare Professional letter explaining the revisions. The changes included the statement that Propulsid was contraindicated in patients with medical problems known to predispose them to arrhythmias, such as heart disease. The revision also stated that other therapies for heartburn should be used before Propulsid, and that the safety and effectiveness in pediatric patients had not been established. Also, the revised boxed warning included the statement that cardiac adverse events, including sudden death, had occurred among Propulsid users who were not taking contraindicated drugs. An ODS epidemiologist summarized cardiac adverse event reports from the beginning of Propulsid’s marketing (July 1993) through May 1998. There were 187 reports, including 38 deaths. FDA implemented a medication guide and unit-dose packaging for Propulsid. An ODS epidemiologist worked on a study to evaluate labeling compliance among Propulsid users, which was carried out through ODS’s cooperative agreement program. The study ultimately found that the boxed warning did not significantly deter the use of Propulsid with contraindicated drugs or medical conditions. The sponsor issued a Dear Healthcare Professional letter with information about revisions to the boxed warning. The revisions included two new contraindications and a new drug interaction. Similar revisions were incorporated into the medication guide. An ODS epidemiologist analyzed and summarized the reports of Propulsid users who developed cardiovascular problems, including deaths, in four separate consults. The reports included adult and pediatric patients who took Propulsid with and without contraindicated drugs and medical conditions. The ODS epidemiologist recommended to OND that other contraindications should be added to the label, including one for patients with structural heart defects. 
The ODS epidemiologist recommended that OND consider several safety actions, including asking the sponsor to conduct a clinical or epidemiological study on the association between Propulsid and cardiac adverse events in its users, and removing Propulsid from the market. ODS and OND staff and the CDER Director met to discuss further options for regulatory actions. It was decided that FDA would hold a public advisory committee meeting to discuss ways to reduce the occurrence of adverse events with Propulsid. The preliminary results of the cooperative agreement study were to be presented at the advisory committee meeting. FDA announced further revisions to the boxed warning and that a public advisory committee meeting was scheduled for April. The label revision included new recommendations for performing diagnostic tests and a new contraindication for patients with electrolyte disorders. Similar revisions were incorporated into the medication guide. The sponsor issued a Dear Healthcare Professional letter explaining these revisions. FDA announced that the sponsor would withdraw Propulsid from the U.S. market as of July 14, 2000. FDA also announced that its scheduled public advisory committee meeting was cancelled. The sponsor announced that it would make Propulsid available to certain patients through an investigational limited-access program, approved by FDA. An ODS epidemiologist summarized reports of adverse events, including cardiovascular events, among patients enrolled in the limited-access program. The epidemiologist recommended that the availability of Propulsid should not be expanded from the limited-access program to a restricted distribution program. The ODS Division Director who reviewed the consult agreed. The drug’s availability was not expanded. In addition to the contact named above, Martin T. Gahart, Assistant Director; Anne Dievler; Pamela Dooley; Cathleen Hamann; and Julian Klazkin made key contributions to this report.
In 2004, several high-profile drug safety cases raised concerns about the Food and Drug Administration's (FDA) ability to manage postmarket drug safety issues. In some cases there have been disagreements within FDA about how to address safety issues. In this report GAO (1) describes FDA's organizational structure and process for postmarket drug safety decision making, (2) assesses the effectiveness of FDA's postmarket drug safety decision-making process, and (3) assesses the steps FDA is taking to improve postmarket drug safety decision making. GAO conducted an organizational review and case studies of four drugs with safety issues: Arava, Baycol, Bextra, and Propulsid. Two organizationally distinct FDA offices, the Office of New Drugs (OND) and the Office of Drug Safety (ODS), are involved in postmarket drug safety activities. OND, which holds responsibility for approving drugs, is involved in safety activities throughout the life cycle of a drug, and it has the decision-making responsibility to take regulatory actions concerning the postmarket safety of drugs. OND works closely with ODS to help it make postmarket decisions. ODS, with a primary focus on postmarket safety, serves primarily as a consultant to OND and does not have independent decision-making responsibility. ODS has been reorganized several times over the years. There has been high turnover of ODS directors in the past 10 years, with eight different directors of the office and its predecessors. In the four drug case studies GAO examined, GAO observed that the postmarket safety decision-making process was complex and iterative. FDA lacks clear and effective processes for making decisions about, and providing management oversight of, postmarket safety issues. The process has been limited by a lack of clarity about how decisions are made and about organizational roles, insufficient oversight by management, and data constraints.
GAO observed that there is a lack of criteria for determining what safety actions to take and when to take them. Certain parts of ODS's role in the process are unclear, including ODS's participation in FDA's scientific advisory committee meetings organized by OND. Insufficient communication between ODS and OND has been an ongoing concern and has hindered the decision-making process. ODS does not track information about ongoing postmarket safety issues, including the recommendations that ODS staff make for safety actions. FDA faces data constraints in making postmarket safety decisions. There are weaknesses in the different types of data available to FDA, and FDA lacks authority to require certain studies and has resource limitations for obtaining data. Some of FDA's initiatives, such as the establishment of a Drug Safety Oversight Board, a draft policy on major postmarket decision making, and the identification of new data sources, may improve the postmarket safety decision-making process, but will not address all gaps. FDA's newly created Drug Safety Oversight Board may help provide oversight of important, high-level safety decisions, but it does not address the lack of systematic tracking of ongoing safety issues. Other initiatives, such as FDA's draft policy on major postmarket decisions and regular meetings between OND divisions and ODS, may help improve the clarity and effectiveness of the process, but they are not fully implemented. FDA has not clarified ODS's role in certain scientific advisory committee meetings. FDA's dispute resolution processes for disagreements about postmarket safety decisions have not been used. FDA is taking steps to identify additional data sources, but data constraints remain.
TBI is the injury most likely to result in death or permanent disability. Recent Centers for Disease Control and Prevention (CDC) data indicate that each year approximately 50,000 people die, 210,000 are hospitalized and survive, and 70,000 to 90,000 individuals are disabled due to a TBI. CDC cautions that these numbers underestimate the number of individuals sustaining a TBI because they exclude individuals seen in emergency departments or other outpatient settings but not admitted to the hospital. Other researchers estimate that for each person who dies of TBI, 5 people are hospitalized and 27 are examined in emergency rooms without overnight hospitalization. Almost one-half of all TBIs result from transportation-related incidents. Most of the remainder result from falls, assaults, sports and recreation, and firearm-related injuries. Younger adults generally are more likely to be injured than older adults. Adult males sustain a TBI more than twice as frequently as adult females, and blacks are more likely than whites or Hispanics to sustain a TBI and to die from their injury. People at the lowest income levels are at the greatest risk of sustaining a TBI. Adults with TBI frequently have difficulty with executive skills, such as managing time, money, and transportation. They also have difficulty with short-term memory, concentration, judgment, and organization, which are necessary to function independently in the community. Adults with TBI often have normal intelligence but are unable to transfer learning from one environment to another. Both the private and public sectors finance acute care services to adults with TBI. When the individual progresses past the acute phase, private health insurance typically limits coverage of rehabilitation therapies and does not cover long-term care or community-based support services. As families exhaust their financial resources, the public sector pays for a greater share of the services received.
Federal funding is available for medical and social support services under Medicaid, vocational rehabilitation services provided through state VR agencies, and for independent living services. (See app. III for a summary of the broad categories of services provided through these programs by at least one of the states we contacted.) Medicaid provides health care for about 37 million disabled, blind, or elderly people and low-income families. At the state level, Medicaid operates as a health insurance program under a state plan covering both required and state-selected optional health care services. Generally, state plan benefits must be provided in the same amount, duration, and scope to all Medicaid beneficiaries. With the exception of nursing facility care, most services provided under the standard Medicaid program are medically oriented. Standard Medicaid programs generally do not provide many of the long-term community-based support services needed by many adults with TBI. To provide long-term home and community-based services for broad groups of Medicaid beneficiaries—such as the elderly disabled or physically disabled, including adults with TBI—states generally have used 1915(c) waivers. There are currently over 200 home and community-based waiver programs serving more than 250,000 individuals nationwide. Under these waivers, states, with HCFA approval, can waive one or more of the requirements for statewideness, income and resource standards, comparability of services, and equal provision of services, as long as the average per capita cost of providing these services will not exceed the cost of institutional care. States select the services, the service definition, the target population, and the number of individuals included under each HCFA-approved home and community-based waiver. Examples of services that can be provided under these waivers are personal care, homemaker, and nonmedical transportation services. 
Adults with TBI might benefit from some home and community-based services covered under broad-based waivers. However, these individuals often are unable to qualify for such services because the preadmission screening process may be oriented to physical rather than cognitive disabilities. For example, Colorado Medicaid reports that most adults with TBI are unlikely to qualify for the broad-based waiver for elderly and physically disabled individuals because the assessment weighs physical factors more heavily than cognitive factors. Pennsylvania has a home and community-based waiver for personal attendant services, but beneficiaries with cognitive impairment are excluded. In addition, home and community-based waivers targeted to individuals who are aged or physically disabled generally do not cover services needed by cognitively impaired individuals, such as cognitive rehabilitation. States generally use Medicaid home and community-based waivers to target Medicaid services to small groups of adults with TBI. Missouri, however, narrowly targets services from its standard Medicaid program to persons with TBI. Home and community-based waivers can be used by states to target select services to smaller, more specific groups of individuals, such as adults with TBI. HCFA reports that, as of June 1997, a total of 15 states have applied for and received TBI waivers. These programs are small, covering an estimated 2,478 individuals and $118 million in expenditures in 1996. Four of the states we contacted—Colorado, Minnesota, New Hampshire, and New Jersey—have TBI home and community-based waivers to compensate for the difficulty some adults with TBI experience in accessing services. In addition to services covered, the four waivers vary in terms of the target population, the number of individuals served, expenditures per individual, and the services covered. (See table 1.) 
The TBI waivers for three of the four states—Minnesota, New Hampshire, and New Jersey—target people in nursing facilities and similar institutions or at risk of institutional placement. Many of these individuals will likely require home and community-based services like those covered by the waiver for the remainder of their lives. In contrast, Colorado’s waiver targets hospitalized adults with TBI, providing post-hospital waiver services so that they can be discharged more quickly. Colorado estimates that individuals will receive services under the TBI waiver for 2 years; after that time, they will receive, if necessary, services under the home and community-based waiver for the elderly, blind, and disabled, which covers a less intense level of services. Minnesota’s TBI waiver covers two levels of home and community-based care: (1) for individuals at risk of nursing home placement and (2) for individuals at risk of placement in neurobehavioral units in hospitals. Some waiver services are covered by all four states: case management, personal care, respite care, environmental modifications, transportation, behavior modification programs, and day treatment or day care programs. Some type of alternative residential setting, cognitive rehabilitation, assistive technology, independent living training, specialized medical equipment and supplies, and mental health services are covered by three of the four states. Many of the services covered under the TBI waivers are similar to services required by other people with physical disabilities or chronic illnesses, such as personal care services or extended physical, occupational, and speech therapies. Some, however, are particularly useful to adults with TBI, such as cognitive rehabilitation or behavioral programming. (See app. IV for a comprehensive list of services covered under Colorado’s, Minnesota’s, New Hampshire’s, and New Jersey’s TBI waivers.)
Fewer than 500 individuals are covered by these waivers in the four states, with large variation among the states in the number covered, ranging from 36 served in Colorado to 231 served in Minnesota. The actual cost per person also varies widely, ranging from less than $10,000 per person in Colorado to almost $80,000 in New Hampshire. The differences in actual cost per person reflect differences in the target population. For example, according to Colorado Medicaid, the lower cost per person reflects the fact that the waiver targets individuals who, although they receive costly treatment following discharge from the hospital, receive these services for only a short period of time. In contrast, New Hampshire reports that its higher cost per person reflects its target population, who are more disabled than TBI waiver recipients in other states and whom other states generally do not place in the community. In its standard Medicaid program, Missouri includes a package of services targeted specifically to adults with TBI, including neuropsychological, psychological, vocational, and recreational services, as well as physical, occupational, and speech therapies. Adults with TBI receive this service package for 6 to 12 months. In state fiscal year 1996, Missouri Medicaid provided its TBI service package to an average of 19 persons each month at a cost totaling almost $614,000. Missouri Medicaid officials chose to narrowly target these services to adults with TBI under the standard Medicaid program because this was administratively simpler than a home and community-based waiver. VR and ILS—Department of Education programs administered by the states—provide services to disabled adults, including adults with TBI, to support their reentry into the community. VR programs provide vocational rehabilitation services to help disabled individuals prepare for and obtain employment.
ILS provides training, peer support, advocacy, and referral through a decentralized system of federally funded ILS programs to help people with disabilities live independently. Both programs are financed by a combination of federal and state funds—totaling roughly $2.5 billion in 1996—and receive referrals from a variety of sources. VR provides vocational rehabilitation services to individuals with disabilities, including adults with TBI, to prepare them for and support them during their transition to employment. To be eligible, individuals must have a documentable disability that impedes employment but does not preclude the ability to work and must demonstrate a need for vocational rehabilitation services. Eligible individuals and VR counselors develop an individualized plan that includes an employment objective and the services needed to reach that objective. These services can include rehabilitative therapies and supported employment services, which provide post-employment support—such as job coaching or on-the-job training—to individuals integrated into a work setting to help facilitate their transition to employment. VR generally can provide supported employment services for a maximum of 18 months; after this time, states must either find additional funds to pay for continuing services or discontinue the services. Adults with TBI, however, may still need these services to continue working. All federally funded ILS centers are required to provide four core services—independent living skills training, peer support, advocacy, and referral—to individuals with disabilities, including adults with TBI, on a continuing basis. Whether a center purchases additional services for consumers is determined locally. As a result, there is likely to be variation in whether ILS offers other services, such as personal assistant services or home modification, from state to state and within a state. ILS emphasizes peer support and consumer-directed action.
The adult with TBI is provided information and peer support to determine his or her specific needs, as well as referrals and advocacy from an ILS specialist. Trainers—generally individuals with similar disabilities—help the consumer identify barriers and ways to get around them. In some of the states we contacted, TBI experts expressed concern about the ILS model of consumer-directed needs assessment. Adults with TBI often do not recognize their own limitations and lack the executive skills to coordinate services. Medicaid, VR, and ILS expenditures for adults with TBI are small relative to total program expenditures. Total Medicaid expenditures for adults with TBI are unknown, but the expenditures for TBI home and community-based waiver services alone in three of the four states with these waivers are greater than the combination of VR expenditures for adults with TBI and all ILS expenditures. States with small Medicaid programs targeted specifically to adults with TBI are able to identify the costs of these programs. VR agencies are able to identify the costs of services to adults with TBI. However, the costs of serving adults with TBI through ILS or through the entire Medicaid program cannot be determined. Medicaid waiver expenditures for 1996 vary widely, from $300,000 in Colorado to $6.6 million in New Jersey; VR and ILS expenditures vary less. (See table 2 for federal and state expenditures in these programs.) Five states that we contacted—Arizona, Florida, Massachusetts, Missouri, and Pennsylvania—have developed programs funded exclusively by the state to provide services to a generally small number of adults with TBI. These programs—which obtain services from other programs and pay only for services that cannot be financed otherwise—are more flexible than Medicaid waiver programs. For example, Massachusetts’ program has a sliding fee scale for services, which would not be permitted under Medicaid.
Florida and Missouri have no income requirement for case management services, although Missouri restricts other services to those whose income is at or below 185 percent of poverty. While case management is a key component of each of these programs, the funds available to purchase services vary widely, as does the number of people served. (See table 3.) The states’ administration of their programs varies somewhat with regard to program referral, restrictions, and oversight. Four programs receive referrals from individuals, families, providers, advocates, and other state agencies. Florida’s program, however, receives notification from a central registry—to which admitting hospitals are mandated to report—of all individuals with a TBI who are hospitalized overnight. Individuals reported to the central registry are assigned to case managers, who provide the individual and his or her family with information on all available resources. The Florida program tries to refer as many individuals as possible to the vocational rehabilitation program with the objective of returning them to work. Four of the five states—Arizona, Florida, Massachusetts, and Missouri—do not place limits on the length of time services can be provided. In contrast, Pennsylvania places time and cost limits on services provided. In Pennsylvania, adults with TBI are limited to 36 months for case management services and 2 years for rehabilitation services. To date, however, Pennsylvania has enforced only its cost limit, which is $125,000 per year per person for rehabilitation. Four of the five states—Pennsylvania is the exception—have legislatively mandated that their state-funded programs have an advisory council to provide guidance and oversight. These advisory councils are generally composed of representatives of persons with TBI, state agencies concerned with TBI, and experts in the field.
Some adults with TBI encounter substantial barriers in accessing services that will support their reintegration into the community. Although the states we contacted have developed strategies to expand such services, these programs generally serve a small number of individuals relative to the number of adults with TBI. For example, in 1996, Colorado provided services under its TBI Medicaid waiver to 36 adults and Missouri served 223 in its state-funded program; GAO analysis shows that 4,006 and 5,578 individuals sustain a TBI each year in Colorado and Missouri, respectively. Florida is the exception: in 1996, it served more than 3,100 individuals with TBI, while the state estimates that 1,829 residents sustain a TBI each year. We asked program representatives and experts to describe individuals who have the greatest difficulty in accessing services from these and other programs and the consequences of being unable to access services. These experts most frequently identified three groups: individuals who are cognitively impaired but lack physical impairments, individuals without personal advocates, and individuals with problematic behaviors. They reported that many of these people ultimately end up homeless or in nursing homes, institutions for mental illness, prisons, and other institutions. Individuals who are cognitively impaired but lack physical disabilities are less likely than those with more visible impairments to obtain services. Experts repeatedly told us that adults with TBI who walk, talk, and look “normal” are refused services, even though they cannot maintain themselves in the community without help. Cognitively impaired people frequently lack executive skills—such as managing time, money, and other aspects of daily living—and have difficulty functioning independently. This difficulty will most likely last throughout their lifetime.
These individuals frequently do not qualify for Medicaid waiver services under programs for the physically disabled because they have little to no difficulty in bathing, dressing, eating, or other activities of daily living used to assess disability. The services needed by these adults with TBI—which may include someone to remind them to pay the bills or provide assistance in figuring out their bank balance—are relatively low-cost but crucial to their ability to live in the community. The lack of executive skills also complicates the ability of adults with TBI to negotiate the various service delivery systems. People without someone to act as their personal advocate have difficulty obtaining services from multiple programs. We repeatedly heard that an adult with TBI without an effective and knowledgeable advocate would probably not receive services. People without social support systems or whose social support systems fail also fall into this category. Adults with TBI often return to their parents’ home following hospital discharge. Even those who were married at the time of injury may be cared for by their parents, since many married adults with TBI divorce after the injury. TBI advocates report that parents who have been the primary caregivers frequently are unable to continue to provide care due to exhaustion, aging, or death. As a result, individuals who have been cared for by their parents for years suddenly appear in the service system, seeking services that will allow them to remain in the community. People with problematic behaviors—such as aggression, destructiveness, or participation in illegal activities—generally do not have the skills required to return to the community and usually require expensive treatment in residential environments with a great deal of structure. Without treatment, these individuals are the most likely to become homeless, be committed to a mental institution, or be sentenced to prison.
A number of providers, such as day treatment or outpatient rehabilitation programs and nursing homes, often will not accept people with behavioral problems, either because of potential disruption to their programs or because they claim that Medicaid reimbursement rates do not compensate them for the resources needed to care for these individuals. For example, we were told about one person who had been discharged from 14 nursing homes in 6 months due to behavioral problems. Some of the states we contacted do not have programs for adults with TBI who have behavioral problems. Minnesota funds treatment for limited numbers of individuals with the most severe behaviors, but funding at a lower level may be inadequate to provide services for those less severely affected. With faster emergency response and advances in technology and treatment, the number of persons surviving a TBI has increased. A substantial number of adults with TBI are cognitively impaired and some have physical disabilities; however, their longevity is usually not affected. As a result, individuals with permanent disability require long-term supportive services to remain in the community. The nine states we contacted deliver long-term community-based services to adults with TBI through Medicaid or state-funded programs. As shown by our analysis of Medicaid programs targeted specifically to adults with TBI and state-financed programs, few adults with TBI are being served by these programs. Based on state reports of the number of individuals who sustain a TBI in a year, the gap between the number receiving long-term services and the estimated number of disabled adults with TBI remains wide. We provided a draft of this report to the Administrator of HCFA. 
We also provided draft reports to officials at the Department of Education, CDC, the National Institutes of Health, the Health Resources and Services Administration, and the Brain Injury Association; Medicaid officials; vocational rehabilitation officials in each of five states with Medicaid programs specifically targeting adults with TBI; and officials of the five state-funded programs for persons with TBI. A number of these officials provided technical or clarifying comments, which we incorporated as appropriate. In addition, CDC pointed out the need for data and referral systems by which persons with TBI-related disability are identified and referred for services. CDC suggested that components of such systems might include, for example, population-based registries of persons sustaining acute TBI (developed in conjunction with state TBI surveillance systems) and guidelines for acute care providers and hospitals pertaining to follow-up service referral for patients with TBI. In many or most jurisdictions, such systems do not exist, with the result that many persons with TBI-related disabilities—especially those who have sustained less severe injuries—may be unaware of the availability of services. CDC’s comments reinforce our conclusion that the need for services among people with TBI appears to greatly exceed the services delivered. We will send copies of this report to the Secretaries of the Departments of Health and Human Services and Education, the Administrator of HCFA, state officials in the nine states we interviewed, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-7114 or Phyllis Thorburn at (202) 512-7012 if you or your staff have any questions. Major contributors to this report are Sally Kaplan and Mary Ann Curran. The Congress passed the Traumatic Brain Injury Act of 1996 (P.L.
104-166) to expand efforts to identify methods of preventing TBI, to expand biomedical research efforts to prevent or minimize the severity of dysfunction resulting from TBI, and to improve the delivery and quality of services through state demonstration projects. The legislation authorizes CDC to carry out projects to reduce the incidence of TBI, the National Institutes of Health (NIH) to grant awards for basic and applied TBI research, and the Health Resources and Services Administration (HRSA) to carry out demonstration projects to improve access to services for the assessment and treatment of TBI. A total of $24.5 million for fiscal years 1997 through 1999 was authorized for the act. In response to the authorizations included in the Traumatic Brain Injury Act, CDC issued grants to 11 states in July 1997 to develop new TBI surveillance projects and planned to submit reports to the Congress on surveillance projects in spring 1998 and in 1999. A grant to develop an additional state TBI registry is scheduled to be awarded in summer 1998. NIH plans to conduct a TBI consensus development conference in October 1998. The consensus panel will address the epidemiology, consequences, treatment, and outcomes of TBI and make recommendations regarding rehabilitation practices and research needs. HRSA awarded demonstration project grants to 21 states, which became effective October 1997. We focused our study on post-acute services provided to individuals who sustain a TBI as adults. We defined post-acute services as those provided after hospital discharge. In most of the states we contacted, individuals injured at age 22 or older are treated differently from individuals injured before age 22, who receive services from programs for persons with a developmental disability. Based on a review of the literature and interviews with individuals knowledgeable about TBI, we assembled a list of 35 states that have developed programs targeted to persons with TBI.
From that list, we selected nine states: four with Medicaid TBI home and community-based waivers and five with state programs providing direct services to adults with TBI. We selected the TBI waiver states with the largest (New Hampshire) and second smallest (Colorado) estimated per capita cost for 1996. Figure III.1 provides an overview of the broad categories of services provided to adults with TBI by standard Medicaid programs, by broad-based and TBI Medicaid home and community-based waivers, and by VR and ILS programs. Although there is substantial overlap among the general categories of service, there are differences in the groups to whom services are targeted, the requirements to obtain them, and the length of time services are provided. Figure IV.1 shows the specific services offered to adults with TBI by four of the states that we contacted—Colorado, Minnesota, New Hampshire, and New Jersey—under their TBI home and community-based Medicaid waivers. Case management is a Medicaid administrative function in Colorado. PT, OT, ST are physical, occupational, and speech therapies. Other services include substance abuse counseling, home health care, family support services, crisis response, community support, supported employment, or night supervision.
Pursuant to a congressional request, GAO reviewed federal and state efforts to provide services to individuals with traumatic brain injury (TBI), focusing on: (1) the primary federal and state programs that provide adults with TBI services to help them function more independently; (2) strategies that states have developed to enhance access to TBI-related services; and (3) circumstances believed to be most frequently associated with difficulty in obtaining services. GAO noted that: (1) adults with TBI receive services to facilitate their reintegration into the community primarily from three federal-state programs: Medicaid, vocational rehabilitation (VR), and Independent Living Services (ILS); (2) Medicaid provides medical, rehabilitation, and social support services to poor individuals with disabilities; (3) VR agencies provide services to individuals with disabilities to prepare them for and support them during the transition to employment; (4) ILS programs provide skills training to individuals with disabilities to facilitate their independence in the community; (5) all three programs are financed by a combination of federal and state funds and serve a range of individuals with disabilities, only a small number of whom have a TBI; (6) because most of the services covered by standard Medicaid programs are medical, all states have expanded Medicaid services through home and community-based waivers, which permit them to offer additional services--such as homemaker services, adult day care, and nonmedical transportation--to persons at risk of institutionalization; (7) these Medicaid waivers generally target long-term community-based services to a broad population, such as the physically disabled
or disabled elderly; (8) recognizing the difficulties adults with TBI experience in accessing services, each of the states GAO contacted has developed various strategies to target services to adults with TBI; (9) five target Medicaid services specifically to limited numbers of adults with TBI; (10) despite these strategies, service gaps are likely--the number of adults with TBI who are provided services remains small relative to estimates of the total number; (11) according to program representatives and experts, those most likely to have difficulty accessing services are: (a) individuals who are cognitively impaired but lack physical disabilities; (b) individuals without an effective advocate to negotiate the social service system or without a social support system; and (c) individuals with problematic or unmanageable behaviors, such as aggression, destructiveness, or participation in illegal activities; and (12) without treatment, individuals with problematic or unmanageable behaviors are the most likely to become homeless, institutionalized in a mental facility, or imprisoned.
Section 861 of the NDAA for FY2008 directed the Secretary of Defense, the Secretary of State, and the USAID Administrator to sign a memorandum of understanding (MOU) related to contracting in Iraq and Afghanistan. The law specified a number of issues to be covered in the MOU, including the identification of common databases to serve as repositories of information on contracts and contractor personnel. The NDAA for FY2008 required the databases to track, at a minimum: a brief description of each contract, its total value, and whether it was awarded competitively; and, for contractor personnel working under contracts in Iraq or Afghanistan, the total number employed, the total number performing security functions, and the total number who have been killed or wounded. In July 2008, DOD, State, and USAID signed an MOU in which they agreed the Synchronized Predeployment and Operational Tracker (SPOT) would be the system of record for the statutorily required contract and contractor personnel information. The MOU specified SPOT would include information on DOD, State, and USAID contracts with more than 14 days of performance in Iraq or Afghanistan or valued at more than the simplified acquisition threshold, which the MOU stated was $100,000, as well as information on the personnel working under those contracts. While DOD is responsible for all maintenance and upgrades to the SPOT database, each agency agreed in the MOU to ensure that data elements related to contractor personnel, such as the number of personnel employed on each contract in Iraq or Afghanistan, are accurately entered into SPOT by its contractors. SPOT is designed to track contractor personnel by name and record information such as the contracts they are working under, deployment dates, and next of kin.
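The MOU’s coverage rule (more than 14 days of performance in Iraq or Afghanistan, or a value above the $100,000 simplified acquisition threshold) can be sketched as a simple predicate. The function name and parameters below are illustrative assumptions for clarity, not actual SPOT or MOU terminology:

```python
# Minimal sketch of the MOU's criteria for which contracts must be
# tracked in SPOT; names are hypothetical, not actual SPOT fields.
SIMPLIFIED_ACQUISITION_THRESHOLD = 100_000  # dollars, per the 2008 MOU
MIN_PERFORMANCE_DAYS = 14

def requires_spot_entry(days_of_performance, contract_value,
                        in_iraq_or_afghanistan=True):
    """Return True if the MOU requires tracking the contract in SPOT."""
    if not in_iraq_or_afghanistan:
        return False
    return (days_of_performance > MIN_PERFORMANCE_DAYS
            or contract_value > SIMPLIFIED_ACQUISITION_THRESHOLD)

print(requires_spot_entry(10, 50_000))   # False: below both thresholds
print(requires_spot_entry(10, 250_000))  # True: exceeds the value threshold
```

Note that the criteria are disjunctive: only a contract that is both short and low-value falls outside the MOU’s tracking requirement.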
Contract data elements, such as value and extent of competition, are to be imported into SPOT from the Federal Procurement Data System – Next Generation (FPDS-NG), the federal government’s system for tracking information on contracting actions. The need for information on contracts and contractor personnel to inform decisions and oversee contractors is critical given DOD, State, and USAID’s extensive reliance on contractors to support and carry out their missions in Iraq and Afghanistan. We have reported extensively on the management and oversight challenges of using contractors to support contingency operations and the need for decision makers to have accurate, complete, and timely information as a starting point to address those challenges. Although much of our prior work has focused on DOD, the lessons learned can be applied to other agencies relying on contractors to help carry out their missions. The agencies’ lack of complete and accurate information on contractors supporting contingency operations may inhibit planning, increase costs, and introduce unnecessary risk, as illustrated in the following examples: Limited visibility over contractors obscures how extensively agencies rely on contractors to support operations and help carry out missions. In our 2006 review of DOD contractors supporting deployed forces, we reported that a battalion commander in Iraq was unable to determine the number of contractor-provided interpreters available to support his unit. Such a lack of visibility can create challenges for planning and carrying out missions. Further, knowledge of who is on their installation, including contractor personnel, helps commanders make informed decisions regarding force protection and account for all individuals in the event of hostile action. Without incorporating information on contractors into planning efforts, agencies risk making uninformed programmatic decisions.
As we noted in our 2004 and 2005 reviews of Afghanistan reconstruction efforts, when developing its interim development assistance strategy, USAID did not incorporate information on the contractor resources required to implement the strategy. We determined this impaired USAID’s ability to make informed decisions on resource allocations for the strategy. A lack of accurate financial information on contracts impedes agencies’ ability to create realistic budgets. As we reported in July 2005, despite the significant role of private security providers in enabling Iraqi reconstruction efforts, neither State, DOD, nor USAID had complete data on the costs associated with using private security providers. Agency officials acknowledged such data could help them identify security cost trends and their impact on the reconstruction projects, as increased security costs resulted in the reduction or cancellation of some projects. Lack of insight into the contract services being performed increases the risk of paying for duplicative services. In the Balkans, where billions of dollars were spent for contractor support, we found in 2002 that DOD did not have an overview of all contracts awarded in support of operations. Until an overview of all contractor activity was obtained, DOD did not know what the contractors had been contracted to do and whether there was duplication of effort among the contracts that had been awarded. Costs can increase due to a lack of visibility over where contractors are deployed and what government support they are entitled to. In our December 2006 review of DOD’s use of contractors in Iraq, an Army official estimated that about $43 million was lost each year to free meals provided to contractor employees at deployed locations who also received a per diem food allowance.
Many recommendations from our prior work on contractors supporting contingency operations focused on increasing agencies’ ability to track contracts and contractor personnel so that decision makers—whether out in the field or at headquarters—can have a clearer understanding of the extent to which they rely on contractors, improve planning, and better account for costs. While actions have been taken to address our recommendations, DOD, State, and USAID officials have told us that their ability to access information on contracts and contractor personnel to inform decisions still needs improvement. Specifically, information on contracts and the personnel working on them in Iraq and Afghanistan may reside solely with the contractors, be stored in a variety of data systems, or exist only in paper form in scattered geographical regions. These officials indicated that the use of SPOT has the potential to bring some of this dispersed information together so that it can be used to better manage and oversee contractors. DOD, State, and USAID have made progress in implementing SPOT. However, as we reported last month, the agencies’ ongoing implementation of SPOT currently falls short of providing them with information that would help facilitate oversight and inform decision making, as well as fulfill statutory requirements. Specifically, we found that the agencies have varying criteria for deciding which contractor personnel are entered into the system and, as a result, not all required contractor personnel have been entered. While the agencies have used other approaches to obtain personnel information, such as periodic contractor surveys, these approaches have provided incomplete data that should not be relied on to identify trends or draw conclusions.
In addition, SPOT, which was intended to serve as a central repository of information on contracts performed in Iraq or Afghanistan, currently lacks the capability to track required contract information as agreed to in the MOU. DOD, State, and USAID have been phasing in the MOU requirement to use SPOT to track information on contracts and the personnel working on them in Iraq and Afghanistan. In January 2007, DOD designated SPOT as its primary system for collecting data on contractor personnel deployed with U.S. forces and directed contractor firms to enter personnel data for contracts performed in Iraq and Afghanistan. State started systematically entering information for both Iraq and Afghanistan into SPOT in November 2008. In January 2009, USAID began requiring contractors in Iraq to enter personnel data into SPOT. However, USAID has not yet imposed a similar requirement on its contractors in Afghanistan and has no time frame for doing so. In implementing SPOT, DOD, State, and USAID’s criteria for determining which contractor personnel are entered into SPOT varied and were not consistent with those contained in the MOU, as the following examples illustrate: Regarding contractor personnel in Iraq, DOD, State, and USAID officials stated the primary factor for deciding to enter contractor personnel into SPOT was whether a contractor needed a SPOT-generated letter of authorization (LOA). However, not all contractor personnel, particularly local nationals, in Iraq need LOAs and agency officials informed us that such personnel were not being entered into SPOT. For Afghanistan, DOD offices varied in their treatment of which contractor personnel should be entered into SPOT. Officials with one contracting office stated the need for an LOA determined whether someone was entered into SPOT. As a result, since local nationals generally do not need LOAs, they are not in SPOT.
In contrast, DOD officials with another contracting office stated they follow DOD’s 2007 guidance on the use of SPOT. According to that guidance, contractor personnel working on contracts in Iraq and Afghanistan with more than 30 days of performance and valued over $25,000 are to be entered into SPOT—as opposed to the MOU threshold of 14 days of performance or a value over $100,000. These varying criteria and practices stem, in part, from differing views on the agencies’ need to collect and use data on certain contracts and the personnel working on them. For example, some DOD officials we spoke with questioned the need to track contractor personnel by name as opposed to their total numbers given the cost of collecting detailed data compared to the benefit of having this information. However, DOD officials informed us that the agencies did not conduct any analyses of what the appropriate threshold should be for entering information into SPOT given the potential costs and benefits of obtaining such information prior to establishing the MOU requirements. As a result of the varying criteria, the agencies do not have an accurate or consistent picture of the total number of contractor personnel in Iraq and Afghanistan. Although officials from all three agencies expressed confidence that SPOT data were relatively complete for contractor personnel who need LOAs, they acknowledged SPOT does not fully reflect the number of local nationals working on their contracts. Agency officials further explained ensuring SPOT contains information on local nationals is challenging because their numbers tend to fluctuate due to the use of day laborers and local firms do not always track the individuals working for them. Absent robust contractor personnel data in SPOT, DOD, State, and USAID have relied on surveys of their contractors to obtain information on the number of contractor personnel. 
However, we determined the resulting data from these surveys are similarly incomplete and unreliable and, therefore, should not be used to identify trends or draw conclusions about the number of contractor personnel in each country. Additionally, officials from all three agencies stated that they lack the resources to verify the information reported by the contractors, particularly for work performed at remote sites where security conditions make it difficult for U.S. government officials to regularly visit. According to DOD officials, the most comprehensive information on the number of DOD contractor personnel in Iraq and Afghanistan comes from the U.S. Central Command’s (CENTCOM) quarterly census. As shown in table 1, DOD’s census indicated there were 200,807 contractors working in Iraq and Afghanistan as of the second quarter of fiscal year 2009, which is 83,506 more than what was reported in SPOT. However, DOD officials acknowledged the census numbers represent only a rough approximation of the actual number of contractor personnel in each country. For example, an Army-wide review of fiscal year 2008 third quarter data determined approximately 26,000 contractors were not previously counted. Information on these contractors was included in a subsequent census. As a result, comparing third and fourth quarter data would incorrectly suggest that the number of contractors increased, while the increase is attributable to more accurate counting. Conversely, there have also been instances of contractor personnel being double counted in the census. Although State reported most of its contractor personnel are currently entered into SPOT, the agency relied on periodic inquiries of its contractors to obtain a more complete view of contractor personnel in the two countries. State reported 8,971 contractor personnel were working on contracts in Iraq and Afghanistan during the first half of fiscal year 2009.
Even relying on a combination of data from SPOT and periodic inquiries, it appeared State underreported its contractor personnel numbers. For example, although State provided obligation data on a $5.6 million contract for support services in Afghanistan, State did not report any personnel working on this contract. USAID relied entirely on contractor surveys to determine the number of contractor personnel working in Iraq and Afghanistan. The agency reported 16,697 personnel worked on its contracts in Iraq and Afghanistan during the first half of fiscal year 2009. However, we identified a number of contracts for which contractor personnel information was not provided, including contracts to refurbish a hydroelectric power plant and to develop small and medium enterprises in Afghanistan worth at least $6 million and $91 million, respectively. Although some information on contracts is being entered into SPOT, the system currently lacks the capability to accurately import and track the contract data elements as agreed to in the MOU. While the MOU specifies contract values, competition information, and descriptions of the services being provided would be pulled into SPOT from FPDS-NG, this capability is not expected to be available until 2010. Once the direct link is established, pulling FPDS-NG data into SPOT may present challenges because of how data are entered. While contract numbers are the unique identifiers that will be used to match records in SPOT to those in FPDS-NG, SPOT users are not required to enter the numbers in a standardized manner. In our review of SPOT data, we identified that at least 12 percent of the contracts had invalid contract numbers and, therefore, could not be matched to records in FPDS-NG. Additionally, using contract numbers alone may be insufficient since specific task orders are identified through a combination of the contract and task order numbers. However, SPOT users are not required to enter task order numbers.
For example, for one SPOT entry that only had the contract number without an order number, we found that DOD had placed 12 different orders—ranging from a few thousand dollars to over $129 million—against that contract. Based on the information in SPOT, DOD would not be able to determine which order’s value and competition information should be imported from FPDS-NG. As SPOT is not yet fully operational as a repository of information on contracts with performance in Iraq and Afghanistan, DOD, State, and USAID relied on a combination of FPDS-NG, agency-specific databases, and manually compiled lists of contract actions to provide us with the contract information necessary to fulfill our mandate. None of the agencies provided us with a cumulative listing of all their contract actions for Iraq and Afghanistan. Instead, they provided a total of 48 separate data sets that we then analyzed to identify almost 85,000 contracts with performance in Iraq and Afghanistan that totaled nearly $39 billion in obligations in fiscal year 2008 and the first half of fiscal year 2009. Our analyses involved compiling the data from the multiple sources, removing duplicate entries, and standardizing the data that were reported. 
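The kind of compilation described above (merging dozens of per-agency data sets, removing duplicates, and standardizing contract numbers so they can be matched against FPDS-NG records) can be sketched as follows. The normalization rules, field layout, and validity check are assumptions for illustration only, not GAO’s actual methodology or the real contract-number format:

```python
import re

def normalize_contract_number(raw):
    """Strip spaces, hyphens, and case differences so the same contract
    reported by different agencies compares equal. Returns None for
    entries that do not look like plausible contract numbers."""
    cleaned = re.sub(r"[\s-]", "", raw or "").upper()
    # Hypothetical validity check: at least 13 alphanumeric characters
    # (e.g., a number shaped like W91CRB-09-D-0001).
    if not re.fullmatch(r"[A-Z0-9]{13,}", cleaned):
        return None
    return cleaned

def compile_contract_actions(data_sets):
    """Merge many lists of (contract_number, task_order, obligations)
    records, dropping duplicates and setting aside records whose
    contract numbers cannot be matched."""
    seen, merged, invalid = set(), [], []
    for records in data_sets:
        for contract_number, task_order, obligations in records:
            normalized = normalize_contract_number(contract_number)
            if normalized is None:
                invalid.append(contract_number)
                continue
            # A contract number alone is ambiguous when many task orders
            # exist under one contract, so key on the pair.
            key = (normalized, task_order)
            if key in seen:
                continue
            seen.add(key)
            merged.append((normalized, task_order, obligations))
    return merged, invalid
```

Keying on the (contract number, task order) pair mirrors the report’s point that a contract number by itself cannot identify which order’s value and competition information should be imported from FPDS-NG.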
To address the shortcomings we identified in the agencies’ implementation of SPOT, we recommended in our October 2009 report that the Secretaries of Defense and State and the USAID Administrator jointly develop and execute a plan with associated time frames for their continued implementation of the NDAA for FY2008 requirements, specifically ensuring that the agencies’ criteria for entering contracts and contractor personnel into SPOT are consistent with the NDAA for FY2008 and with the agencies’ respective information needs for overseeing contracts and contractor personnel; revising SPOT’s reporting capabilities to ensure that they fulfill statutory requirements and agency information needs; and establishing uniform requirements on how contract numbers are to be entered into SPOT so that contract information can accurately be pulled from FPDS-NG as agreed to in the MOU. In commenting on our recommendation, DOD and State disagreed with the need for a plan to address the issues we identified. They cited ongoing coordination efforts and anticipated upgrades to SPOT as sufficient. While USAID did not address our recommendation, it similarly noted plans to continue meeting with DOD and State regarding SPOT. We believe continued coordination among the three agencies is important. They should work together to implement a system that is flexible across the agencies but still provides detailed information to better manage and oversee contractors. However, they also need to take the actions contained in our recommendation if the system is to fulfill its potential. By jointly developing and executing a plan with time frames, the three agencies can identify the concrete steps they need to take and assess their progress in ensuring the data in SPOT are sufficiently reliable to fulfill statutory requirements and their respective agency needs.
Absent such a plan and actions to address SPOT’s current shortcomings, the agencies will remain reliant on alternative sources of data, which are also unreliable and incomplete. As a result, they will continue to be without reliable information on contracts and contractor personnel that can be used to help address some longstanding contract management challenges. Messrs. Chairmen, this concludes my prepared statement. I would be happy to respond to any questions you or the other commissioners may have. For further information about this statement, please contact John P. Hutton at (202) 512-4841 or huttonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Johana R. Ayers, Assistant Director; Noah Bleicher; Raj Chitikila; Christopher Kunitz; Heather Miller; and Morgan Delaney Ramaker. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

This statement discusses ongoing efforts by the Department of Defense (DOD), the Department of State (State), and the U.S. Agency for International Development (USAID) to track information on contractor personnel and contracts in Iraq and Afghanistan. Reliable, meaningful data on contractors and the services they provide are necessary to inform agency decisions on when and how to effectively use contractors, provide support services to contractors, and ensure that contractors are properly managed and overseen. The importance of such data is heightened by the unprecedented reliance on contractors in Iraq and Afghanistan and the evolving U.S.
presence in the two countries. The statement focuses on (1) how information on contractor personnel and contracts can assist agencies in managing and overseeing their use of contractors and (2) the status of DOD, State, and USAID's efforts to track statutorily required information on contractor personnel and contracts in Iraq and Afghanistan, as well as our recent recommendations to address the shortcomings we identified in their efforts. This statement is drawn from our October 2009 report on contracting in Iraq and Afghanistan, which was mandated by section 863 of the National Defense Authorization Act for Fiscal Year 2008 (NDAA for FY2008), and a related April 2009 testimony. Our prior work was prepared in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audits to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The need for information on contracts and contractor personnel to inform decisions and oversee contractors is critical given DOD, State, and USAID's extensive reliance on contractors to support and carry out their missions in Iraq and Afghanistan.
The agencies' lack of complete and accurate information on contractors supporting contingency operations may inhibit planning, increase costs, and introduce unnecessary risk, as illustrated in the following examples: (1) Limited visibility over contractors obscures how extensively agencies rely on contractors to support operations and help carry out missions; (2) Without incorporating information on contractors into planning efforts, agencies risk making uninformed programmatic decisions; (3) A lack of accurate financial information on contracts impedes agencies' ability to create realistic budgets; (4) Lack of insight into the contract services being performed increases the risk of paying for duplicative services; and (5) Costs can increase due to a lack of visibility over where contractors are deployed and what government support they are entitled to. DOD, State, and USAID have made progress in implementing the Synchronized Predeployment and Operational Tracker (SPOT). However, as we reported last month, DOD, State, and USAID's ongoing implementation of SPOT currently falls short of providing agencies with information that would help facilitate oversight and inform decision making, as well as fulfill statutory requirements.
Seniors are a heterogeneous group—many do not require assistance with transportation, and, in fact, most drive automobiles. However, according to data from the 2001 National Household Travel Survey conducted by DOT’s Bureau of Transportation Statistics, Federal Highway Administration, and National Highway Traffic Safety Administration, approximately 21 percent (6.8 million) of seniors aged 65 and older do not drive. The percentages are higher among minority populations aged 65 and older: approximately 42 to 45 percent of African-Americans and Asian-Americans do not drive, compared with 16 percent of Caucasians. Approximately 40 percent of Hispanics also do not drive. A person’s driving status is correlated with travel behavior. For example, one study found that drivers aged 75 and older made an average of six trips per week, compared with two trips per week for nondrivers. While some of this difference may be due to individual preferences or to other circumstances, such as an illness that prevents travel, some of the difference may be due to a lack of transportation alternatives. Those seniors with poor health or a disability, or who have a limited income, may face more difficulty finding and accessing transportation. According to data from the 2000 Census, about 37 percent of persons aged 65 and older reported having at least one disability, and about 10 percent were below the federal poverty line. Although not all of these seniors need assistance with transportation, a sizable number are likely to need such assistance. According to senior transportation experts, the “oldest of the old” (those aged 85 and older) are especially likely to be dependent on others for rides, particularly if they are also in poor health. Figure 1 shows some of the factors that affect seniors’ transportation needs. 
The more of these factors that seniors have in their favor, such as a network of family and friends who can drive them and an available public transportation system, the more likely it is that their mobility needs will be met. Transportation assistance is an important issue for all seniors. In 2001, approximately 26 percent of state units on aging surveyed by the Aging States Project identified transportation as a top health issue for older adults, and 38 percent identified inadequate transportation as a barrier to promoting health among older adults. Furthermore, transportation was among the top five information requests to the Eldercare Locator Service in 2001, 2002, and 2003. There is, however, a significant gender gap in the amount of time that seniors can expect to be dependent on alternative sources of transportation. A study published in August 2002 in the American Journal of Public Health estimated that men aged 70 to 74 who stopped driving would be dependent on alternative transportation for an average of 6 years, while women in the same age group would be dependent for an average of 10 years. Although there is no clear-cut definition of mobility need, the literature and the experts we consulted indicate that there are two main categories of mobility need, both of which are important to seniors: (1) “essential” or “life-sustaining” trips, which include medical trips and trips for employment, shopping, banking, and other necessary errands, and (2) “quality of life” or “life-enhancing” trips, which include recreational or social trips that enable a senior to fully participate and engage in the community, such as trips to concerts and the theater, visits with friends or with family members in nursing homes, religious activities, and volunteer activities. For the purposes of this report, we will use this twofold definition of types of trips as our working definition of mobility need.
Unmet need occurs when assistance from others is needed but is not provided or is inadequate. However, according to the experts we contacted, there is no agreed-upon standard or benchmark for the number of trips that an individual requires for daily living (both life-sustaining and life-enhancing activities), although experts generally agree that government should be concerned with meeting both types of needs for transportation-disadvantaged seniors. The lack of a standard or benchmark makes it difficult to determine an appropriate way to measure the extent to which mobility needs are being met. Researchers have begun to identify and evaluate transportation-disadvantaged seniors’ unmet mobility needs by comparing the number of trips they make with those of nondisadvantaged populations. In addition, some researchers have used satisfaction ratings to measure seniors’ unmet mobility needs. In the absence of a standard measure of need, we will discuss need and unmet need by comparing the travel of disadvantaged seniors with the travel of nondisadvantaged seniors and by using other measures that federal and local officials have developed. The federal government has traditionally provided some assistance in mobility, mostly for the purpose of accessing other federal program services. Federal agencies partner with local agencies, nonprofit organizations, and others that actually provide transportation services and also contribute their own funds. The federal agency that has a central role in providing all types of services to seniors is HHS’s Administration on Aging (AOA). With a total discretionary budget of more than $1.3 billion, AOA is the official federal agency dedicated to policy development, planning, and the delivery of supportive home and community-based services to older persons and their caregivers.
AOA works through a national aging network of 56 state units on aging; 655 AAAs; 241 tribal and native organizations representing 300 American Indian and Alaskan Native tribal organizations, and 2 organizations serving Native Hawaiians; and thousands of service providers, adult day care centers, caregivers, and volunteers. Five federal departments administer 15 programs that are key in addressing mobility issues of transportation-disadvantaged seniors. The programs are “senior-friendly” in that they help make transportation available, accessible, and affordable to seniors. Working with experts and federal agency officials, we identified 15 key programs in five departments that provide senior transportation (see table 1) out of the many federal programs that are used to provide transportation services. Some of these programs specifically target seniors, such as HHS’s Grants for Supportive Services and Senior Centers (Title III-B). Other programs—including DOT’s Nonurbanized Area Formula Program (Section 5311)—target other groups, such as rural populations, of which seniors can be a part. About half of the 15 programs fund transportation for specific types of trips, including for medical services, employment-related activities, and other services (such as nutrition) that the programs provide. The other half of the programs can be used to provide general transportation for any trip purpose. The programs fund a variety of types of services, ranging from transit passes and training in the use of public transit to vehicle purchases or expansion of public transit service. Funds from the 15 programs follow various paths in providing transportation services to seniors (see fig. 2). Many of the programs are block grants or formula programs through which funds are distributed to states on the basis of certain criteria, such as population. 
State agencies then provide services directly or distribute the funds to local agencies, nonprofit organizations, transit providers, and other organizations. For example, funds from DOT’s Capital Assistance Program for Elderly Persons and Persons with Disabilities (Section 5310) are allotted by formula to state agencies, which then distribute the funds to private nonprofit organizations or local public entities (such as transit providers) to purchase vehicles or other equipment. In another example, funds from HHS’s Grants for Supportive Services and Senior Centers (Title III-B) are distributed first to state units on aging according to the number of seniors residing in the state, and then to local AAAs, which generally contract for services with local transportation providers. In other programs, such as the Department of Labor’s Senior Community Service Employment Program, some funds go through the state while other funds go directly to nonprofit organizations or local service providers. Finally, other programs—such as HHS’s Rural Health Care Outreach Services Program—bypass state agencies altogether and go directly to local entities. Local entities can use funds from a variety of federal programs to provide transportation services to seniors. For example, AAAs can receive funds from the Title III-B program, DOT’s Capital Assistance Program for Elderly Persons and Persons with Disabilities (Section 5310), and other federal programs. 
The Beverly Foundation, a leading independent research organization on senior transportation issues, has identified the following “5 A’s” of senior-friendly transportation service: availability (service is provided to places seniors want to go at times they want to travel); accessibility (e.g., door-to-door or door-through-door service is provided if needed, vehicles are accessible to people with disabilities, and stops are pedestrian-friendly); acceptability (service is clean, safe, and user-friendly); affordability (financial assistance is provided to those who need it); and adaptability (service is flexible enough to accommodate multiple trip types or specialized equipment). However, there are trade-offs involved in addressing any of the “5 A’s.” For example, improving the acceptability of service can increase the costs of providing service. Our review of federal programs’ authorizing legislation and guidance, as well as interviews with federal program officials, indicates that most of the 15 key federal programs we identified in table 1 are generally designed to make transportation more available, accessible, and affordable to transportation-disadvantaged populations, such as seniors (see table 2). For example, HHS’s Medicaid Program provides transportation that is free or low-cost for seniors. Some of the programs address other attributes of senior-friendly transportation, such as acceptability. For example, the Department of Education’s Independent Living Services for Older Individuals Who Are Blind program can be used to train seniors in the use of the public transit system, making it both more accessible and acceptable to them. In addition to the 15 key programs identified in tables 1 and 2, the federal government helps to make transportation more senior-friendly through other programs and policies that provide or ensure access to transportation services for all disadvantaged populations (including seniors). 
Although seniors are not the target population of these other programs and policies, they often benefit from them. For example, seniors are eligible for many of the programs we identified in a previous report on the coordination of services for the transportation-disadvantaged. In that report, we identified 62 federal programs that can be used to provide transportation services, including the 15 programs identified above. For instance, seniors can benefit from the Department of Housing and Urban Development’s Community Development Block Grant Program, which can be used to purchase and operate vehicles in low-income areas, and the Department of Labor’s Workforce Investment Act Adult Services Program, which can be used to provide bus tokens or reimbursement for mileage to access training opportunities. Another federal program that does target seniors—Medicare, the federal health financing program covering almost all persons aged 65 and older and certain persons with disabilities—was not included in our list of 15 key programs because it funds only a very specific type of transportation service for seniors. Medicare covers medically necessary ambulance services when other means of transportation, such as a wheelchair van or a taxicab, are inadvisable, given the beneficiary’s medical condition at the time. Medically necessary ambulance trips include both emergency care, such as responses to 911 calls, and nonemergency care, such as transfers from one hospital to another. Medicare covers nonemergency transports—both scheduled and nonscheduled—if the beneficiary is bed-confined or meets other medical necessity criteria, such as requiring oxygen on the way to the destination. Many programs and policies that address the mobility needs of persons with disabilities also benefit seniors. 
For example, the Americans with Disabilities Act (ADA) has resulted in changes to many transportation-related facilities, including transit vehicles and bus stops, that make transportation more accessible to seniors with disabilities as well as others. Other federal ADA-related activities can also benefit seniors. For example, the Department of Justice’s Civil Rights Division is responsible for enforcing federal statutes, including the ADA, that prohibit discrimination on the basis of race, sex, handicap, religion, and national origin. In addition, Justice has published rules governing the design of transportation facilities, such as bus stops, to make them accessible to people with disabilities. Finally, the U.S. Architectural and Transportation Barriers Compliance Board—an independent entity within the federal government devoted to accessibility for people with disabilities—develops and maintains accessibility standards for transit vehicles, provides technical assistance and training on these standards, and ensures compliance with accessibility standards for federally funded facilities. The data on the nature of mobility needs that we obtained from research publications and interviews with federal officials, experts, and officials from 16 local AAAs indicate that federally supported programs are not meeting some of the mobility needs of transportation-disadvantaged seniors. In particular, (1) seniors who rely on alternative transportation have difficulty making trips for which the automobile is better suited, such as trips that involve carrying packages; (2) life-enhancing needs are less likely to be met than life-sustaining needs; and (3) mobility needs are less likely to be met in nonurban communities (especially rural communities) than in urban communities. However, there are few current or planned efforts to collect data for assessing the extent to which federally supported programs are meeting transportation-disadvantaged seniors’ mobility needs. 
In addition, AAAs’ methods for collecting and reporting data make it difficult to determine the extent to which transportation-disadvantaged seniors’ needs are being met, in part because of a lack of federal guidance on how to assess needs. According to experts and local officials, barriers to assessing the extent of unmet needs include the lack of consensus on how to define or measure needs, a lack of federal guidance, and the difficulties of measuring the unmet needs of seniors who are not attempting to access publicly funded services. Federally supported transportation services are meeting some, but not all, types of mobility needs of transportation-disadvantaged seniors. Although up to 75 percent of nondrivers aged 75 and older have reported being at least somewhat satisfied with their mobility, evidence from nationally published research and from interviews we conducted with federal officials, experts, and local aging professionals indicates that many of those seniors who are able to meet life-sustaining and life-enhancing needs are doing so because they have access to supportive family and friends who drive them or because they live in transit-rich cities. For those seniors who do not have access to these support structures or who live in nonurban areas, some mobility needs—especially those related to life-enhancing activities—may not be met. Data from nationally published research indicate that transportation-disadvantaged seniors prefer the automobile to other modes of transportation because it is readily available, can reach multiple destinations in the course of one trip, and can be used to access destinations that require carrying packages (such as shopping). In focus groups conducted by AARP, the general consensus among participants was that access to ready transportation provided by the private automobile is critical to overall life satisfaction. 
In comparison, seniors perceived other modes such as public transit, specialized transportation (such as senior vans), and walking as having inherent negative attributes—including time spent waiting, waits in bad weather, difficulty carrying items, scheduling requirements, infrequent service, and concerns about personal security and accessibility—that made them less attractive than driving or being driven. Consistent with this, a survey conducted by AARP found that senior nondrivers use automobile rides from family or friends more than other modes of transportation to get where they need to go (see fig. 3). Even if seniors could overcome some of these negative perceptions of alternatives to the automobile, they may not be able to use the alternatives because the alternatives might be unavailable in their community or are inaccessible to seniors. In a survey by AARP, about 33 percent of senior nondrivers who reported that they did not use public transportation said that it was because public transportation was not available. In focus groups conducted for the Coordinating Council on Access and Mobility, HHS, and the National Highway Traffic Safety Administration, participants reported having trouble walking long distances, getting to the bus stop, getting on and off buses, and seeing street signs from the bus so that they knew where and when they should disembark. Similarly, more than one-third of the respondents in one study’s focus groups reported that they would be unable to walk one-quarter mile to a bus stop. Data from nationally published research indicate that difficulty in getting the transportation they needed interfered with transportation-disadvantaged seniors’ activities and trip-making, especially for life-enhancing needs such as social or recreational activities. 
For example, a report analyzing data from the 2001 National Household Travel Survey found that seniors who do not drive made 15 percent fewer trips to the doctor than drivers, but made 65 percent fewer trips for social, family, religious, and other life-enhancing purposes. In addition, although few seniors in an AARP survey reported that a lack of transportation interfered with their activities—such as getting to the doctor, their place of worship, the grocery store or drug store, or entertainment; shopping for clothes or household items; or visiting with friends—nondrivers were two to three times as likely as drivers to report that a lack of transportation interfered with such activities. Furthermore, a study that analyzed responses from seniors in focus groups reported that older adults who have stopped driving significantly curtailed their recreational activities. One participant who had stopped driving reported, “What I do now, my daughter tries to take me shopping once a week for heavy items, which is very helpful. But I’m accustomed to going from mall to mall and store to store to see things, you know, and I don’t get around like that. I’m very limited.” Federal officials and experts we interviewed also said that the available transportation options are not meeting seniors’ mobility needs, especially for life-enhancing trips. Several experts said that, while mobility needs are being met for the majority of seniors who drive—and even for some transportation-disadvantaged seniors who live in transit-rich environments, who have access to supportive family and friends, or who have knowledge of and access to nonprofit or other organizations that provide transportation—the mobility needs generally are not being met for transportation-disadvantaged seniors without these options. 
Although a few officials and experts said that for most seniors, trips for life-sustaining needs (e.g., medical appointments) are likely being met, others said that such needs are not being met. Finally, the majority of AAA officials we interviewed said that transportation-disadvantaged seniors’ needs were not being met. (Although 3 of the 16 AAAs said that needs were being met with the limited funding available, they also cited gaps in service.) Furthermore, although the AAA officials we interviewed were split in their perspectives on whether needs for travel to critical, life-sustaining activities were being met, nearly all said that needs for travel to life-enhancing activities such as church and shopping at the mall were not being met. In addition, all of the AAAs we interviewed imposed restrictions that limited or prioritized transportation services for life-sustaining activities. For example, many AAAs require advance notification (e.g., 24-hour notification) for service and most restrict service to approximately 9 a.m. to 5 p.m. on weekdays, which limits spontaneous travel and travel in the evenings when many cultural and social events take place. Furthermore, most AAAs offer transportation only within the counties or towns they serve, which limits access to activities. Finally, when we asked AAA officials about the destinations to which they provide transportation, most identified essential, life-sustaining sites, such as nutrition sites, medical facilities, grocery stores, pharmacies, public service agencies, and banks. Only a few AAAs offered transportation for life-enhancing activities, such as for recreational or cultural events, or for visits to spouses or other family or friends in long-term-care facilities, and some explicitly stated that they were unable to provide service for personal or life-enhancing activities. The AAA officials told us that all of these constraints were due to limited funding availability. 
The travel of transportation-disadvantaged seniors living in nonurban communities is more restricted than the travel of transportation-disadvantaged seniors living in urban communities. A study analyzing 2001 National Household Travel Survey data indicated that older Americans living in small towns and rural areas who do not drive were more likely to stay home on a given day than their urban and suburban counterparts—63 percent of nondrivers in small towns and 60 percent of nondrivers in rural areas reported that they stayed home on a given day, compared with 51 percent of nondrivers living in urban and suburban areas. By themselves, these data do not show whether mobility needs go unmet because of limited transportation options or because of other aspects that distinguish rural communities from urban ones, such as fewer activities and longer distances between destinations. However, data we obtained from other sources support the idea that the lack of transportation is a significant reason for these travel patterns. For example, in focus groups and interviews that AARP conducted in 2001 with seniors aged 75 and older, nondrivers living in the suburbs were less satisfied than urban nondrivers that their mobility needs were being met. In addition to identifying feelings of lost freedom, diminished control, and altered self-image, several suburban participants noted that they make fewer trips and pursue fewer activities as nondrivers, whereas the urban nondrivers expressed more satisfaction with their ability to get around. In addition, in a survey by AARP, respondents living in cities reported that they were more likely to have public transportation available to them than respondents living in rural areas (see fig. 4). Furthermore, several federal officials and experts we interviewed said that the needs of transportation-disadvantaged seniors are not being met with available transportation options, especially for those seniors living in rural communities.
Similarly, when we asked AAA officials whether transportation-disadvantaged seniors’ needs were being met, nearly half offered the view that needs were not being met for those living in rural communities because of the long distances required to travel to facilities and the resulting need for the driver to wait to bring the senior back. In addition, some said there are geographic regions in rural areas that are not served at all by public transportation, taxicab, or other transportation providers. Because most of the federal programs that fund transportation for transportation-disadvantaged seniors do not focus specifically on seniors or transportation (instead, seniors may be one of several target populations, and transportation may be one of several supportive services provided by the program), federal agencies have minimal program data about the extent of seniors’ unmet transportation needs. Five of the 15 key federal programs that provide transportation to seniors—the Department of Education’s Independent Living Services for Older Individuals Who Are Blind program and HHS’s Social Services Block Grants, Community Services Block Grant Programs, Grants for Supportive Services and Senior Centers (Title III-B), and Program for American Indian, Alaskan Native, and Native Hawaiian Elders (Title VI)—collect some nonfinancial performance data related to senior transportation. Most of the data collected for these 5 programs provide only information on usage, such as the number of seniors receiving transportation services or the number of one-way trips provided to seniors. In addition, for transit programs that serve the general public, the Federal Transit Administration collects data on the number of rides and the number of people served, but these data are not broken out by federal program or by age. However, AOA officials told us that they are beginning to measure performance outcomes related to transportation services under the Title III-B program. 
On the basis of a national survey it conducted in 2004, AOA estimated that state and area agencies on aging provided transportation services to approximately 440,000 seniors in fiscal year 2003. AOA officials told us that most of the respondents rated the transportation services as good or excellent, and that many respondents reported that they relied on these services for all or nearly all of their local transportation needs. Although this information is useful in assessing the satisfaction of seniors who receive transportation services, it does not measure the extent of unmet needs. Officials from AOA and the Federal Transit Administration currently are assessing the state of data on seniors’ mobility needs to identify baseline data on needs and available resources. Similarly, few AAAs use, or plan to use, data collection methods that enable them to determine the extent of seniors’ unmet mobility needs—that is, information on both the extent of need in the community and the capacity of services, including their own, to provide transportation to seniors to meet those needs. AAAs are required to determine the extent of need for supportive services (which could include transportation) provided through HHS’s Title III-B program and to evaluate how effectively resources are used to meet such need. However, several AAAs we interviewed reported that they do not collect this type of data at all. Of those AAAs reporting that they do collect data on the extent of unmet needs, most collect data on the number of seniors who called the AAA to request transportation services that the agency was unable to provide (including data such as the number of trip denials and the number of seniors on a waiting list). There are a number of limitations to this type of data. 
For example, a few AAAs reported that waiting list data were not reliable in measuring the unmet needs of seniors because the data allowed multiple-counting of seniors who are wait-listed by more than one transportation provider or who periodically call for rides and are added to the waiting list each time they call. In addition, AAAs reported that waiting list data were not entirely representative of unmet needs because these data include information only on seniors who call for service and not on seniors who do not call (because no services are available, because they do not know what services are available, because they are tired of being turned down, because they moved to an assisted living facility since they had difficulty obtaining transportation, or because of some other reason) but who may still need rides. Furthermore, the waiting list data do not allow for calculating the number of seniors who were referred to other transportation services and were able to get rides through these other services. Only 2 of the 16 AAAs (the Salt Lake County Aging Services and the Bear River Association of Governments, both in Utah) have a method for determining the gap in transportation service by calculating the difference between the number of seniors who are in need of transportation and the number of seniors who are receiving service through other providers, or through family and friends. Finally, there is little information from national surveys and studies that addresses the extent to which transportation-disadvantaged seniors’ needs are being met; rather, those surveys and studies focus on the nature of needs, as discussed in the previous section of this report. For example, one report prepared by DOT’s Bureau of Transportation Statistics analyzes 2002 data from the Transportation Availability and Use Survey on the travel behavior of persons with disabilities, but the findings are not broken down by age. 
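The gap calculation used by the two Utah AAAs can be expressed as a simple subtraction: seniors in need minus seniors already served by other providers or by family and friends. The sketch below illustrates that arithmetic; the function name and all counts are hypothetical assumptions for illustration, not data from this report.

```python
# Minimal sketch of the service-gap method described above:
#   gap = seniors who need transportation
#         - seniors already served (by other providers, family, or friends).
# All names and numbers here are illustrative assumptions, not report data.

def service_gap(seniors_in_need, served_by_providers, served_informally):
    """Return the number of seniors whose transportation need is unmet."""
    return seniors_in_need - (served_by_providers + served_informally)

# Hypothetical planning-area counts:
gap = service_gap(seniors_in_need=1200,
                  served_by_providers=700,
                  served_informally=300)
print(gap)  # 200 seniors without any transportation source
```

Because the result can go negative where capacity exceeds need, an agency applying this method would typically floor the reported gap at zero.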
Another Bureau of Transportation Statistics report analyzing the same data source provides some insights on the types of travel problems encountered by seniors with disabilities, but it does not provide data that can be used to measure the extent of those seniors’ transportation needs or to determine whether those needs are being met. Senior mobility experts told us that there is no clear-cut definition of mobility needs, making it difficult to determine the extent to which such needs are being met. Although many of the experts we contacted mentioned the distinction between life-sustaining and life-enhancing needs, they did not provide a more concrete definition. Many of these experts also said that they were not aware of an agreed-upon standard or benchmark for assessing seniors’ unmet mobility needs. One researcher said that the topic of seniors’ mobility needs is just beginning to be discussed in the literature, so a standard has not yet been developed. In addition to the lack of consensus on definitions or measures of need, there is also little guidance on assessing mobility needs. Although some of the 15 key federal programs we identified require state or local agencies to assess the need for services, federal agencies provide little guidance on how to do this. As previously noted, HHS’s Title III-B and Title VI programs—through which AOA provides grants to states and Native American tribes for senior services—require AAAs to prepare a plan that includes an assessment of the needs of disadvantaged seniors, which could include transportation needs. 
Furthermore, the Older Americans Act, as amended, requires AOA to provide guidance to states on assessing needs, specifically “to design and implement [for program monitoring purposes]…procedures for collecting information on gaps in services needed by older individuals” and “procedures for the assessment of unmet needs for services....” Although AOA has developed general guidance for Native American tribes on conducting needs assessments for its Title VI program, the program guidance that the agency provides to states for its Title III-B program does not include guidance on how to assess and measure needs or on specific data collection methods. As a result of the lack of guidance on assessing need, most of the AAAs that we interviewed reported assessing seniors’ unmet mobility needs using a range of data collection methods that resulted in data not specific enough for planning purposes, and not indicative of the precise extent to which seniors’ mobility needs are being met. While some AAAs said they did not need additional data, other AAAs we spoke with said that more precise information on the extent of unmet need would be useful in designing services and getting political support and funding for services, but some do not have the staff, funds, or expertise to develop methodologies to do this. They said that guidance from the federal government in this regard would be very useful. Officials at AOA said that, in the past, they have not provided guidance to state and local aging agencies on how to assess needs for the Title III-B program because they received feedback that state and local aging agencies had a more immediate desire for guidance on assessing the quality of service and collecting information on client characteristics. To this end, AOA is currently developing a plan for evaluating the various supportive services, including transportation, provided through its Title III-B programs. 
The evaluation effort will address the needs of states and communities for supportive services and the extent to which the Title III-B program is meeting the needs and preferences of the elderly for those services. As part of the evaluation, AOA plans to address questions about the role of AAAs in providing supportive services, how needs assessments are performed by state and local entities, and how the results of those assessments are used by states in implementing the Title III-B program. On the basis of the results of our interviews with AAA representatives, the AOA official responsible for the planned evaluation said that it would be useful to obtain some additional information during the evaluation to determine the need for services under the Title III-B program, including (1) identifying how needs should be defined and measured; (2) determining the range of methodologies that AAAs use for assessing seniors’ need for services, including transportation, and unmet needs; and (3) identifying the kinds of guidance that AAAs want from AOA and states to help them perform their required needs assessments. AOA plans to complete its evaluation of this program by January 2006. Other federal program regulations also require or encourage local agencies to assess need to be eligible for funding. For example, DOT’s Capital and Training Assistance Program for Over-the-Road Bus Accessibility (which provides funds to bus operators to help make their services more accessible to persons with disabilities) lists “identified need” as one of the criteria for selecting grantees, and HHS’s Community Services Block Grant Program (which provides funds for services to address the needs of low-income individuals) requires grantees to assess need for services and report this information to the state. However, these agencies do not provide guidance for assessing need for most of these programs. 
DOT officials said that they allow local applicants for the Capital and Training Assistance Program for Over-the-Road Bus Accessibility to decide what measures to use to demonstrate need, and the measures vary accordingly. For example, some of these applicants have provided information on the number of trips that were denied for lack of an accessible vehicle, while others have demonstrated need on the basis of the number of trips provided using an existing lift-equipped vehicle. For its Job Access and Reverse Commute Program, DOT asks applicants to provide data on the percentage of low-income persons in the area as well as on transportation gaps between existing services and employment opportunities for these persons, and the agency provides some guidance on how to identify such gaps. HHS provides some guidance for assessing the need for services under the Community Services Block Grant Programs, but the guidance is for assessing a wide range of services, of which transportation is only one. Federal officials report that unmet mobility needs are difficult to measure largely because of the difficulty of identifying and surveying those transportation-disadvantaged seniors who are not trying to access transportation services (such as those who do not call for service because they have given up trying to get transportation or are not aware of services). Some AAA officials and federal officials said that collecting this type of data is time-consuming and expensive. In addition, there may be other difficulties in reaching these seniors. For example, they may have difficulty hearing questions posed over the telephone, may be wary of providing personal information, or may be reluctant to admit that they need assistance or that they can no longer safely drive themselves to activities they need or want to attend.
Transportation providers use a variety of practices—which we have grouped into three categories—to enhance the mobility of transportation-disadvantaged seniors and promote the cost-effective delivery of transportation services. These include practices that (1) improve service efficiency by increasing the use of technology and coordinating services with other providers in the community; (2) improve customer service by providing training sessions for service staff and seniors, using vehicles that can accommodate seniors’ mobility challenges, and increasing the level of service provided; and (3) leverage existing resources by increasing volunteer involvement and forging financial partnerships with public and private entities in the community. According to the local service providers we interviewed, these practices, which were implemented with some federal support, resulted in more senior-friendly transportation services and more cost-effective service delivery. All 10 local transportation service providers we interviewed indicated that they had been able to use funds from 1 or more of the 15 key federal programs in implementing practices that enhance senior mobility. The most commonly used programs were DOT’s Capital Assistance Program for Elderly Persons and Persons with Disabilities (Section 5310) and HHS’s Title III-B and Medicaid Programs, followed by DOT’s Nonurbanized Area Formula Program (Section 5311) and HHS’s Community Services Block Grant Programs. However, according to the providers we interviewed, certain characteristics of federal programs may impede the implementation of practices that enhance transportation-disadvantaged seniors’ mobility. According to a 2002 report prepared by DOT’s Transit Cooperative Research Program (hereafter referred to as the TCRP report), local transportation providers have implemented a number of program practices to improve public transportation services for seniors.
The 10 local service providers we interviewed in urban and rural areas have implemented some of these practices, as discussed below. Increasing the use of technology: According to the TCRP report, using advanced technology can improve efficiency, productivity, and cost-effectiveness. Global Positioning Systems (GPS) and other advanced technologies can provide real-time information about where vehicles are located, when they will arrive to pick up a senior, and how long the trip may take. Two of the 10 local service providers we interviewed are using advanced technology to improve their trip scheduling. For example, Sweetwater Transportation Authority in Rock Springs, Wyoming, is using GPS technology on board each bus, connecting the bus to software that will automatically schedule rides and provide an accurate estimated time of arrival to passengers. The Friendship Center, which offers door-through-door transportation services in Conroe, Texas, is in the early stages of implementing a computerized dispatching and mapping system that will allow same-day scheduling to transport seniors to their destinations. In the past, all scheduling was done manually and seniors often had to call 48 hours in advance to schedule a ride. According to Friendship Center officials, the implementation of the computerized mapping system will increase efficiency and coordination of its transportation service, which will also improve the level of service provided to seniors. Coordinating transportation services: According to the TCRP report and our previous work, coordination of transportation services can improve the overall efficiency of operations, increase the productivity of services, reduce service costs, and increase mobility. Our previous work indicated that the extent of coordination of transportation services varies.
Several service providers we interviewed have implemented a coordinated transportation service, including Mountain Empire Older Citizens (MEOC), which is located in central Virginia. MEOC recognized that coordination was needed because each human service agency in the area was transporting its own clients exclusively, while vehicles from other agencies were picking up passengers in the same area. Under its coordination contract, MEOC leases vehicles from other specialized transportation service providers and coordinates all aspects of transporting their clients (including other transportation-disadvantaged groups, such as people with developmental disabilities). As a result, MEOC has maximized the efficient use of its vehicle fleet and realized cost savings in service delivery, according to an agency official. Another service provider, the Friendship Center, coordinates its transportation services with medical facility staff to schedule medical appointments for seniors. The dispatchers at the center work directly with the medical providers to schedule medical appointments for seniors when the center’s transportation services are available. In addition, the center’s hours for transportation services reflect those of the medical centers. By coordinating its services, the center helps ensure that seniors do not encounter transportation scheduling problems. Lastly, Medical Motor Service, which provides transportation and brokerage services to seniors in Monroe County, New York, coordinates with other nonprofit agencies to provide volunteers who serve as “shopping buddies” to help seniors carry packages or assist them with their groceries.
Providing training to staff and seniors: According to the TCRP report and a brochure on innovative transit services for seniors developed by the Beverly Foundation and the Community Transportation Association of America (hereafter, Innovations Brochure), training for service staff—particularly drivers—and for senior riders is important in improving transportation services. The TCRP report states that staff training should address customer service issues, such as the need for polite and courteous interactions by drivers with passengers and the physical constraints seniors encounter while using public transportation. The TCRP report also indicates that customer service training should be part of an overall change in organizational focus, from just operating vehicles to serving customers. Several service providers we interviewed were implementing training to improve customer service by helping seniors feel more comfortable while being transported. For example, Altoona Metro Transportation, which provides public transit service to the general public in central Blair County in Pennsylvania, developed a driver-training sensitivity program through which drivers receive specialized training to recognize the diverse needs of seniors. In what is considered a “hands-on” session, drivers wear special glasses to distort their vision so that they can temporarily experience the physical limitations that some seniors face while riding public transportation. An Altoona Metro official also told us that drivers are encouraged to socialize with senior passengers and foster relationships to make seniors feel comfortable and welcomed. In addition to training for staff, providers are also implementing travel-training programs to teach seniors who are not accustomed to using transit services how to use public transportation. One service provider, North County Lifeline, Inc. 
(a curb-to-curb transit service located in the northern San Diego area), developed a travel-training program for seniors to learn about public transit and reduce any concerns they may have about personal safety when using transit. The program includes instruction in how to problem-solve, map out a trip, make transfers, and understand the rights and responsibilities they have while riding public transportation. Using vehicles that can accommodate seniors’ mobility challenges: Using vehicles that accommodate the mobility challenges of seniors—such as purchasing low-floor buses, equipping vehicles with lifts, or modifying vehicles to make them identifiable and visually appealing (by using buses with distinctive colors to designate specific routes or with large see-through windows)—may help address some of the physical challenges (such as difficulties boarding a bus or van) and emotional challenges (such as concerns about boarding the wrong bus or personal safety) that seniors may face while using public transportation. For example, the TCRP report states that low-floor buses provide advantages over conventional buses because they shorten the distance between the first step on the bus and the curb (e.g., the first step on a conventional bus is approximately 9 to 12 inches above the curb, whereas the first step on the latest low-floor buses is less than 3 inches above the curb). However, there may be constraints in using such buses—one service provider we interviewed found them impractical for the provider’s service area, which contains hilly terrain and many narrow streets. The majority of service providers we interviewed use lift-equipped vehicles to transport seniors who use wheelchairs. Several of the service providers are also using vehicles that are easily identifiable and visually appealing to further address concerns seniors may have about using public transportation. 
For example, several of the service providers we interviewed said that they transport seniors in vehicles that are color-coded to designate specific routes or that have large, nontinted windows to limit the confusion that seniors face while trying to determine which bus to board, to provide a sense of personal security, and to “demystify” public transportation for seniors. Increasing level of service: According to the TCRP report, increasing overall service levels is vital to meeting the mobility needs of a growing senior population. Some of the local service providers we interviewed said that the practices they implemented allowed them to improve their services by expanding service hours for life-sustaining trips (as much as their funding allows), accommodating all requests as they arise (even if that means temporarily modifying a route), and expanding services to include life-enhancing trips (e.g., field trips sponsored by senior centers and trips to a therapeutic warm-water pool program). For example, a MEOC official told us that the provider expanded its service from 8 hours to 12 hours per day on weekdays to provide transportation for life-sustaining trips (e.g., medical appointments), and that the agency plans to modify an existing route to provide service regardless of how little notice is given. MEOC’s computer scheduling system enables dispatchers to radio the nearest driver and ask him or her to modify the current route to fit in an extra pick-up or drop-off. In another example, Gold Country Telecare, a nonprofit agency that provides accessible specialized transportation in rural northern California, learned through interviews with others in the local community involved in senior transportation that seniors were often isolated on weekends, when transportation services were rarely available for them. 
To address this need, the agency increased its service level by implementing an all-day Sunday transportation service for seniors to get to church or other activities, such as grocery shopping. Increasing volunteer involvement: According to the TCRP report and the Innovations Brochure, volunteer involvement may lead to cost savings in delivering transportation services to seniors by reducing the need for paid staff. The local service providers we interviewed used volunteers in a variety of ways. For example, Gold Country Telecare implemented a volunteer driving program under which volunteers are reimbursed for mileage expenses incurred in using their personal vehicles to transport seniors to medical and health treatment facilities located in a nearby urban center. According to a Gold Country Telecare official, this program allows seniors to participate in health therapies or medical services not found in their rural community. OATS, Inc., a transportation service provider in Missouri, uses volunteers who act as dispatchers, taking calls in their homes from people in the community who need trips. The volunteers transfer requests to the driver, who then schedules the trips. The use of volunteers allows OATS to provide more cost-effective and more frequent service by avoiding the administrative expense of having an office in each of the 87 counties it serves. Furthermore, according to an OATS official, the value of the volunteer hours (including the in-kind allowance for the use of their personal telephones and space in their home) translates into approximately $1.6 million in cost savings per year. Forging partnerships with private and public entities: The TCRP report suggests forging financial partnerships with public and private entities in the community to address funding concerns and to diversify funding sources. 
Several of the local service providers we interviewed developed private/public partnerships, including (1) contracts with private entities for revenue-enhancing activities, such as using the service providers’ vehicles to transport other groups when the vehicles were not needed for senior transportation or to transport seniors to specific locations, such as shopping sites, and (2) joint agreements with human service agencies to provide specialized services for clients who need additional assistance. For example, the Friendship Center contracts with private entities to provide shuttle services from employee parking to employment sites, from overflow parking lots to special event venues, and to community churches on Sunday mornings, among other similar transportation services. According to center officials, these additional contracts for shuttle services bring in approximately $140,000 in additional annual revenue, which is used to provide additional senior transportation services and represents approximately 15 percent of the center’s annual budget for senior transportation. Another local service provider that diversified its funding sources, Medical Motor Service, developed a partnership with a regional private supermarket to supplement its fund-raising efforts. Under this arrangement, Medical Motor Service receives approximately $300,000 in annual funding from the supermarket to transport seniors to and from the grocery store. This sum represents 18 percent of the provider’s annual senior transportation budget. As a result of this arrangement, seniors residing in 55 housing complexes have transportation for grocery shopping or for renewing medical prescriptions at any of the 14 supermarkets located in Monroe County. However, one trade-off in having an exclusive partnership with one grocery store chain is that, unlike seniors (and others) who can drive, seniors who rely on such a service do not have a choice of where to shop.
In that regard, Special Transit, a local service provider in Boulder, Colorado, identified a need to diversify its funding sources to reduce dependence on any one source of funds, helping to ensure continuity of service for all of its clients, including seniors. To do so, it hired an outreach coordinator to identify other service providers in the community (such as senior day care programs, senior centers, and local hospitals) that were interested in having Special Transit provide transportation services. In addition, the coordinator was tasked with identifying opportunities for generating private donations. Through its partnerships, Special Transit reduced its dependence on public funding (including federal and local government grants and matching funds) from more than 80 percent of its total revenue sources in the mid-1980s to approximately 65 percent in 2004. Presently, Special Transit’s service contracts and private donations account for approximately 30 percent of its total revenues. Table 3 provides examples of some of the practices and federal funding sources used by the local service providers we interviewed. The implementation of these practices contributed to the improvement of senior transportation services by making them more senior-friendly, according to the 10 local service providers we interviewed. In particular, these practices collectively addressed the five A’s of senior-friendly transportation previously discussed—availability, accessibility, acceptability, affordability, and adaptability—as follows: The majority of service providers told us that they made transportation services readily available for seniors to get to needed medical locations. The 10 providers said that their services are tailored to ensure that seniors can access the vehicles: that is, pick-up locations are easy for seniors to walk to, one-on-one escort service is available to seniors who need special assistance, or lift equipment is installed in the vehicles. 
Several service providers stated that they use vehicles that are identifiable and visually pleasing to make sure their vehicles are acceptable to seniors. Most of the service providers also indicated that their services are affordable because they are free to seniors or minimal donations are requested at the time of service. More than half of the service providers said that their services are adaptable and flexible enough to accommodate the service requests and the mobility limitations some seniors may have. In addition, the majority of the service providers we interviewed said that their organizations realized cost savings and increased the quality and quantity of service by implementing the practices. For example, as previously noted, the coordinated transportation service implemented by MEOC allowed lower per-unit costs, which also resulted in cost savings for all the agencies involved. According to a MEOC official, the cost savings allowed MEOC to increase the number of trips provided, increase the hours of operation, continue to afford dispatchers, hire more transportation managers, and provide adequate training for drivers—all of which translated into improvements in the quantity and quality of service to MEOC’s clients. According to the service providers we interviewed, the most common way in which federal programs support the implementation of practices that enhance transportation-disadvantaged seniors’ mobility is by providing funding. As previously noted, the 10 providers we interviewed use funds from at least 1 of the 15 key federal programs in implementing practices that enhance transportation-disadvantaged seniors’ mobility. (See table 3 for the federal funding sources associated with each service provider.) 
We found that DOT’s Capital Assistance Program for Elderly Persons and Persons with Disabilities (Section 5310) and HHS’s Grants for Supportive Services and Senior Centers (Title III-B) and Medicaid Programs are the federal programs most often used by the 10 providers we interviewed, followed by DOT’s Nonurbanized Area Formula Program (Section 5311) and HHS’s Community Services Block Grant Programs. According to some of the service providers, the federal programs had both a direct and an indirect role in providing technical assistance for the implementation of practices to enhance transportation-disadvantaged seniors’ mobility. In some cases, federal programs provided direct technical assistance (by providing information on how to apply for program funding or how to implement the service or by providing contact information for other resources) through program representatives or through the program’s Web site. Several providers stated that, as grantees, they obtained technical assistance from DOT’s Intelligent Transportation Systems (ITS) program, which assigned consultants to their organizations to provide assistance in selecting software and hardware and developing requests for proposals. One service provider further added that he found DOT’s ITS program Web site to be useful in obtaining information on best practices and on other technology-related resources. Another service provider received technical assistance through both Federal Transit Administration representatives and the state’s transit association on how to obtain funding through the Job Access and Reverse Commute Program. In other cases, some providers stated that the federal government indirectly provided guidance or technical assistance. 
For example, guidance on implementing practices and marketing services to the senior community was provided through federally funded professional organizations, such as the Community Transportation Association of America and the National Academy of Sciences’ Transportation Research Board. Other service providers we interviewed told us that the federal programs did not provide assistance (other than funding) or guidance on implementing practices to enhance transportation-disadvantaged seniors’ mobility, so they had to turn to state and regional transit agencies or other local transportation service providers for guidance or technical assistance. One service provider said that it researched and sought out other mobility management programs and travel-training programs to learn how to implement such programs, because this information was not available from federal or state agencies. Several providers told us that finding information on successful practices for enhancing transportation-disadvantaged seniors’ mobility required considerable staff time and other resources, and that a centralized source—particularly a Web-based source—for such information would be useful. Many of the providers suggested that providing such a Web site would be an appropriate role for the federal government. AOA, the lead federal agency for coordinating programs for seniors and disseminating information relevant to them, has some transportation information available on its Web site, but there are some limitations to this information, as discussed in more detail in the section below on senior mobility obstacles and strategies. According to the local providers we interviewed, certain characteristics of federal programs can impede the implementation of practices that enhance transportation-disadvantaged seniors’ mobility.
Although federal programs provide financial support for practices that enhance senior mobility, an expert in senior mobility and several service providers stated that receiving federal funds entails burdensome reporting requirements. Often, the local service providers receive funding from several federal programs with different reporting requirements and therefore have to submit several different reports calling for different data. One provider stated that submitting all of the required documentation for DOT’s Capital Assistance Program for Elderly Persons and Persons with Disabilities (Section 5310) and HHS’s Grants for Supportive Services and Senior Centers (Title III-B) Program required 720 administrative hours each year (equivalent to over $10,000), costing the provider more in administrative costs than the actual funding received through the federal programs. Another service provider we interviewed said it dedicates about 1,690 administrative hours annually to complying with the reporting requirements of the Title III-B program, Medicaid, and DOT’s Congestion Mitigation and Air Quality Improvement Program, including tasks such as tracking the different data requested by each program, organizing documents, and following up on required information. The provider noted that the 1,690 hours (equivalent to about $60,000 in costs) represented a significant portion (14 percent) of the total federal program funding received under those programs.
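The figures cited by the second provider imply an hourly labor cost and a total funding base that can be checked with simple arithmetic. In the sketch below, the hourly rate and implied funding total are our back-calculations from the reported 1,690 hours, $60,000, and 14 percent, not figures stated by the provider.

```python
# Back-of-the-envelope check of the reporting-burden figures cited above:
# 1,690 administrative hours, equivalent to about $60,000, said to be
# 14 percent of the federal program funding received. The derived rate
# and funding total are assumptions/back-calculations, not reported values.

hours = 1690
cost = 60000.0
share = 0.14

hourly_rate = cost / hours      # implied cost per administrative hour
implied_funding = cost / share  # funding level consistent with the 14% figure

print(round(hourly_rate, 2))    # 35.5  (dollars per hour)
print(round(implied_funding))   # 428571  (dollars)
```

The implied rate of roughly $35 per hour and funding base of roughly $430,000 are consistent with the provider's characterization of the burden as a significant share of the funding received.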
Federal officials told us that the Coordinating Council on Access and Mobility is examining possible ways to streamline the reporting requirements of the various federal programs that fund transportation for disadvantaged populations. The council, a federal body consisting of representatives from 10 federal agencies (including the Departments of Education, Labor, Health and Human Services, Transportation, and Veterans Affairs), is charged with coordinating transportation services provided by federal programs and promoting the maximum feasible coordination at the state and local levels. Council officials said that a paper addressing this issue will be developed and presented in 2004 or early 2005. Some of the local providers said that federal guidance on how to apply for funding and comply with reporting procedures is limited. For example, one service provider stated that it has not received technical guidance from DOT explaining the funding process for the Urbanized Area Formula Program (Section 5307). Instead, the provider contacted other local nonprofit organizations for technical assistance in understanding the funding process, but its funds were delayed in the meantime. The provider said that it contacted local DOT representatives but was unable to determine the cause of the delay. As a result, the provider said, it had to convince its nonprofit board of directors to continue providing services without the promised federal funds so that seniors would still have transportation services available. Lastly, several of the service providers perceive program guidelines as rigid, although the federal officials we contacted disagreed with the providers’ interpretations. For example, one provider stated that the guidelines for DOT’s Nonurbanized Area Formula Program (Section 5311) are very rigid in that the funds may be used only for transportation for the general public.
The service provider stated that the Section 5311 guidelines require it to track the type of passenger who requests demand-response service and the trip destination. If a senior requests transportation to a senior center or any other human service program destination, the service provider told us it must find another funding source (e.g., Title III-B) for that trip because Section 5311 funding is designated for general rural transportation services and not specialized services. However, a DOT official told us that rural transit providers receiving Section 5311 assistance may transport a senior to a senior center if the service is also made available to the general public.

Through a review of the literature and interviews with experts on senior transportation and aging, representatives of pertinent professional associations and advocacy groups, local officials, and transportation service providers, we identified several obstacles to addressing transportation-disadvantaged seniors’ mobility needs and potential strategies that the federal government, and other levels of government as appropriate, could take to better address those needs and enhance the cost-effectiveness of the services delivered. We grouped these obstacles and strategies around three themes: (1) planning for alternatives to driving as seniors age to extend the lifespan of their mobility, (2) accommodating seniors’ varied mobility needs, and (3) leveraging federal and other government funding to better use limited resources. The suggested strategies for addressing obstacles to senior mobility involve certain trade-offs, and these obstacles, strategies, and trade-offs are discussed in each of the following sections.
As the senior population doubles over the next 25 years, it will become increasingly important to target resources to the areas of greatest need and to know whether current methods and programs are working to reduce transportation-disadvantaged seniors’ unmet needs and improve their mobility and access to services. The 655 local area agencies on aging that are required to gather data to assess seniors’ needs for services could serve as valuable sources of information for federal agencies to use in program planning, evaluation, and resource allocation. However, without guidance from the Department of Health and Human Services’ Administration on Aging on assessing needs for services, including transportation, these local agencies are using a variety of methods—some less comprehensive than others—to assess seniors’ mobility needs. As a result, it is not possible to determine whether current programs are reducing unmet needs and improving transportation-disadvantaged seniors’ mobility and access to services. The Administration on Aging is now embarking on a comprehensive assessment of seniors’ needs for services, an effort that affords a good opportunity for the administration to help state and local agencies conduct and use the results of improved needs assessments. The experiences of other federal agencies, such as the Department of Transportation, that have developed guidance for assessing or demonstrating needs for some of the programs they administer, such as the Job Access and Reverse Commute Program, could be useful in designing guidance for area agencies on aging to assess needs. The Coordinating Council on Access and Mobility is uniquely positioned to provide a forum for such a coordinated effort because all of the federal agencies that administer the key programs we identified are members, and many of these agencies are involved in the council’s efforts to improve mobility for all transportation-disadvantaged populations.
As the agency designated by the Older Americans Act as the lead for gathering information on seniors’ needs for services, and as one of the original members of the council, the Administration on Aging is well-situated to lead a coordinated effort to design guidance for assessing seniors’ needs.

Not having information on alternatives to driving is an obstacle to both seniors and service providers. Without such information, seniors do not plan for a time when they can no longer drive, and providers waste time and money “reinventing the wheel” and become frustrated with federal programs. Some federal efforts, such as the community awareness pilot project implemented by the Department of Transportation’s National Highway Traffic Safety Administration, have already begun to address this obstacle, but the expected growth in the senior population will require broader efforts. As service providers and representatives from the advocacy groups and professional associations we interviewed said, an important role for the federal government would be to provide a central forum for comprehensive information on transportation services, perhaps through a centralized Web site that could enhance seniors’ awareness of available services and improve providers’ ability to serve them. Such a Web site would also be useful for publicizing activities the various federal agencies are undertaking to improve transportation-disadvantaged seniors’ mobility. Although the Administration on Aging (the federal focal point and advocacy agency for seniors) has a Web site with information on transportation services, most of this information is aimed at service providers rather than at seniors or their caregivers. Furthermore, many of the service providers and representatives from advocacy groups and professional organizations we interviewed did not seem to be aware of the presence of such information on the administration’s Web site.
In addition, although seniors are increasingly comfortable using the Internet, there are still many who do not have access to, or are not at ease with, such technology.

To help enhance transportation-disadvantaged seniors’ mobility by improving available information and guidance, we recommend that the Secretary of Health and Human Services direct the Administrator, Administration on Aging, to take the following four actions.

To improve the value and consistency of information obtained from area agencies on aging on the extent to which transportation-disadvantaged seniors’ mobility needs are being met, the Administrator should develop guidance for assessing such needs by doing the following:

- Expand the scope of work in the administration’s planned evaluation of the Grants for Supportive Services and Senior Centers (Title III-B) program to include gathering and analyzing information on (1) definitions and measures of need; (2) the range of methodologies that area agencies on aging use for assessing seniors’ need for services, including transportation, and unmet needs; (3) leading practices identified in the needs assessment methodologies used by area agencies on aging; and (4) the kinds of guidance that area agencies on aging want from the administration and the states to help them perform their required needs assessments.

- Use the results of the administration’s evaluation of the Title III-B program, and input from the Coordinating Council on Access and Mobility of other federal agencies that fund transportation services for seniors, to develop and disseminate guidance to assist state and local agencies on (1) methods of assessing seniors’ mobility needs and (2) the suggested or preferred method for collecting information on gaps in transportation services.
To help address the obstacles that seniors, their caregivers, and service providers face in locating information on available services and promising practices, the Administrator should do the following:

- Take the lead in developing a plan—in consultation with members of the Coordinating Council on Access and Mobility—for publicizing the administration’s Web site and Eldercare Locator Service as central forums for sharing information on senior transportation through workshops, annual meetings, and other outreach opportunities with seniors, their caregivers, and service providers. The plan should include steps for reaching out to seniors and providers who do not use or have access to the Internet to increase awareness of information available in hard copy or other formats.

- Work with members of the Coordinating Council on Access and Mobility to consolidate information about services provided through the participating agencies’ programs and to establish links from their programs’ Web sites to the administration’s transportation Web site to help ensure that other agencies (such as local transit agencies) are aware of, and have access to, such information.

We provided the Departments of Education, Health and Human Services, Labor, Transportation, and Veterans Affairs with draft copies of this report for their review and comment. The Departments of Health and Human Services, Transportation, and Veterans Affairs agreed with the findings and conclusions in the report. The Department of Transportation also provided technical clarifications, which were incorporated as appropriate. The Department of Health and Human Services provided written comments on the draft of this report, which are presented in appendix IV. The department concurred with our recommendations. The Departments of Education and Labor said that they did not have any comments on the draft.
As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees and to the Secretaries and other appropriate officials of the Departments of Education, Health and Human Services, Labor, Transportation, and Veterans Affairs. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at siggerudk@gao.gov or at (202) 512-2834. Additional GAO contacts and staff acknowledgments are listed in appendix V.

The scope of this report is limited to a review of the mobility needs of transportation-disadvantaged seniors, whom we define as those who cannot drive or have limited their driving and who have an income constraint, disability, or medical condition that limits their ability to travel. In addition, because federal, state, and local programs have different age ranges for seniors (e.g., aged 55 and over, aged 65 and over), we do not use the term “senior” in this report to mean any specific age. We obtained statistics presented in the introduction and background of this report about seniors and their mobility from an article published in the American Journal of Public Health, the 2000 Census, the Aging States Project, and the Eldercare Locator Service; because this information is included as background only, we did not assess its reliability. To identify federal programs that address transportation-disadvantaged seniors’ mobility issues, we asked experts who had participated in a senior mobility forum we moderated in July 2003 to identify those federal programs that they consider key for providing transportation services to seniors who cannot drive or have limited their driving.
We verified the resulting list of 15 programs with federal program officials. To assess the extent to which the 15 federal programs address each of the five A’s of senior-friendly transportation (as identified by the Beverly Foundation), we reviewed program legislation and guidance and interviewed federal officials and senior mobility experts. We also reviewed prior GAO reports on the coordination of transportation services for disadvantaged populations and interviewed federal officials, senior mobility experts, and other stakeholders to identify additional ways in which the federal government addresses transportation-disadvantaged seniors’ mobility challenges. To identify data that could indicate the extent to which transportation-disadvantaged seniors’ needs are being met, we reviewed the literature on transportation, disability, and aging found in statistical databases and on agency, academic, and advocacy Web sites. In addition, we asked experts—including academics conducting research in the fields of aging, disability, and transportation; advocacy policy analysts knowledgeable about senior transportation; and federal officials responsible for senior transportation programs—to identify sources of data and relevant studies. We included in our review only nationwide surveys or focus groups (1) that were conducted in multiple states or types of communities, (2) that were conducted after 1995, (3) that had variables that analyzed transportation behavior of individuals aged 65 and older, and (4) that were reported in published or soon-to-be-published journals or reports. Also, we identified federal agency performance indicators and other data collected by federal agencies that have key transportation programs for seniors. For the performance indicators and data sources we identified, we assessed whether they provided meaningful information about the extent to which seniors’ mobility needs are being met.
To assess the reliability of research publications, we reviewed the studies’ overall designs and methodologies, including the selection processes for any participants, response rates, and measures used. A social science analyst at GAO was involved in each review of methodological soundness. Table 4 summarizes the limitations of the data sources we used in assessing the extent to which seniors’ mobility needs were being met. To better understand the variety of methodologies that area agencies on aging (AAA) used to assess seniors’ unmet mobility needs, the reliability of data collected using these methodologies, the barriers to quantifying unmet mobility needs, and the perspectives of local officials on the extent to which seniors’ mobility needs are being met, we conducted semistructured interviews with officials from 15 of the 655 AAAs nationwide and 1 state unit on aging. To select the nonprobability sample of 15 AAAs that we interviewed, we asked the 42 state units on aging that have AAAs in their states (8 states—Alaska, Delaware, Nevada, New Hampshire, North Dakota, Rhode Island, South Dakota, and Wyoming—and the District of Columbia do not have AAAs and instead the state unit on aging is the single planning and service area under the Older Americans Act) to identify 1 urban, 1 rural, and 1 suburban AAA in their state, and for each, to identify the method by which the AAA collects data on seniors’ unmet mobility needs. Of the 42 states that have AAAs, 30 responded to our request. 
From these responses, we selected AAAs to ensure geographic dispersion (West, South, Northeast, and Midwest); representation of AAAs with different population densities (urban, rural, and suburban); representation of different data collection methods (survey, focus group, census, or other); representation of input from community stakeholders (service providers, caregivers, seniors, and professionals); and representation of states with higher-than-average and lower-than-average percentages of seniors in their populations. In addition to selecting 3 AAAs from each of 4 states—1 in the West, 1 in the South, 1 in the Midwest, and 1 in the Northeast—we also selected 3 AAAs in New York State because it had recently completed an audit of transportation for seniors that included an evaluation of AAAs’ procedures for conducting needs assessments. We also interviewed the state unit on aging from 1 of the 8 states that do not have AAAs (South Dakota). Using a semistructured interview, we asked senior-level management and staff who had responsibilities for assessing seniors’ unmet mobility needs at each of the AAAs (and 1 state unit on aging) to provide information on transportation services offered and restrictions to service; on their processes for collecting data on seniors’ unmet mobility needs, including information about how they ensure the reliability of the data they collect and their methodology for reporting and maintaining the data; on their perspectives on the extent to which seniors’ mobility needs are being met; and on the additional data that should be collected, if any. To assess the reliability of the data obtained from AAAs, we reviewed the data for obvious errors in accuracy and completeness and interviewed agency officials knowledgeable about the data. Specifically, we asked whether any tests were conducted to ensure that data were entered accurately and whether the quality of the collected data had been reviewed.
In addition, we asked AAAs to identify limitations of the data and actions taken to correct any limitations. (See table 4 for information about limitations of the AAA data.) To obtain the perspectives of experts on the extent to which needs are being met, possible barriers to determining the extent of unmet mobility needs, and their knowledge of any standards or benchmarks developed for assessing seniors’ unmet mobility needs, we interviewed federal agency officials who have responsibilities for senior transportation programs in the Departments of Education, Health and Human Services, Labor, Transportation, and Veterans Affairs, as well as representatives from research organizations, advocacy organizations, and academic institutions in the fields of aging, disability, and transportation (see table 5). We asked these experts to identify potential sources for data and information on seniors’ mobility needs as well as for their perspectives on the extent to which such needs are being met. To identify practices that can enhance transportation-disadvantaged seniors’ mobility and local service providers that have implemented such practices, we interviewed experts and federal officials and reviewed the literature on senior mobility. We then contacted these local service providers and requested further information about the practices they employed and the funding sources they used to implement the practices. To learn about the practices and their results, obstacles to implementing the practices, and the role of federal programs in supporting them, we conducted semistructured interviews with officials from 10 of the 29 local transportation service providers that responded to our initial request for information.
These 10 providers represented a nonprobability sample, chosen to include a diversity of geographic areas (i.e., 5 were in urban areas and 5 were in nonurban areas, from different regions of the country); types of practices (such as use of technology and coordination); and federal funding sources (to get representation of as many of the 15 key federal programs as possible and to include both providers that used many federal funding sources and those that used only one or two). To determine the extent to which federal programs support practices that enhance transportation-disadvantaged seniors’ mobility, we interviewed federal program officials, senior mobility experts, and local service providers and reviewed pertinent GAO reports. To identify examples of obstacles to addressing transportation-disadvantaged seniors’ mobility needs and strategies the federal government could consider taking to improve the ability of federal programs to meet these seniors’ mobility needs and enhance the cost-effectiveness of the services delivered, we reviewed literature on transportation, disability, and aging and interviewed experts, professional associations, and advocacy groups (see table 6). We also interviewed federal officials and officials from the 16 AAAs and 10 local transportation service providers previously mentioned. We organized the obstacles and strategies identified in the literature and through our interviews into three categories: planning for alternatives to driving as seniors age, accommodating seniors’ varied mobility needs, and addressing federal and other governmental funding constraints. We presented the proposed strategies to federal program officials to obtain their comments on the potential trade-offs associated with implementing them. The trade-offs were included in the discussion on obstacles and suggested strategies. We conducted our work from November 2003 through August 2004 in accordance with generally accepted government auditing standards.
Service restrictions (age, day/hours, distance, number of trips):

- Escort, fixed-route, and demand-responsive transportation is provided to grocery stores, medical appointments, nursing homes for spousal visits, congregate meal sites, senior centers for general nonmeal activities, hospitals for spousal visits, and provider agencies (such as the Social Security Administration).

- Restrictions on when service may be available, depending on distance; medical destinations are prioritized; riders may have to wait for trips other than medical. Services for fixed route and demand response have restrictions on distance—for the most part within county boundaries (except for some medical services).

- Aged 60 and older. Most transportation is limited to 5 days per week, 8 a.m. to 5 p.m. One county is more rural and limits transportation to three cities. Varies by county (senior centers cannot accommodate everyone due to limited funds); providers tend to prioritize trips (medical appointments/pharmacy and food shopping are higher priorities).

- Most providers restrict service at least within the county. Number of trips depends only on scheduling and availability (most providers operate on a first-come, first-served basis). Fixed route to senior centers. Assisted door to door for trips to doctor appointments, grocery stores, and recreational activities (funded by seniors).

Type of practice (as described by the providers and in the literature):

- Coordinates transportation service with a “zero trip denial” policy and uses dedicated funding through a state lottery program.

- Provides fixed-route service using dedicated funding from the state lottery program, targets marketing efforts to increase senior ridership, offers a driver sensitivity training program, and uses senior volunteers to promote and teach seniors how to ride fixed-route service through the “bus-buddy” program.
- Provides free rides for seniors throughout an eight-rural-county service area with a 48-hour call ahead, using volunteers from the Retired Senior Volunteer Program. Provides senior volunteer companions for homebound seniors through the Senior Companions Program.

- Provides demand-response transportation service with volunteer drivers to transport seniors to medical appointments, grocery stores, pharmacies, senior centers, or other errands.

- Provides free fixed-route service to seniors. Also provides free transportation to groups of 20 or more seniors during off-peak hours (late evening or weekends) to destinations within the service area (e.g., Senior Games, Senior Proms, Senior Nursing Home Games, Retired Senior Service Volunteer Program luncheons, and AARP events).

- Implemented a volunteer-based transit ambassador program that allows a volunteer who knows the local transit systems to assist and provide information to other passengers or people using public transit for the first time. The ambassador program is available to all passengers; however, seniors often take advantage of the program to learn how to ride fixed-route services in Napa, CA.

- Provides specialized coordinated transportation services for medically fragile, disabled, and elderly riders to locations such as medical offices, hospitals, and other key destinations.

- Coordinates transportation services with consumer advocates, social service agencies, government offices, and transportation providers to best meet their clients’ needs. Secures transportation funding, takes telephone calls, schedules and assigns trips with subcontractors, provides rides, and reimburses providers.

- Implemented a medical advocacy program that uses local volunteers to assist elders with medical transportation and advocacy. The program is targeted to all elders and spouses and to working and long-distance caregivers.
- Implemented a mileage reimbursement program through which seniors find volunteer drivers who use their private vehicles to transport seniors to medical appointments, grocery shopping, church, or other recreational activities. The program was modeled after the Transportation Reimbursement and Information Program, which is listed below.

- Coordinates with medical facility staff to schedule senior medical appointments to match transportation availability and is involved in business enterprises with others in the community to generate additional program revenue. The implementation of a computerized mapping system to schedule same-day services is slated for the near future.

- Provides low- or no-cost transportation to low-income seniors and persons with disabilities located in rural communities to healthcare services, provides all-day Sunday service for seniors to go to church and other activities, and offers a volunteer driver program through which volunteers who use their own vehicles to transport seniors are reimbursed for mileage.

- Implemented a travel-training program through which volunteers teach seniors how to use public transportation.

- Offers a range of demand-responsive services (door-to-door, door-through-door, and hands-on assistance) to a broad spectrum of older riders using automobiles driven by both paid staff and volunteer drivers. Operates exclusively on a combination of fares and donations and does not depend on public subsidies. Customers (seniors) become “members” of the Independent Transportation Network and prepay (through a variety of payment plans) into their own account in advance of travel.

- Provides demand-response transportation services to seniors for grocery shopping, medical appointments, banking, daily nutrition, senior center activities, and other general travel trips.

- Provides transportation and brokerage services by coordinating with other nonprofit agencies.
Services are customized to meet the needs of seniors, using wheelchair-accessible vehicles and providing shuttle services to rural areas of the county. Contracts with a private, regional grocery chain to supplement its fund-raising efforts; the grocery store contributes to Medical Motors in exchange for Medical Motors transporting seniors to the grocery store.

- Provides transit services to the general public and door-through-door, one-on-one services to special-needs populations in a multicounty region through a coordinated system that is also consumer friendly and flexible to meet the needs of the community. Targets a travel-training program to the senior population to encourage seniors to use the public transit system by teaching (one-on-one or through groups) and showing seniors how to use the system. Helped establish the Strides Web site, designed as a distribution center for other public transportation service providers as well as a referral service for seniors to learn about transit services in the San Diego area.

- Provides transportation service for the general public, prioritizing senior citizens and persons with disabilities within 87 rural counties in the state of Missouri. Uses volunteers to fulfill a number of functions, such as dispatching calls to drivers, fund-raising, and serving as liaisons to the community.

- Provides flexible transportation services for trips to senior centers, shopping, banking, and medical appointments. Drivers use pagers for efficient pick-up service. Night and weekend trips are available.

- Transports older adults and persons with disabilities to medical facilities, grocery stores, meal sites, and adult day centers and for other personal needs.

- Uses volunteers to provide door-through-door medical transportation services to seniors. Services are free to seniors.
- Provides a variety of services, including demand-response, curb-to-curb transportation service offered to the general public; a circular shuttle route serving the entire community that is also senior friendly; a “family and friends” mileage reimbursement program; and a comprehensive, one-on-one training program developed to teach seniors how to use their community transit alternatives. Coordinates its services with the local transit authority and taxicab services.

- Provides a driver-training program that emphasizes safety and customer service. Uses brightly decorated vehicles to attract senior ridership.

- Helps provide vehicles and funding to local communities in the service area. Local communities that receive the vehicles and funding design and operate services independently according to local needs.

- Provides coordinated demand-response transportation services using computerized scheduling. The computerized scheduling software will allow accurate and on-time scheduling through the use of Global Positioning System technology that tracks the location of vehicles.

- Provides senior transportation services 7 days a week and serves approximately 40 designated senior nutrition and social center sites. Also implemented a community bus program that circulates within a specific community to encompass shopping areas, senior residences, and senior day programs.

- Reimburses volunteer drivers to transport individuals where no transit service exists or when the individual is too frail to use other transportation.

- Operates a demand-response service for seniors who need transportation services to medical facilities. Also works with local senior centers to provide transportation services.

- Provides free transportation using volunteers, who use their private vehicles to transport seniors to medical appointments, shopping, and errands.
In addition to the individuals above, Bert Japikse, Jessica Lucas-Judy, Kristen Sullivan Massey, Sara Ann Moessbauer, Elizabeth Roberto, and Maria Romero made key contributions to this report.

The U.S. population is aging, and access to transportation, via automobile or other modes, is critical to helping individuals remain independent as they age. Various federal programs provide funding for transportation services for "transportation-disadvantaged" seniors--those who cannot drive or have limited their driving and who have an income constraint, disability, or medical condition that limits their ability to travel. For those transportation-disadvantaged seniors, GAO was asked to identify (1) federal programs that address their mobility issues, (2) the extent to which these programs meet their mobility needs, (3) program practices that enhance their mobility and the cost-effectiveness of service delivery, and (4) obstacles to addressing their mobility needs and strategies for overcoming those obstacles.
Five federal departments--including the Department of Health and Human Services (HHS)--administer 15 programs that are key to addressing the mobility issues of transportation-disadvantaged seniors. These programs help make transportation available, affordable, and accessible to seniors, such as by providing transit passes or reimbursement for mileage. National data indicate that some types of needs are not being met, including those for trips (1) to multiple destinations or for purposes that involve carrying packages; (2) to life-enhancing activities, such as cultural events; and (3) in rural and suburban areas. However, there are limited data available to assess the extent of unmet needs. HHS's Administration on Aging is required by law to provide guidance to states on how to assess seniors' need for services, but officials said the administration has not done so because it has focused on providing other types of guidance. As a result, the local agencies on aging we interviewed--which are ultimately responsible for performing such needs assessments--used inconsistent methods to assess seniors' mobility needs. The Administration on Aging plans to conduct an evaluation of one of its major programs and thus has an opportunity to improve its understanding of seniors' needs and provide guidance to local agencies on performing needs assessments. Local transportation service providers have implemented a variety of practices--including increasing service efficiency, improving customer service, and leveraging available funds--that enhance mobility and the cost-effective delivery of services. Federal programs provide funding and some technical assistance for these practices, but several service providers we interviewed said that the implementation of such practices was impeded by limited federal guidance and information on successful practices. 
Senior mobility experts and stakeholders identified several obstacles to addressing transportation-disadvantaged seniors' mobility needs, potential strategies that federal and other government entities can consider taking to better meet these needs, and trade-offs associated with those strategies.
After major disasters, various federal agencies provide a range of assistance to individual victims; state, territorial, and local governments; and nongovernmental entities. This assistance is administered through various federal programs, and is generally made available after the President issues a disaster declaration under the authority of the Robert T. Stafford Disaster Relief and Emergency Assistance Act (the Stafford Act). While the federal government provides significant financial assistance after major disasters, the federal role is primarily to assist state and local governments, which have the central role in recovery efforts. State and local governments have the main responsibility of applying for, receiving, and implementing federal assistance. Further, they make decisions about what priorities and projects the community will undertake for recovery. The Stafford Act also specifies that federal agencies providing financial assistance after a major disaster cannot provide assistance to an individual for the same loss for which another federal program or private insurance company has provided compensation. Therefore, homeowners who sustained damage from the 2005 Gulf Coast hurricanes must first seek assistance from their homeowner insurance or National Flood Insurance Program policies. Homeowners in the Gulf Coast area had varying levels of hazard and flood insurance coverage. Of the 331,070 homeowner units that sustained minor, major, or severe damage in Louisiana, 126,007 (38 percent) had hazard insurance only, and 118,928 (36 percent) had both hazard and flood insurance. Of the 157,914 homeowner units that sustained damage in Mississippi, 94,792 (60 percent) had hazard insurance only, and 11,481 (7 percent) had hazard and flood insurance.
Insurance industry estimates indicate that homeowners in Louisiana and Mississippi received average payments of nearly $16,000 in each state for personal property claims related to Hurricane Katrina and average payments of $13,000 and $3,500, respectively, for personal property claims related to Hurricane Rita. For both Hurricanes Katrina and Rita, our analysis shows that $9.7 billion was paid out through the National Flood Insurance Program to homeowners in Louisiana and Mississippi, with a median claim payment of $74,000. Less is known about the extent to which rental property owners had hazard and flood insurance coverage and the amounts paid out in claims after the 2005 hurricanes because data are not readily available. In addition, data are not available to determine the extent to which insurance settlements made to both homeowners and rental property owners addressed their damages. Through executive orders, the authority to provide disaster relief assistance has been delegated to FEMA. FEMA provides various forms of temporary housing assistance, such as direct financial assistance or a temporary housing unit after a disaster, typically for a period no longer than 18 months, as directed by the Stafford Act. HUD is the recognized federal authority for housing assistance (including permanent housing) and has provided assistance such as rental housing vouchers and grants for federally declared major disasters in the past and prior to FEMA’s creation in 1979. Over the years, Congress has provided several mechanisms for disaster assistance, including HUD’s CDBG program funds for recovery. After the 2005 hurricanes, HUD was responsible for providing assistance to clients that it had already been assisting and for providing CDBG funds. Through the Small Business Act as amended, SBA has the authority to provide home and business loans to repair or replace damaged or destroyed real estate not fully covered by insurance. 
According to FEMA’s National Disaster Housing Strategy, which was issued in January 2009, throughout the Hurricane Katrina response, responsibilities and roles that had seemed clear in previous events became less clear as FEMA and other federal departments and agencies provided increasing levels of support to state and local officials. For example, FEMA typically does not provide housing assistance for more than 18 months, and generally does not lead efforts to coordinate and deliver permanent housing assistance. However, in response to the 2005 Gulf Coast hurricanes, FEMA led the coordination with states and local communities and implemented many of the housing options, including permanent housing. According to a congressional study on deficiencies in post-disaster housing assistance, there was a lack of clarity in the long-term post-disaster housing-related responsibilities of HUD and FEMA, and concerns have been raised regarding HUD’s limited housing role after Hurricane Katrina and its role in future disasters. The National Disaster Housing Strategy states that HUD is uniquely positioned to assist those affected by a disaster and will be given lead responsibility for permanent housing when such assistance is needed in the future. After the 2005 Gulf Coast hurricanes, a variety of federal programs was made available to homeowners and rental property owners for the repair or replacement of permanent housing (see table 1). Four federal agencies have responsibility for these programs: DHS, HUD, SBA, and Treasury. DHS administers three different grant programs that can be used to repair or replace disaster-damaged housing and mitigate damages after disasters. HUD provides funding for two grant programs, including the CDBG program, through which states can develop post-disaster programs that benefit both homeowners and renters.
The CDBG program has often been relied upon as a convenient source of flexible funding that can be applied to disaster situations to help states rebuild their communities. The SBA provides two different types of loans for homeowners and owners of residential rental properties. Both loan products can fund the repair or replacement of disaster-damaged properties. Finally, Treasury has responsibility for three programs that provide additional tax incentives to the states affected by Hurricanes Katrina and Rita to encourage both housing and economic development. Congress enacted the Gulf Opportunity Zone Act of 2005 to provide tax incentives to individuals and businesses in certain presidentially declared disaster areas. In contrast with grant programs, where funds come directly from the government, GO Zone incentives provide investors with relief from certain tax liabilities. Affordable housing challenges existed for both homeowners and renters in Louisiana and Mississippi before the 2005 Gulf Coast hurricanes, particularly in the areas most damaged by these storms. According to HUD, the generally accepted definition of “affordable” is for a household to pay no more than 30 percent of its income on housing. Families who pay more than 30 percent of their annual income for housing are considered cost burdened. Like renters nationwide, renters in Louisiana and Mississippi were generally more cost burdened prior to Hurricanes Katrina and Rita than homeowners. For example, according to the 2004 American Community Survey, for the areas most damaged by the hurricanes, in the New Orleans metropolitan area (St. Charles, Orleans, St. Tammany, St. Bernard, and Plaquemines parishes) 48 percent of renters and 24 percent of homeowners spent 30 percent or more of their income on housing costs, compared with 50 percent of renters and 21 percent of homeowners statewide.
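HUD's 30-percent affordability threshold described above can be expressed as a simple check. The sketch below is illustrative only; the function name and the example household are assumptions, not figures from the report.

```python
# A minimal sketch (not from the report) of HUD's cost-burden rule: a
# household is considered cost burdened when it pays more than 30 percent
# of its income for housing.

def is_cost_burdened(annual_income: float, annual_housing_cost: float) -> bool:
    """True when housing costs exceed 30 percent of annual income."""
    return annual_housing_cost > 0.30 * annual_income

# Hypothetical example: a renter earning $30,000 a year paying $850 a month.
annual_rent = 850 * 12  # $10,200, or 34 percent of income
print(is_cost_burdened(30_000, annual_rent))  # → True
```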
In the Gulfport-Biloxi-Pascagoula metropolitan area in Mississippi—which includes Hancock, Harrison, and Jackson counties—48 percent of renters and 21 percent of homeowners spent 30 percent or more of their income on housing costs compared to 50 percent of the renters and 24 percent of homeowners statewide (see fig. 1). Hurricanes Katrina and Rita increased the need for affordable housing in both Louisiana and Mississippi. For example, of the 82,000 rental units that were damaged or destroyed in Louisiana, about 54,000 were affordable to individuals earning less than 80 percent of the area median income, according to state officials. Similarly, in Mississippi nearly one-fourth of the 25,000 affordable rental units in three Mississippi coastal counties were damaged, with Hancock and Harrison counties sustaining the most damage to their affordable rental housing stock. After Hurricanes Katrina and Rita, federal assistance for the repair or replacement of permanent housing was made available to homeowners and rental property owners in three forms: grants, loans, and tax incentives. The largest source of assistance was the CDBG program. The majority of federal assistance was administered by the states, which designed the programs and used their discretion to prioritize beneficiaries. These programs made assistance available to applicants at different times, depending on the structure and requirements of each program. Ten federal programs that we reviewed provided grants, loans, and tax incentives after the 2005 Gulf Coast hurricanes for the repair or replacement of housing. Grants were made available through five different programs to assist with the repair of disaster-damaged housing; to fund hazard mitigation projects, such as the elevation of housing; and to repair or replace public housing or other housing owned by a public housing agency (PHA). 
Loans were made available through three programs (including one program that also provides grants) for the repair of disaster-damaged housing for homeowners and renters. Finally, three different tax incentive programs were made available to encourage the redevelopment of housing in the GO Zone for both homeowners and renters. While all of these programs could potentially be used to repair, replace, or develop housing structures, some could also be used for other activities, such as economic development. Of the programs we reviewed, two were available to homeowners only—the Individuals and Households Program (IHP) Repair or Replacement Assistance and the Home Disaster Loan Program (see table 2). Several programs could have potentially served either homeowners or renters, including the CDBG program, the Hazard Mitigation Grant Program (HMGP), and some tax incentives. Finally, four of the programs we reviewed could have assisted renters by funding or providing incentives for the repair or replacement of rental housing. Some programs made assistance available for housing-related activities only. For example, Home Disaster Loans, IHP Repair or Replacement Assistance, and GO Zone Low-Income Housing Tax Credits (LIHTC) could only be used for housing-related activities. However, most of the programs we reviewed could be used for other activities as well. For example, CDBG funds could be used flexibly by states and were made available for economic development, infrastructure, historic preservation, and demolition. Similarly, GO Zone Private Activity Bonds could be used for the development of private facilities, such as hotels and retail facilities. Until recently, vouchers were made available to disaster victims to subsidize rents in existing housing as a temporary source of housing assistance.
According to some housing experts, vouchers, specifically Housing Choice Vouchers, which are permanent, should have been provided to disaster victims, especially low-income renters, more quickly after the Gulf Coast hurricanes. Congress first made Housing Choice Vouchers available for families affected by the hurricanes in September 2008. State agencies administered the majority of federal assistance available for the repair or replacement of permanent housing, including nearly $19 billion in CDBG disaster relief recovery funds, over $13 billion in tax incentives, and nearly $2 billion in HMGP funds (see fig. 2). Louisiana and Mississippi created new state offices to design and administer the programs funded with the supplemental CDBG funds, including housing programs. In Louisiana, the Louisiana Recovery Authority was created and charged with establishing spending priorities and policies related to the state’s use of the supplemental CDBG funds. In addition, a Disaster Recovery Unit was created within the state’s Office of Community Development, which has managed the state’s CDBG program over the past two decades, to administer the funds. In Mississippi, the Governor’s Office of Recovery and Renewal was established and given primary responsibility for designing housing recovery programs funded with supplemental CDBG funds. The Mississippi Development Authority’s Disaster Recovery Division was responsible for managing Mississippi’s share of CDBG disaster relief funds. State agencies were responsible for administering two of the three tax incentive programs that we reviewed. For the GO Zone Private Activity Bond and LIHTC Programs, states were authorized to allocate additional tax-exempt bond financing and low-income housing tax credits. Each eligible state was responsible for setting up an application process and selecting qualified projects to receive allocations up to each state’s allocation authority under the GO Zone Act. 
As we previously reported, Louisiana and Mississippi generally allocated the GO Zone bond provisions on a first-come, first-served basis, and did not consistently target the allocations to assist recovery in the most damaged areas, although Louisiana did set aside some of its allocation authority for the most damaged parishes. In contrast, in allocating funds for the GO Zone LIHTC program, the state housing finance agencies in Louisiana and Mississippi gave priority to the GO Zone counties with the most hurricane-related damage. State agencies also administered FEMA’s HMGP. According to FEMA officials, state agencies in Louisiana and Mississippi accepted applications from local jurisdictions for the funds, and forwarded applications to FEMA for review and funding. State mitigation plans document the state’s priorities for the use of HMGP funds, and states are required to update an administrative plan for implementing HMGP funds after every disaster. Plans for the use of HMGP funds should include mitigation activities that are cost effective, environmentally sound, and either statewide or property specific. Federal agencies directly administered six of the sources of post-disaster housing assistance we reviewed, which accounted for approximately $5 billion in available funds and $1 billion in available tax incentives (see fig. 2). For example, the Community Development Financial Institutions Fund within Treasury administers the New Markets Tax Credit program, which competitively allocated tax credit authority—the amount of investment for which investors can claim a 39 percent tax credit over 7 years—to Community Development Entities. FEMA administers both the IHP Repair or Replacement Assistance and the Public Assistance for Permanent Work programs. For the IHP Repair or Replacement Assistance Program, FEMA reviewed applications and awarded funds to homeowners for losses that were not covered by insurance. 
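The New Markets Tax Credit arithmetic mentioned above (a 39 percent credit claimed over 7 years) can be sketched as follows. The year-by-year split (5 percent in years 1-3, 6 percent in years 4-7) is the statutory schedule, not a figure from this report.

```python
# Sketch of the New Markets Tax Credit arithmetic: investors claim a credit
# equal to 39 percent of the qualified investment over 7 years. The annual
# rates below are the statutory schedule (an assumption, not from the report).

NMTC_SCHEDULE = [0.05] * 3 + [0.06] * 4  # seven annual rates summing to 0.39

def total_credit(investment: float) -> float:
    """Total credit claimed over the 7-year period."""
    return sum(rate * investment for rate in NMTC_SCHEDULE)

# A $1 million qualified investment yields $390,000 in credits.
print(round(total_credit(1_000_000), 2))  # → 390000.0
```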
Through the Public Assistance for Permanent Work Program, FEMA reviewed applications for assistance from PHAs and could award assistance to PHAs for damages to PHA-owned rental housing that was not funded with HUD funds (i.e., public housing could not be funded). HUD was responsible for awarding Capital Fund/Emergency Natural Disaster funds to PHAs on a first-come, first-served basis for the repair or replacement of a public housing development damaged as a result of a natural disaster. PHAs that experienced an emergency situation or a natural disaster were eligible to apply for and receive funds from the reserve provided that they complied with certain requirements. For example, according to HUD’s Grant Handbook, funds provided because of a disaster were only available to the extent that needed repairs were in excess of payments from insurance claims and other federal sources, such as FEMA funds for disaster-related emergency work (but not permanent work). SBA was responsible for administering the Home Disaster and Physical Disaster Business Loan Programs. SBA reviewed applications for assistance and provided loans to eligible applicants. (See app. V for additional information about these programs.) Congress provided states broad discretion and flexibility in deciding how to allocate CDBG funds and for what purposes. The CDBG program is the federal government’s most widely available source of financial assistance to support state- and local government-directed neighborhood revitalization, housing rehabilitation, and economic development activities. Congress provided states with supplemental CDBG funding to help them recover from the Gulf Coast hurricanes, beginning in December 2005. To provide the states additional flexibility in delivering disaster relief, many of the statutory and regulatory provisions governing the use of the funds were waived or modified. 
HUD issued guidance in February 2006 stating that the funds should be used toward unmet housing needs in areas of concentrated distress. In addition, in June 2006 Congress required states to use at least $1 billion for the repair, rehabilitation, and reconstruction of affordable rental housing, including public and other HUD-assisted housing. This requirement was intended to ensure that states were not only investing in homeownership but also in the housing needs of all affected residents. To make CDBG funds available for the repair or replacement of permanent housing, both Louisiana and Mississippi created new programs for homeowners and small rental property owners (owners of rental properties with up to four units). As we recently reported, Louisiana created the Road Home Homeowner Program, through which funds were made available to homeowners to rebuild homes on their own property, sell their properties and relocate within the state, or sell their homes and relocate outside the state. Mississippi created the Homeowner Assistance Program for homeowners that sustained flood damage. The first round of funding was limited to homeowners that did not have flood insurance because they were located outside of a federally designated flood zone. (See app. II for additional information about these programs.) Louisiana also created the Road Home Small Rental Property Program for owners of small rental properties in the most damaged parishes and made forgivable loans available in two funding rounds. Property owners had to independently finance needed repairs and rent out their units to income-eligible tenants. Once the units were ready for occupancy, the state would conduct inspections and authorize the disbursement of the loan. In December 2008 the state announced an additional option for program participants, designed to provide up-front financing.
According to program administrators, this option was created to increase the production of rental housing with CDBG funds and to accelerate the distribution of funds to small rental property owners. According to program administrators, as of November 2009, 1,024 property owners had agreed to participate in this option. Mississippi created the Small Rental Assistance Program for owners of small rental properties in four counties (Hancock, Harrison, Jackson, and Pearl River) and made forgivable loans available in two funding rounds. The program was designed to offer four types of assistance: (1) rental income subsidy, (2) repair or reconstruction of a Katrina-damaged property, (3) reconstruction or conversion reimbursement of a non-Katrina-damaged property, or (4) new construction reimbursement. The state of Mississippi generally disbursed loans in two installments, half when the property owner provided a building permit and the remainder when the property owner provided a certificate of occupancy. In addition, Mississippi used CDBG funds to address the need for workforce housing and public housing. Mississippi created a Long Term Workforce Housing Program to provide grants and loans to local units of government, nonprofits, and for-profit organizations to provide long-term affordable housing in Hancock, Harrison, Jackson, and Pearl River counties. This program was designed to benefit households that earned 120 percent of area median income or less. The program could be used to develop or repair housing for homeowners or renters. The state also designated CDBG funds for the repair or replacement of public housing units that were damaged by Hurricane Katrina. Using these funds, the state created a Public Housing Program to make grants available to five PHAs that sustained damage.
Generally, the federal and state administrators of programs other than CDBG that we reviewed used existing processes to make post-disaster housing assistance available to homeowners and renters. For example, FEMA used its existing, but streamlined, processes to make IHP Repair or Replacement Assistance and Public Assistance for Permanent Work available to eligible homeowners and PHAs, respectively. FEMA accepted applications for IHP Repair or Replacement Assistance via phone and the Internet. Applicants who were awarded housing assistance, but who had remaining unmet housing needs because damages exceeded the maximum award, were referred to SBA for a disaster home loan application. Similarly, SBA used its existing processes to make Home Disaster and Physical Disaster Business Loans available to eligible homeowners. Consistent with its existing processes, SBA made loan applications available to applicants after they registered with FEMA and used its existing loan underwriting criteria to evaluate loan applications. The administrators of these programs did not create new programs to make post-disaster housing assistance available. As we have previously reported, SBA encountered challenges processing the large volume of applications after Hurricanes Katrina and Rita but has since taken steps to more effectively process large increases in application volume. Similarly, state agencies generally used existing procedures to award GO Zone LIHTCs and HMGP funds. Specifically, in both Louisiana and Mississippi, the state housing finance agencies announced the availability of the additional credits through qualified allocation plans, reviewed and scored the applications received, and awarded the credits to the highest scoring applicants. Likewise, the state administrators of HMGP funds in Louisiana stated they did not make changes to their normal application processes after Hurricanes Katrina and Rita. 
In contrast, Mississippi changed its HMGP application process by developing a Web-based system to accept applications. This system allowed applicants to submit a pre-application for HMGP funds online. Federal disaster assistance is generally authorized after a disaster declaration. Thus after Hurricanes Katrina and Rita were declared as disasters, HMGP funds, IHP Repair or Replacement Assistance, and Public Assistance for Permanent Work were made available from FEMA and Home Disaster Loans and Physical Disaster Business Loans were made available from SBA. HUD’s Capital Fund Emergency/Natural Disaster Funding does not require a presidential disaster declaration to become available; this program was available to PHAs that were affected by the hurricanes. CDBG-funded assistance for homeowners and small rental property owners did not become available until HUD accepted the program designs. Each state had to submit an Action Plan to HUD detailing the plans for the uses of its supplemental CDBG funds, and each had to submit amendments to these plans for substantial changes. HUD accepted Louisiana’s Action Plan for the Road Home Homeowner and Small Rental Assistance Programs in May 2006, and the state began accepting applications for the homeowner program in August 2006 and the small rental program in January 2007 (see fig. 3). According to an administrator of Louisiana’s CDBG-funded programs, the homeowner program was initiated first because homeowners lost real property, homeowners are less transient than renters, and the state perceived rental property owners as having other federal resources for recovery, such as tax credits and SBA loans. HUD accepted Mississippi’s Action Plan for the Homeowner Assistance Program in April 2006 and the Small Rental Assistance Program in July 2007. The state began accepting applications for the homeowner and small rental programs in April 2006 and September 2007, respectively.
According to officials from the Mississippi Governor’s office, one of the reasons that the homeowner program was implemented first was because there were more homeowners than renters in the coastal counties. In addition, according to state officials, it was possible for the state to implement a compensation program for homeowners more quickly, because funds could be provided directly to homeowners without the requirement for environmental review assessments. In contrast, the officials stated that programs that are established as construction programs, like their Small Rental Assistance Program, trigger environmental review assessments, which take time to address. Delays in the availability of CDBG funding for homeowners and renters will be discussed later in this report. Federal programs we reviewed addressed the repair and replacement needs of more homeowner units than rental units. In both states, more homeowner units were damaged than rental units, but the proportional damage to the rental stock was generally greater. A comparison of the number of units damaged to the number of units funded shows that federal assistance addressed the repair and replacement needs of about 62 percent of damaged homeowner units and about 18 percent of damaged rental units. The difference in the level of assistance for homeowner and rental units was largely due to states’ decisions to allocate most of their CDBG funds to programs for homeowners. States used their broad discretion under CDBG to decide what proportion of the funds went to homeowners and rental property owners. Of the rental units that have received funding, a limited number have been completed, and data are generally not available for the completion status of homeowner units. As a result of Hurricanes Katrina and Rita, an estimated 489,000 homeowner and 247,000 rental units sustained minor, major, or severe damage in Louisiana and Mississippi. 
Specifically, in Louisiana an estimated 331,070 homeowner units and 184,179 rental units sustained damage, and in Mississippi 157,914 homeowner units and 62,470 rental units sustained damage. Most of the damage was concentrated in specific areas in Louisiana and Mississippi. In eight parishes in Louisiana (Calcasieu, Cameron, Jefferson, Orleans, Plaquemines, St. Bernard, St. Tammany, and Vermilion), an estimated 220,225 homeowner units and 139,249 rental units sustained damage. In Mississippi, an estimated 60,344 homeowner units and 33,964 rental units were damaged in three counties (Hancock, Harrison, and Jackson) (see fig. 4). A greater number of homeowner units were damaged compared to rental units in both Louisiana and Mississippi. However, in Louisiana a greater proportion of rental units were damaged. For example, in Louisiana, 35 percent of the rental housing stock sustained damage, compared with 29 percent of the homeowner stock. In Mississippi, the proportions of damaged homeowner and rental units were more similar: 22 percent of the rental housing stock damaged, compared to 21 percent of the homeowner stock. In the three counties in Mississippi that sustained the most damage, 80 percent of the rental stock and 64 percent of the homeowner stock sustained damage. In the eight Louisiana parishes with the most damage, 66 percent of the rental stock and 63 percent of the homeowner stock was damaged (see fig. 5). While available damage estimates indicate the need for housing in Louisiana and Mississippi in the immediate period after Hurricanes Katrina and Rita, the housing markets in these states have changed over the past several years as a result of displacement and other demographic changes.
For example, a recent study noted that while the New Orleans metropolitan area is home to a population equal to about 90 percent of the pre-Katrina households receiving mail, school enrollment across the metropolitan area has slowed, suggesting that the population that has returned is different from the pre-storm population. A November 2008 HUD report states that while the demand for affordable rental units is less than before Katrina, it is difficult to assess how much this population will grow, and therefore difficult to determine the demand for affordable rental housing.

For the programs we reviewed for which data were available, federal assistance was provided to repair or replace more homeowner units than rental units in Louisiana and Mississippi (see table 3). Specifically, federal programs provided assistance to about 303,000 homeowner units compared to over 43,000 rental units. CDBG, IHP Repair or Replacement Assistance, and the Home Disaster Loan Program were the key federal sources of assistance for homeowners. Homeowners can be eligible to receive assistance from multiple federal programs, and we identified about 115,000 units that received funding from two or more of the programs we reviewed. Of the programs that provided assistance to rental property owners, GO Zone LIHTC funded the largest number of rental units (about 23,000).

When the estimated number of funded units is compared with the estimated number of damaged units in Louisiana and Mississippi, we found that federal programs funded about 62 percent of the estimated number of damaged homeowner units and about 18 percent of the estimated number of damaged rental units in both states combined (see fig. 6). In Louisiana, federal assistance from the programs we reviewed funded about 65 percent of the damaged homeowner units and 15 percent of the damaged rental units, while in Mississippi federal assistance funded about 56 percent and 26 percent of homeowner and rental units, respectively.
While the Housing Choice Voucher Program was not in our scope, this program is the federal government’s major program for assisting very low-income families, the elderly, and the disabled to afford decent, safe, and sanitary housing in the private market. For fiscal years 2008 through 2009, Congress appropriated $185 million to fund various types of vouchers for areas impacted by Hurricanes Katrina and Rita and for families that were assisted under the Disaster Housing Assistance Program. These funds would support over 20,000 vouchers.

The difference in the levels of assistance to homeowner and rental units is reflected in the amounts of funding awarded. Although the proportional damage to rental units was greater, more federal dollars were awarded for homeowner units through the programs we reviewed. Specifically, federal and state agencies awarded around $13 billion for homeowner units and around $1.8 billion for rental units, with the majority of funding awarded through the CDBG program (see table 4). Of the CDBG funds that Louisiana and Mississippi awarded for housing-related activities, the majority was awarded through homeowner programs. According to state officials, both states created homeowner programs first because more homeowner units were damaged than rental units. In addition, Louisiana officials stated that CDBG funds were not intended to assist rental property owners with their business investments, and the state did not want to duplicate FEMA’s efforts in assisting displaced renters. Mississippi officials stated that many homeowners that sustained flood damage were not located in a flood zone because the federal government did not accurately identify flood zones, and as a result, these homeowners did not have flood insurance through the National Flood Insurance Program.
Mississippi officials further stated that by providing CDBG funds to these homeowners (excluding those sustaining wind damage), the state helped the homeowners with the greatest need.

A limited number of rental units have been completed through the programs we reviewed that provided assistance for the repair or replacement of rental housing. Progress in the completion of CDBG-funded rental units has been limited. For example, through the CDBG-funded small rental housing programs, 14 percent of the 10,115 rental units funded in Louisiana and 25 percent of the 4,242 rental units funded in Mississippi were completed as of July and August 2009, respectively. Progress in the completion of rental units funded with GO Zone LIHTCs has also been limited. For example, approximately 36 percent of the 13,888 rental units funded in Louisiana and 51 percent of the 9,252 units funded in Mississippi were complete as of June 2009. Units funded with GO Zone LIHTCs are required to be placed in service by January 2011; otherwise, the credits cannot be used. Information on the extent to which rental units were funded and completed through the other programs we reviewed can be found in appendix VII.

The construction status of individual homeowner units was generally not readily available for the programs we reviewed. For example, according to administrators of the CDBG-funded homeowner programs in Louisiana and Mississippi, the states are not required to track the completion status of funded units because the programs provide compensation grants. Similarly, administrators of the Home Disaster Loan, IHP Repair or Replacement Assistance, and GO Zone Tax-Exempt Private Activity Bond programs do not track the completion status of homeowner units.

Both homeowners and rental property owners have faced challenges in applying for and using federal assistance. These challenges include gaps in financing needed to complete repairs and delays in the availability of funds.
Homeowners and rental property owners have also faced adverse economic conditions, including high insurance premiums and construction costs and tightening credit markets. These challenges have contributed to the slow pace of recovery in the Gulf Coast region. Options for addressing these challenges include changing the allocation of assistance between homeowners and rental property owners, improving guidance intended to help states in designing programs, and reconsidering which programs are used to deliver permanent-housing assistance after a disaster.

Some homeowners in Louisiana and Mississippi did not receive enough funding from insurance and federal assistance to complete repairs to their homes after the 2005 Gulf Coast hurricanes, and some were ineligible for key sources of federal assistance. A review of Louisiana’s Road Home Homeowner Program found that some homeowners received grant amounts insufficient to repair the damage caused by Hurricanes Katrina and Rita and were challenged by the resulting funding gaps. These gaps in funding may have resulted from Louisiana’s decision to use pre-storm home values to determine grant amounts (as opposed to using the cost of repairs), incorrect grant calculations, and increasing construction and insurance costs, according to various researchers. In Mississippi, the state reduced the amount available under its homeowner program when it decided to dedicate $600 million from this program to a Port Restoration Program. While this change appears to reduce the amount of CDBG funds available for homeowners, according to state officials, all eligible applications for the homeowner program were funded. Program administrators in Louisiana stated that their CDBG-funded homeowner program was not intended to make homeowners “whole.” Other program requirements likely resulted in challenges to homeowners in obtaining program funds.
For example, in Mississippi, homeowners with wind-only damage were ineligible for the first phase of the Homeowner Assistance Program. In addition, for the first funding round, only homeowners that sustained flood damage (in selected counties) were eligible for Mississippi’s Homeowner Assistance Program. According to state officials, this program was intended to assist homeowners that lacked flood insurance because they were not in a federally designated flood zone. State officials stated they targeted homeowners outside of a flood zone due to concerns about the reliability of FEMA’s flood zone designations. In addition, both states required homeowners to prove they had clear property titles. According to state officials, researchers, and organizations that worked with disaster victims, many Louisiana and Mississippi homeowners with damaged properties faced considerable difficulty in establishing clear title because their properties had been informally passed down through generations.

Owners of rental properties also faced challenges in obtaining program financing, due in part to decisions by the states of Louisiana and Mississippi to set aside a small portion of their supplemental CDBG funds for the repair of rental housing. While Louisiana and Mississippi allocated nearly $11 billion of their CDBG funds to homeowner programs, they targeted fewer funds (approximately $1 billion) to the owners of small rental properties. In Louisiana, demand for the Road Home Small Rental Property Program was seven or eight times what the funding would support, according to state officials. Mississippi officials said they initially had twice the expected demand for their small rental program, representing 10,000 rental housing units. Availability of financing has also been a challenge for developers of larger rental housing developments, including recipients of GO Zone LIHTCs.
Some recipients of GO Zone LIHTCs have encountered financing gaps due to the declining value of the credits. More specifically, some developers were receiving less equity from investors in exchange for the tax credits awarded, which has resulted in large financing gaps and made some planned developments infeasible. In some cases, state housing finance agencies pulled GO Zone LIHTCs back from projects that could not secure additional funds to finance the project, or accepted the credits back from developers, and awarded the credits to other projects.

Public housing agencies (PHAs) have also faced considerable challenges in obtaining funding to repair public housing damaged by Hurricanes Katrina and Rita. Public housing is an important source of affordable housing for low-income households in the Gulf Coast region. The Gulf Coast states experienced a decline in the number of available units as a result of the storms, especially in the New Orleans area. For example, prior to Hurricane Katrina, the Housing Authority of New Orleans managed over 7,000 units of public housing in 10 different developments. Hurricane Katrina damaged about 80 percent of these units (approximately 5,600 units). Less than $30 million was available in 2005 for damage to all PHAs nationwide through HUD’s Capital Fund Emergency/Natural Disaster Funding Program. HUD acknowledged that these funds were not sufficient to repair the public housing that was damaged by the 2005 Gulf Coast hurricanes.

In the wake of the 2005 storms, both homeowners and owners of rental properties faced significant challenges in receiving assistance from key federal programs as quickly as possible.
For instance, homeowners in Louisiana waited a year for the Road Home Homeowner Program, the state’s CDBG-funded homeowner program, to begin accepting applications and then encountered median application processing times of 245 days. Although there were no specific time requirements for how quickly CDBG-funded programs should be implemented, federal disaster policy, as described in the Stafford Act, states that disaster funds and special measures must help to expedite the reconstruction and rehabilitation of devastated areas. In January 2009, FEMA issued a National Disaster Housing Strategy that included six goals related to post-disaster housing. One of the goals states that housing assistance should help individuals and households return to self-sufficiency as quickly as possible, including obtaining permanent housing.

As required by the supplemental appropriations acts, states submitted plans to HUD detailing their proposed use of CDBG funds and the design of their programs. After acceptance of the plans, each state issued descriptions of its programs, including eligibility requirements and application deadlines. As shown in figure 7, Mississippi opened its homeowner program to applicants 8 months after Hurricane Katrina, and Louisiana opened its program 1 year after the storm. Owners of small rental properties in Louisiana and Mississippi faced longer delays than homeowners in the availability of CDBG funds to repair or replace properties. Louisiana began accepting applications for its Road Home Small Rental Property Program 17 months after Hurricane Katrina, and Mississippi began accepting applications for its Small Rental Assistance Program about 2 years after the storm.
As indicated in our prior work, delays in the initial dates that states began accepting applications for their CDBG-funded homeowner and small rental housing programs may be due in part to the states’ lack of staffing capacity to suddenly manage CDBG programs of such unprecedented size. In addition, as previously noted, both states decided to implement programs for homeowners first, which delayed the dates the states began accepting applications for the small rental housing programs. Furthermore, according to Mississippi officials, it took time to obtain HUD’s acceptance of plans for its Small Rental Assistance Program, in part because the state had to reach agreement with other entities, including state historic preservation offices, regarding the potential impacts of this program on the environment and other concerns.

After submitting their applications for CDBG funds, some homeowners and small rental property owners faced significant delays in the processing of their applications, with rental property owners typically facing longer delays than homeowners. For example, homeowners in Louisiana and Mississippi waited a median of 245 and 240 days, respectively, from the date their application was received by the state until the date they received grant funds. These processing times ranged from 32 to 948 days in Louisiana and from 70 to 945 days in Mississippi (see fig. 8). For the small rental housing programs, successful applicants had a closing, at which point the terms by which forgivable loans would be disbursed to the owners were agreed upon. Small rental property owners in Louisiana and Mississippi waited a median of 494 and 405 days, respectively, from the date their application was received by the state until the date of closing. These processing times ranged from 280 to 686 days in Louisiana and from 75 to 799 days in Mississippi.
Several factors may have contributed to delays in the processing of applications for each state’s homeowner and small rental housing programs, including program design changes, a lack of specific performance goals, and complex application processes.

Program design changes. Louisiana’s homeowner and small rental programs may have experienced delays in processing applications because of changes to the programs’ design after they were first established. For example, HUD ordered the state to cease operations for the homeowner program in March 2007. According to an e-mail from the HUD Assistant Secretary to key HUD staff involved in Gulf Coast recovery, there was an “apparent inconsistency” between Road Home Homeowner Program operations and the approved action plan. While the state continued to accept and process homeowner applications after it was ordered to cease program operations, individual covenants had to be revised, homeowner grant awards had to be recalculated, and scheduled house closings were postponed. The state redesigned the program to provide lump-sum compensation grants, and HUD accepted the new design in May 2007. Also, in 2009 Louisiana created an additional option for property owners that had applied to the small rental property program but had not been able to secure up-front financing. Under the new option, the state will pay “advances” of funds to some rental property owners through a housing rehabilitation program.

Absence of performance goals. Neither Louisiana nor Mississippi initially established specific performance goals for processing homeowner or small rental program applications. According to officials from both states, immediately after the storms it was difficult to determine what such performance goals should be.
While each state’s initial plans for its homeowner programs indicated that applications should be processed in a timely manner or in a manner that recognizes the urgency of the need for assistance, the plans did not quantify goals for processing complete applications. In March 2007, Louisiana established performance indicators to encourage the timely processing of homeowner grants and established similar performance indicators for the small rental program in April 2008. There were no documented performance goals for processing applications for Mississippi’s Homeowner Assistance or Small Rental Assistance Programs until March 2008.

Complexity of application process. According to both researchers and organizations that assisted disaster victims, the application processes for the CDBG-funded homeowner and small rental programs were complex, which made it difficult for some applicants to complete applications correctly. A 2008 report on Louisiana’s CDBG-funded post-disaster housing programs stated that both homeowners and applicants for the small rental program faced rule changes, which caused confusion for applicants and likely contributed to delays.

Homeowners and small rental property owners affected by Hurricanes Katrina and Rita have also experienced challenges in their rebuilding efforts as a result of local market conditions, particularly high insurance premiums and construction costs. For example, according to a report by the Louisiana Housing Finance Agency, premiums for homeowners insurance escalated to four times their pre-Katrina level in some areas of Louisiana that were severely impacted by the storm. Such increases have likely made it difficult for some homeowners to rebuild. Researchers have stated that rising construction costs in both Louisiana and Mississippi pose challenges for redevelopment. The current national economic climate has also made it more difficult for both homeowners and developers of rental housing to secure adequate financing.
Due to the tightening credit markets and storm-related credit issues, many homeowners in the area affected by the 2005 storms are finding it extremely difficult to access credit to cover repair costs. According to an organization that has researched recovery efforts in New Orleans, tightened credit markets are one of the key reasons that many homeowners in New Orleans have been unable to fully repair their homes. Also, as we have previously reported, as investors’ interest in tax credit projects declines, developers must seek additional funding sources to make up for the equity shortfall. According to state administrators of the GO Zone LIHTC programs in Louisiana and Mississippi, given the financial crisis, it is increasingly difficult to find investors for tax credit projects, and it will be a challenge to meet the December 2010 deadline for units to be placed in service.

Both homeowners and renters have been negatively affected by financing gaps, delays in the availability of federal resources, and adverse economic conditions. Cumulatively, these and other issues have contributed to slow progress in repairing and replacing housing in the Gulf Coast. This slow progress is putting additional pressure on the already strained housing market. Slow redevelopment also contributes to neighborhood blight. According to a recent Brookings report, New Orleans had 65,888 vacant or blighted residential addresses as of March 2009, and nearly 59,000 of these addresses were blighted or empty lots. This was a slight decrease from 69,727 such addresses as of September 2008. Another negative impact of these challenges is a lack of affordable housing, especially for rental households. HUD’s fair market rent for a two-bedroom unit in the New Orleans-Metairie-Kenner metropolitan area increased from $676 to $1,030, or about 52 percent, between fiscal years 2005 and 2009.
As a result of such rent increases, low- and moderate-income renters who could afford housing in New Orleans before the storm may no longer be able to find affordable housing. In Mississippi’s coastal counties, both for-sale and rental housing are estimated to be less affordable than before Hurricane Katrina, according to a 2009 study. HUD’s fair market rent for a two-bedroom unit in the Gulfport-Biloxi metropolitan area increased from $592 to $844, or about 43 percent, while the general price level for rent nationwide increased about 15 percent over the same period. While it is projected that there will be more than a full recovery of subsidized rental housing units in Mississippi’s coastal counties, it is also estimated that there will be an increased need for additional rental assistance in these counties, such as housing choice vouchers, to make available units more affordable. Finally, some of the residents who were displaced from their communities have not returned, especially lower-income renters. As previously noted, while the New Orleans metropolitan area is home to a population equal to about 90 percent of the pre-Katrina population, school enrollment across the metropolitan area has slowed, suggesting that the characteristics of the population that has returned are different from those of the pre-storm population. According to the Urban Institute, the slow recovery of housing—especially moderately priced rental housing—in the greater New Orleans region prevents many families from returning.

We reviewed prior GAO work and reports by agencies and research groups and identified four sets of options that may help address the challenges faced by homeowners and rental property owners in using program funds.
First, to address the challenges related to the gaps in funding available to repair and replace damaged housing—specifically rental housing—federal funds for permanent post-disaster housing could be allocated between homeowners and rental property owners based on need, taking into account all of the programs and resources that are available. For example, Congress could provide more direction to states on how to allocate appropriations for the CDBG program. Directing how grantees allocate CDBG funds would entail trade-offs. For example, states may have less discretion in designing post-disaster housing programs and deciding how much funding is to be made available for rebuilding units occupied by homeowners and renters and for economic development activities. While such trade-offs exist, Congress has previously provided specific direction to recipients of CDBG funds. For example, Congress provided direction to recipients of funds under the Neighborhood Stabilization Program, which is based on the CDBG program and focuses on the redevelopment of abandoned and foreclosed homes. Congress directed recipients of these funds to focus on areas of greatest need, including areas with high concentrations of foreclosed homes. Without specific direction on how to better target disaster-related CDBG funds for the redevelopment of homeowner and rental units after future disasters, states’ allocation of assistance to homeowners and rental property owners may again result in significant differences in the level of assistance provided.

To address the financing gaps faced by homeowners, larger amounts of assistance could be made available to households. Several of the reports we reviewed indicated that additional funds should be provided to homeowners that did not receive sufficient CDBG funds to recover from the storms.
For example, one private research organization recommends that the federal government allocate funds to close the gaps created by the formula for the Road Home Homeowner Program, the CDBG-funded homeowner program in Louisiana. In addition, this organization has stated that grants should have been calculated based on the higher of the assessed value or the estimated repair costs, rather than on the pre-storm estimated value of homes. For future disasters, basing grant amounts on repair costs rather than property values may help households close financing gaps, especially households in distressed or less desirable communities where property values are less than the cost of repairs and replacement. However, in providing larger amounts of assistance to individual households, fewer households may be served with the same amount of overall program funding.

To help reduce delays in delivering funds for the recovery of housing, guidance could be developed, eligible uses of funds could be clarified, states could design programs to provide funds up front, and application processes could be simplified. We previously recommended that HUD issue guidance for CDBG disaster-assistance programs that provides information on acceptable program designs and discusses program elements that trigger federal environmental reviews. We also recommended that HUD coordinate with FEMA to clarify options and limits of using CDBG funds with other disaster-related federal funds. Such guidance could help states in the future by clearly articulating applicable legal and financial requirements, as well as the types of activities that may trigger federal environmental review requirements. State officials in Louisiana have acknowledged that delays in making CDBG funds available to small rental property owners could have been minimized by designing a program that provided funds up front, rather than after property owners independently financed repairs.
Such program designs for the use of CDBG funds after future disasters could potentially reduce delays. Furthermore, organizations that represented disaster victims have stated that complex application processes contributed to delays in Louisiana, suggesting that for future disasters, delays could be minimized if application processes are clear and transparent.

Decision makers could also reconsider which federal programs should deliver assistance for permanent housing. A report by the Office of the Federal Coordinator indicated that the CDBG program should be reevaluated to determine whether it should be the primary funding vehicle for replacing housing stock following a disaster. The Office of the Federal Coordinator also indicated that experts should convene to discuss how challenges associated with the current federal funding vehicles for post-disaster recovery and rebuilding—such as CDBG, Public Assistance, Individual Assistance, and HMGP—could be addressed and to explore ideas for potential new vehicles and/or frameworks. According to HUD officials, the CDBG program is not one of the federal government’s official disaster recovery programs. However, to date the supplemental CDBG funds that have gone to the Gulf Coast have been the largest amount of CDBG disaster relief provided to one area in the history of the program. A previous GAO report stated that HUD should continue to work with the administration to determine what role the CDBG program should have in disaster recovery. HUD officials have also stated that a permanently authorized disaster CDBG program may be more effective in delivering post-disaster housing assistance. HUD officials have stated that if a disaster-specific CDBG program is given a permanent role, they would issue permanent regulations and program guidance. In addition, there would be less need after each disaster for HUD to consider the use of waivers and write federal notices to guide the use of CDBG funds.
The response to Hurricanes Katrina and Rita highlights the need to reevaluate how housing assistance for homeowners and rental property owners is delivered after a disaster. If the federal government adopts a similar approach in the future, it will likely encounter many of the same challenges, including gaps in available financing for permanent housing and delays in the availability of program funds to victims. The state programs funded under the CDBG program, the largest single source of federal assistance for permanent housing after the 2005 Gulf Coast hurricanes, focused much of their resources on addressing the needs of homeowners. Our analysis shows that recovery funds addressed a substantially larger percentage of the rebuilding needs of homeowners compared with rental property owners and that state-designed programs have not fully accounted for the needs of renters in their decisions regarding how to allocate funds. As a result, there continues to be an acute need for affordable rental housing in the Gulf Coast area, and many displaced residents may not be able to return to their communities. States, which had broad flexibility in their use of CDBG funds, did not prioritize the repair or replacement of rental housing for several reasons, including their decisions to rely on other sources of federal funding for rental housing, such as SBA loans and GO Zone LIHTCs. However, the GO Zone LIHTC program addressed only a small part of the repair and replacement needs of rental properties; furthermore, according to program administrators, investors’ demand for tax credits in the current economy has been weak. In the event of a future disaster, the lack of specific direction to states on how to target disaster-related CDBG funds may again result in a significant difference in the amounts of assistance for the redevelopment of homeowner and rental property units. Within the programs, administrative improvements could be made.
In fact, we have previously recommended that HUD issue guidance for CDBG disaster assistance that provides information on acceptable program designs and that HUD coordinate with FEMA to clarify options and limits of using CDBG funds with other disaster-related federal funds. We believe that implementing these recommendations would help minimize delays in making CDBG funds available for homeowners and rental property owners after future disasters. Furthermore, we continue to believe that HUD should work with the administration to determine what role the CDBG program should have in disaster recovery. To the extent that the CDBG program continues to be the primary vehicle used to provide post-disaster assistance for permanent housing, Congress may wish to consider providing more specific direction regarding the distribution of disaster-related CDBG assistance that states are to provide for homeowners and renters. If Congress wishes to change the proportion of assistance directed to homeowners and rental property owners in future recovery efforts, Congress could, for example, require states to demonstrate to HUD that they are adequately addressing the needs of both homeowners and renters with their CDBG allocation and other resources as a condition for receiving funds. Alternatively, Congress could direct HUD to develop a formula that accounts for the housing needs of both homeowners and renters. Such a formula could be used by states to determine the proportions of their disaster CDBG funds that should be used for housing, specifically rental housing. Further, the formula could also reflect the anticipated production levels of other programs that provide permanent housing assistance, such as the Low-Income Housing Tax Credit program. We provided a draft of this report to the Department of Housing and Urban Development, the Department of Homeland Security, the Department of the Treasury, and the Small Business Administration. 
We received technical comments from all of the agencies and incorporated them as appropriate. We also provided relevant sections of this report to state officials in Louisiana and Mississippi, and incorporated their technical comments as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of the Department of Housing and Urban Development, the Secretary of the Department of Homeland Security, the Secretary of the Treasury, and the Administrator of the Small Business Administration. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report or would like additional information, please contact me at (202) 512-8678 or sciremj@gao.gov. Major contributors to this report are acknowledged in appendix VIII.

Our objectives were to (1) describe how federal disaster-related assistance for permanent housing has been provided to homeowners and rental property owners affected by the 2005 Gulf Coast hurricanes; (2) evaluate the extent to which federally funded programs responded to the needs of homeowners and rental property owners in repairing or replacing units damaged by these hurricanes; and (3) describe the challenges that homeowners and rental property owners have faced in applying for and using federal assistance, and potential options for addressing these challenges. In reviewing post-disaster assistance for permanent housing provided after Hurricanes Katrina and Rita, we focused on Louisiana and Mississippi because these two states sustained the most damage of the five states affected by these hurricanes.
To identify federal programs that provided this assistance, we reviewed statutes and regulations, studies and reports by government agencies and research organizations, and agency Web sites. In addition, we interviewed officials from the Department of Housing and Urban Development (HUD), Department of Homeland Security (DHS), Department of the Treasury (Treasury), and the Small Business Administration (SBA); state and local housing agencies in Baton Rouge and New Orleans, Louisiana, and Biloxi, Gulfport, and Jackson, Mississippi; and various housing groups. We selected programs that were disaster-related and provided funds or tax incentives to encourage the development of permanent housing, which included repairing or replacing housing. We defined assistance for permanent housing as assistance for housing that is intended for long-term occupancy. Given our methodology, we did not include every program that provides post-disaster housing assistance. The programs in our review include the following:

- Capital Fund Emergency/Natural Disaster Funding (HUD)
- Community Development Block Grant Program (HUD)
- GO Zone Low-Income Housing Tax Credits (Treasury)
- GO Zone Tax-Exempt Private Activity Bonds (Treasury)
- Home Disaster Loan and Physical Disaster Business Loan (SBA)
- Individuals and Households Program: Repair or Replacement Assistance (DHS)
- GO Zone New Markets Tax Credits (Treasury)
- Public Assistance for Permanent Work (DHS)

For each of the programs identified, we determined whether homeowners, rental property owners, or both were potentially eligible to benefit from the program; whether the assistance was in the form of a grant, loan, or tax incentive; the types of activities that could be funded; and the program’s purpose. We also identified whether the program was administered by a federal or state agency, the amount of funds available, and how the administering agency awarded funds. For most programs, we used the amount appropriated by Congress as the amount available.
However, for programs that were funded through the Disaster Relief Fund (Individuals and Households Program: Repair or Replacement Assistance and Public Assistance for Permanent Work), we used the amounts awarded in Louisiana and Mississippi. More specifically, for the Public Assistance for Permanent Work Program, we reported the amount awarded to public housing authorities. We generally used the date that the initial application period opened, if applicable, to describe the dates that the programs we reviewed became available. For the Gulf Opportunity (GO) Zone Tax-Exempt Private Activity Bond Program, we reported the dates that states made either the first final approval or the first allocation. For Mississippi’s Public Housing Program, which was funded through the Community Development Block Grant Program (CDBG), we reported the date that HUD approved the program design. To determine the extent to which the programs we reviewed responded to the needs of homeowners and rental property owners in repairing or replacing homeowner and rental units, we compared the available data on units funded to the estimated number of units damaged for homeowner and rental units. We first obtained data on the number of homeowner and rental units with minor, major, and severe damage in Louisiana and Mississippi from the Federal Emergency Management Agency’s (FEMA) published estimates. We used this information as an indicator of housing need at the time of the disaster. We also used this information to highlight the Louisiana parishes and Mississippi counties that sustained the most damage. Based on available data for damaged homeowner and rental units, we determined that eight parishes accounted for 70 percent of the total homeowner and rental units damaged in Louisiana and that three counties accounted for 43 percent of the total homeowner and rental units damaged in Mississippi. 
To determine the proportion of homeowner and rental units that sustained damage in each state and in the most damaged areas, we compared the estimated number of homeowner and rental units damaged to the number of occupied homeowner and rental units statewide and in the most damaged areas from the 2000 Decennial Census. We obtained data from program administrators on the numbers of homeowner and/or rental housing units that were funded. For each program, we requested data on the number of units funded as of December 2008, although for some programs, program administrators could not provide data as of this date. Specifically, data for Mississippi’s Small Rental Assistance and Long Term Workforce Housing Programs were available as of April 2009; data for the Home Disaster Loan Program were available as of July 2009; data for the Capital Fund Emergency/Natural Disaster Funding Program for Mississippi and the Physical Disaster Business Loan Program were available as of August 2009; and data for the New Markets Tax Credits (GO Zone) Program were available for activities that were undertaken through December 2008. We do not believe that the different dates for the count of units funded under these programs will have a material impact on the results, as month-to-month unit count changes have been small for the key programs. For Louisiana’s Road Home Homeowner Program and Mississippi’s Homeowner Assistance Program, we counted awards to homeowners as homeowner units funded, although homeowners are not required to use their grant funds to repair their homes. We were not able to include data on the number of rental units funded through the Physical Disaster Business Loan Program because SBA did not have readily available information on the number of rental units funded. To determine the number and amount of Physical Disaster Business Loans for the repair of residential rental properties, we obtained data categorized by the North American Industry Classification System.
We included approved loans for real estate repairs only. Due to data limitations, we may not have included all physical disaster business loans that were provided for real estate repairs to residential rental properties. While we did not include a $15 million appropriation for the repair of rental housing damaged by Hurricanes Katrina and Rita as one of the programs in our review, we included these funds as well as data on the number of rental units funded with this appropriation as a part of our analysis because it is an additional source of funding for permanent rental housing. With available data, we compared the total number of homeowner units funded to the total number of homeowner units damaged in each state. Because some funded units may have received assistance from multiple programs, we took steps to avoid double counting. We compared the addresses of homeowner units funded by Louisiana’s Road Home Homeowner Assistance Program, Mississippi’s Homeowner Assistance Program, Individuals and Households Program: Repair or Replacement Assistance, and the Physical Home Disaster Loan Program for both Louisiana and Mississippi. This comparison allowed us to determine whether a unit was funded by only one program or by multiple programs. In order to ensure that an address was counted only once, we converted the addresses of the homeowner units funded by each of the programs we reviewed into a consistent format and applied standardization rules (for example, “Street” was consistently changed to “St.”). This allowed us to look for double counting for over 99 percent of the addresses we compared. We could not compare addresses for homeowner units funded by the Hazard Mitigation Grant Program (HMGP), Long Term Workforce Housing Program, and New Markets Tax Credits (GO Zone) Program, because unit-level street address data were not readily available.
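The address comparison described above can be sketched as follows. This is a minimal illustration: beyond the one rule the report gives (“Street” changed to “St.”), the standardization rules, program names, and addresses shown are hypothetical, not GAO’s actual procedures.

```python
import re

# Illustrative abbreviation rules; the report cites only the
# "Street" -> "St." example, so the rest are assumptions.
ABBREVIATIONS = {"STREET": "ST", "AVENUE": "AVE", "ROAD": "RD", "DRIVE": "DR"}

def standardize(address: str) -> str:
    """Normalize an address so the same unit matches across program files."""
    addr = re.sub(r"[^\w\s]", "", address.upper())  # drop punctuation
    return " ".join(ABBREVIATIONS.get(w, w) for w in addr.split())

def find_multiply_funded(program_files: dict[str, list[str]]) -> dict[str, set[str]]:
    """Map each standardized address to the set of programs that funded it."""
    funded: dict[str, set[str]] = {}
    for program, addresses in program_files.items():
        for address in addresses:
            funded.setdefault(standardize(address), set()).add(program)
    return funded

# Hypothetical example: one unit appears in two files with different spellings.
files = {
    "Road Home": ["123 Main Street", "45 Oak Avenue"],
    "SBA Home Loan": ["123 MAIN ST.", "9 Pine Rd"],
}
by_address = find_multiply_funded(files)
double_counted = {a for a, progs in by_address.items() if len(progs) > 1}
```

A unit funded by more than one program then appears once in `by_address`, so unit totals are not inflated by multiple awards.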
In addition, we summarized the amounts of assistance awarded (and loans approved) to homeowners and rental property owners through the programs we reviewed as of December 2008, unless program administrators could not provide funding data as of this date. For example, administrators of Mississippi’s Small Rental Assistance and Long Term Workforce Housing Programs did not have archival data for December 2008, and thus they provided data as of April 2009. For the New Markets Tax Credit Program (GO Zone), funding data were available for activities that were undertaken through December 2008. However, while we could determine funding amounts for homeowner units, we could not determine funding amounts for rental units for this program because rental units are funded as a part of mixed-use projects (that is, projects with both commercial and residential uses). It was not possible to determine the amounts funded only for the rental units associated with mixed-use projects. We also obtained and analyzed data from program administrators to determine the extent to which funded homeowner and rental units were complete as of summer 2009. We were unable to determine the extent to which homeowner units were complete because either program administrators were not required to collect data or the data were not readily available. For most of the programs that funded rental units, data were available on the extent to which they were complete as of June, July, and August 2009 (see app. VII). Data on the extent to which rental units funded through the New Markets Tax Credit Program (GO Zone) were complete were not readily available. 
To assess the reliability of agency data on the numbers and location of homeowner and rental units funded as well as the amount of assistance provided, we (1) performed electronic testing for obvious errors in accuracy and completeness; (2) reviewed related documentation, including audit reports on data verification for some programs we reviewed; and (3) worked with agency officials or contractors to identify any data problems. When we found discrepancies, such as unpopulated fields or data entry errors, we notified agency officials or contractors and worked with these officials to correct the discrepancies before conducting our analysis. We determined that the data were sufficiently reliable for the purposes of our report. To identify challenges to homeowners and rental property owners, we reviewed studies and reports about housing recovery in the areas affected by the hurricanes, interviewed the administrators of the programs we reviewed, interviewed organizations that worked with disaster victims to obtain permanent housing, and analyzed data on the timeliness of funding availability and application processing. To describe the timeliness of funding availability, we interviewed program administrators, reviewed prior GAO and other reports, and obtained documentation of the dates on which HUD accepted plans for CDBG-funded programs. Also, in examining application processing times for CDBG-funded programs, we obtained data from program administrators on each loan or grant awarded. We determined the median number of days between the date the application was received by the state and the date funds were awarded (for the homeowner programs) or the closing occurred (for the small rental programs). We excluded outliers and records with missing data to determine the median number of days elapsed between these points. Further, we summarized available information from program administrators and other reports on factors that may have contributed to processing delays. 
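The elapsed-time measure described above can be sketched as follows, assuming each record carries a receipt date and an award (or closing) date. The outlier rule shown, dropping spans more than three standard deviations from the mean, is an illustrative assumption; the report does not specify how outliers were defined.

```python
from datetime import date
from statistics import mean, median, stdev

def median_processing_days(records):
    """Median days from application receipt to award/closing,
    excluding records with missing dates and extreme outliers."""
    spans = [(awarded - received).days
             for received, awarded in records
             if received is not None and awarded is not None]
    spans = [d for d in spans if d >= 0]       # drop apparent data-entry errors
    if len(spans) > 2:                          # illustrative outlier rule:
        m, s = mean(spans), stdev(spans)        # drop spans > 3 std devs from mean
        spans = [d for d in spans if abs(d - m) <= 3 * s]
    return median(spans)

# Hypothetical applications (receipt date, award date)
records = [
    (date(2006, 8, 1), date(2007, 2, 1)),    # 184 days
    (date(2006, 9, 15), date(2007, 1, 10)),  # 117 days
    (date(2006, 7, 20), date(2007, 5, 30)),  # 314 days
    (date(2006, 10, 1), None),               # missing award date: excluded
]
```

With these hypothetical records, the record with a missing award date is excluded and the median of the remaining spans is reported.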
To identify potential options for mitigating the challenges we identified, we reviewed recommendations in studies and reports, suggestions from program administrators and organizations that worked with disaster victims to obtain permanent housing, and information on the congressional intent of post-disaster housing assistance, including the Stafford Act and FEMA’s Disaster Housing Strategy. We discussed these options and their potential limitations.

The Louisiana Road Home Homeowner Program was designed to provide a one-time compensation grant payment, up to a maximum of $150,000, to eligible homeowners whose primary residence was damaged by the 2005 Gulf Coast hurricanes and who wished to (1) repair or rebuild their home, (2) purchase another home in Louisiana, or (3) sell their home and relocate outside of the state. After the 2005 Gulf Coast hurricanes, Congress made $13.4 billion available to Louisiana for disaster recovery. Louisiana allocated $11.5 billion of these funds to the Road Home Homeowner Program. To award assistance, the Office of Community Development (OCD) reviewed applications to determine and verify program eligibility. After an application was received and determined to be preliminarily eligible, the OCD conducted an on-site evaluation. After this evaluation, grant calculations were conducted based on the lesser of the property’s pre-storm value or the estimated cost of damage to the property. A final determination of eligibility was also made. Other assistance received—such as insurance proceeds or assistance from the Federal Emergency Management Agency (FEMA) or the Small Business Administration (SBA)—was deducted from the final grant amount awarded. The OCD also offered an Additional Compensation Grant up to a maximum of $50,000 to eligible homeowners who had a household income of 80 percent of the parish median income or less.
The Additional Compensation Grant was intended to assist with any gap between the estimated cost of damage and the amount(s) the homeowner received from the Road Home compensation grant and other assistance.

The Mississippi Homeowner Assistance Program was designed to provide a one-time grant payment, up to a maximum of $150,000, to eligible homeowners who lived outside of the flood plain and suffered flood damage to their primary residence as a result of Hurricane Katrina. After the 2005 Gulf Coast hurricanes, Congress made $5.5 billion available to Mississippi for disaster recovery. Mississippi allocated $1.96 billion for this program. To award assistance, the Mississippi Development Authority accepted applications during its open application period to determine program eligibility. Once eligibility was established, grant calculations were conducted based on the largest of the following values: (1) the pre-Katrina insured value adjusted by an inflation factor of 35 percent; (2) the damage amount estimated by SBA, not to exceed 135 percent of the insurable value; or (3) the Mississippi Development Authority damage assessment cost to repair. Once the homeowner was determined to be eligible, funds were made available to the homeowner through a closing process using a mortgage lender or escrow or closing agent, or potentially by electronic funds transfer. In exchange for the grant payment, a qualifying homeowner had to agree to a covenant on their property that established building code, flood insurance, and elevation requirements for them or any future owner of the land. The Mississippi Development Authority later expanded the Homeowner Assistance Program applicant pool by implementing a second phase (Phase II). Phase II offered up to a maximum of $100,000 in grant assistance to homeowners who resided inside or outside of the flood plain and who had a household income at or below 120 percent of the area median income.
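The two grant calculations described above can be sketched as follows. The dollar amounts in the example are hypothetical, the order in which the cap and deductions are applied is an assumption, and the sketch omits program details such as penalty reductions and the Additional Compensation Grant.

```python
def louisiana_grant(pre_storm_value, est_damage, other_assistance, cap=150_000):
    """Road Home: lesser of pre-storm value or estimated cost of damage,
    minus insurance/FEMA/SBA assistance, up to the $150,000 cap."""
    base = min(pre_storm_value, est_damage)
    return max(0, min(base - other_assistance, cap))

def mississippi_grant(insured_value, sba_damage_estimate, insurable_value,
                      mda_cost_to_repair, cap=150_000):
    """Homeowner Assistance Program: largest of
    (1) pre-Katrina insured value adjusted by a 35 percent inflation factor,
    (2) SBA damage estimate, not to exceed 135 percent of insurable value,
    (3) MDA damage assessment cost to repair -- up to the $150,000 cap."""
    base = max(insured_value * 1.35,
               min(sba_damage_estimate, 1.35 * insurable_value),
               mda_cost_to_repair)
    return min(base, cap)

# Hypothetical homeowners
la_award = louisiana_grant(120_000, 90_000, 30_000)
ms_award = mississippi_grant(80_000, 140_000, 90_000, 100_000)
```

In the Louisiana example, the lesser value ($90,000 of damage) minus $30,000 in other assistance yields a $60,000 grant; in the Mississippi example, the capped SBA estimate ($121,500) is the largest of the three values.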
The Louisiana Road Home Small Rental Property Program was designed to provide gap financing to small rental property owners in the form of forgivable loans for the repair of rental units. The restored units must be offered at affordable rents to income eligible renters. After the 2005 Gulf Coast hurricanes, $751 million was made available through the small rental program. To award assistance, the Office of Community Development (OCD) accepted and reviewed applications during two rounds of funding. OCD verified basic eligibility information and then scored and ranked eligible applications. OCD then conditionally awarded loan assistance to applicants with the highest scores. Applicants who ranked below the cutoff point could apply for a later round of funding. After an applicant was conditionally awarded assistance, OCD completed verification of eligibility and issued a loan commitment letter to the applicant. OCD disbursed the award at closing, after the units were repaired and income eligible tenants were identified. Only owner occupants of three- and four-unit properties who received compensation for their home were required to deduct other benefits—including insurance payments, assistance from the Federal Emergency Management Agency (FEMA), assistance or funds from the Small Business Administration (SBA)—from their award. In December 2008, OCD announced an additional option for program participants, which provided up-front financing. The option was created to increase the production of rental housing with Community Development Block Grant (CDBG) funds and to accelerate the distribution of funds to owners. In exchange for up-front financing, owners must provide affordable housing once the property is repaired. According to OCD, program participants were sent letters informing them of this option in 2009.
Mississippi’s Small Rental Assistance Program was designed to provide forgivable loans to small rental property owners in Hancock, Harrison, Jackson, and Pearl River Counties for the repair of rental units as an incentive to provide affordable rental housing to income-eligible renters. After the 2005 Gulf Coast hurricanes, about $263 million was made available through the small rental program. To award assistance, the Mississippi Development Authority (MDA) accepted and reviewed applications during two rounds of funding. MDA offered four types of loan assistance: (1) rental income subsidy assistance, (2) repair or reconstruction reimbursement for Katrina-damaged property, (3) reconstruction or conversion reimbursement for non-Katrina damaged property, and (4) new construction reimbursement. To apply for loan assistance, applicants were required to complete an application, choosing one of the four types of assistance. Once applicant eligibility was determined, MDA contacted the applicant to schedule the closing. Upon closing, applicants were given 24 months to complete all work on the structure and obtain a certificate of occupancy. The loan was awarded in two installments: half when the property owner provided a building permit, and the remainder when the owner provided a certificate of occupancy.

The Gulf Opportunity Zone (GO Zone) Act of 2005 included tax incentives to assist recovery and economic revitalization for individuals and businesses in designated areas in several states, including Louisiana and Mississippi, following Hurricanes Katrina and Rita in 2005. The tax incentives included in this review are extensions of existing federal tax incentives, including low-income housing tax credits, tax-exempt private activity bonds, and new markets tax credits.
The GO Zone Low-Income Housing Tax Credit (LIHTC) program was designed to provide tax incentives to encourage the development of affordable rental housing between 2006 and 2008 in the areas affected by the 2005 Gulf Coast hurricanes. A $170 million allocation was made available to Louisiana and $106 million was made available to Mississippi to fund the development of affordable rental housing. To award the tax incentives, the state housing finance agencies in Louisiana and Mississippi used their existing procedures; they announced the availability of the credits through qualified allocation plans, processed applications, and competitively awarded credits in multiple funding rounds. During some funding rounds, each state gave priority to projects proposed in the most damaged counties. Recipients of credits use them or sell them through an investment vehicle to investors to obtain equity for the development of rental housing. Investors receive a direct reduction in their tax liability. They can claim GO Zone LIHTCs for eligible projects each year for 10 years from the time the housing developments are placed in service. All of the GO Zone LIHTC-funded units must be in service before January 1, 2011.

GO Zone Tax-Exempt Private Activity Bonds were made available to governmental entities after the 2005 Gulf Coast hurricanes to help finance the development of private facilities and activities, including single-family and multifamily rental housing. Louisiana received $7.8 billion in GO Zone Tax-Exempt Private Activity Bond allocation authority, and Mississippi received $4.9 billion. In accordance with the GO Zone Act of 2005, in Louisiana, the bond commission had the final authority to award GO Zone bonds, and in Mississippi, which did not have a bond commission, the final authority rested with the Governor.
To award allocation authority, the states of Louisiana and Mississippi accepted and reviewed applications and allocated bond authority on a first-come, first-served basis. Governmental entities issue the bonds, which are repaid by the borrowers’ payments on their loans. The GO Zone bonds allowed states to exceed their annual state volume caps and could be used for a broader range of facilities than tax-exempt private activity bonds, which are subject to annual state volume caps. GO Zone Tax-Exempt Private Activity Bonds must be used between 2006 and 2010. Data on the number of homeowner units funded through GO Zone Tax-Exempt Private Activity Bonds were not readily available from either state program administrator. In Louisiana, 216 rental units were funded as of December 2008, and GO Zone Tax-Exempt Private Activity Bonds were not used to fund rental housing in Mississippi.

The New Markets Tax Credit program is designed to provide a tax incentive to investors (including financial institutions, individuals, and corporations) to invest in Community Development Entities, which then reinvest the funds in qualified low-income community investments. Such investments include, but are not limited to, residential projects. The GO Zone Act of 2005 authorized $1 billion of special allocation authority to be used for the recovery and redevelopment of the GO Zone. To award assistance, the Community Development Financial Institutions Fund in the Department of the Treasury evaluated applications for credit allocations from Community Development Entities proposing activities in the GO Zone. The Community Development Financial Institutions Fund also allocated tax credit authority in 2006 and 2007. According to our analysis of Community Development Financial Institutions Fund data, 217 homeowner units and 493 rental units had been funded with GO Zone New Markets Tax Credit allocation authority through 2008.
While about $28 million in authority was awarded for homeowner units, it is not possible to determine the amount of authority awarded for rental units that are funded as a part of mixed-use projects.

The Small Business Administration (SBA) provides federal long-range recovery funding after a presidentially declared or SBA-declared disaster through the Disaster Loan Program. After the 2005 hurricanes, homeowners could apply to this program for a Home Disaster Loan up to a maximum of $200,000 for real estate repairs. Business owners (including rental property owners) could apply for a Physical Disaster Business Loan up to a maximum of $2 million for real estate and personal property. Homeowners and business owners could use loan funds to repair or replace damaged or destroyed real estate not covered by insurance or other assistance. Business owners could also use loan funds for inventory, supplies, machinery and equipment, and other business assets owned by the business and not covered by insurance or other assistance. After the hurricanes, SBA sent a loan application package to homeowners and business owners who first registered with the Federal Emergency Management Agency (FEMA) and met initial income eligibility criteria. To make a disaster loan, SBA reviewed the loan application package, and applicants were approved or denied based on their ability to repay the loan and other criteria, such as credit history. After the application was approved, SBA began the loan process, which included loss verification, underwriting (which includes the decision to loan funds), approval, closing for loan authorization and agreement, and initial disbursement. SBA loan applications became available after the hurricanes occurred and incremental application deadlines were determined for each state. For Louisiana and Mississippi, the application period for Hurricane Katrina was October 2005 through April 2006.
For Louisiana, the application period for Hurricane Rita was November 2005 through April 2006.

The Capital Fund Emergency/Natural Disaster Funding Program is administered by the Department of Housing and Urban Development (HUD) and is designed to provide grants to public housing authorities (PHA) for the repair or replacement of public housing that is damaged or destroyed by emergencies or natural phenomena, such as hurricanes, flooding, or earthquakes. Congress appropriates funds to this program each year. For 2005, a total of $29.7 million was appropriated for the Capital Fund reserve. To award assistance after the 2005 Gulf Coast hurricanes, HUD reviewed applications from PHAs and awarded grants on a first-come, first-served basis. Grant funds pay for a PHA’s needs that are in excess of its insurance coverage or other federal assistance, such as assistance provided by the Federal Emergency Management Agency (FEMA).

The Hazard Mitigation Grant Program (HMGP) is administered by FEMA. The HMGP provides grants to states, local governments, and Indian tribes for long-term hazard mitigation projects following a major disaster declaration. The program is intended to reduce the loss of life and property in future disasters by funding mitigation measures. HMGP funding is calculated based on the percentage of the funds spent on Public and Individual Assistance for each presidentially declared disaster. Generally about 12 months after a disaster, FEMA determines the amount of HMGP funds to be allocated to each affected state. To be eligible for HMGP assistance, a project must provide a long-term solution to a specific risk, such as elevating a flood-prone property or acquiring a flood-prone property for demolition or relocation. During the recovery phase of a disaster, local jurisdictions select projects that could reduce property damage from future disasters and submit grant applications to the state, and the state submits applications to FEMA.
FEMA conducts a final eligibility review to ensure compliance with federal regulations. FEMA generally requires states to submit their project applications within 12 months of the date the disaster was declared. However, after the 2005 Gulf Coast hurricanes, FEMA provided a number of deadline extensions for Louisiana and Mississippi, with the final deadline for Mississippi set at June 30, 2009, and at October 30, 2009, for Louisiana.

The Individuals and Households Program (IHP) makes grants and direct services available to disaster victims. Several types of assistance are available through IHP, including Repair Assistance and Replacement Assistance. Repair and Replacement Assistance was available for the repair of homes to safe and sanitary living conditions, and to help replace disaster-damaged homes. Following Hurricanes Katrina and Rita, the maximum allowance for repair assistance was $5,200 and the maximum allowance for replacement assistance was $10,500. FEMA reviews applications and conducts property inspections to determine eligibility and grant amounts. Generally, FEMA accepts applications for 60 days from the disaster date, but the application period was extended to April 2006 after Hurricanes Katrina and Rita. To be eligible for assistance, the applicant must be a U.S. citizen, a non-citizen national, or a qualified alien, and must have owned a home in a presidentially declared disaster area.

The Long Term Workforce Housing Program was designed to provide grants and loans to local units of government, nonprofit organizations, and for-profit organizations to develop permanent affordable housing for homeowners and renters in Hancock, Harrison, Jackson, and Pearl River counties. For this program, housing must benefit those who earn 120 percent of area median income or less. The state of Mississippi made $350 million of its disaster Community Development Block Grant funds available for this program.
To award the assistance, the Mississippi Development Authority reviewed proposals from applicants, evaluated them based on specific selection criteria, and awarded funds to the highest scoring applicants. The selected projects were projected to produce approximately 5,850 affordable homeowner and rental units.

FEMA’s Public Assistance for Permanent Work Program can be used to provide state and local governments, including PHAs, with grants to restore damaged facilities, through repair or restoration, to their pre-disaster condition. To award assistance, FEMA reviews requests for assistance and awards grants to states. Grant amounts are generally determined by evaluating repair costs and reducing grant amounts by the amount of funding awarded or anticipated to be awarded by other sources (such as assistance from other federal agencies or insurance settlements). States are then responsible for notifying applicants that funds are available and for disbursing those funds, generally on a reimbursement basis. While Public Assistance generally provides 75 percent of repair costs, the cost share for projects related to Hurricanes Katrina and Rita has been adjusted to provide 100 percent of federal funding. PHAs that did not qualify for assistance for permanent restoration costs from HUD under the Housing Act of 1937 were able to apply directly to FEMA for permanent restoration work. Such work could include repairs to PHA-owned rental units. Data on the number of units funded and completed with Public Assistance Funds for Permanent Work were not available from FEMA. According to the Housing Authority of New Orleans, which received the largest obligation of funds, no units had been funded or completed as of August 2009. According to the Region VIII Housing Authority in Mississippi, the only PHA in Mississippi that was awarded funds, 24 units were funded and completed in 2006.
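The Public Assistance grant determination described above can be sketched as follows. The dollar amounts are hypothetical, and the order in which other funding is deducted and the cost share applied is an assumption, since the report does not specify it.

```python
def public_assistance_grant(repair_cost: float, other_funding: float,
                            federal_share: float = 0.75) -> float:
    """Grant amount: eligible repair cost, reduced by insurance settlements
    and other anticipated funding, then multiplied by the federal cost share."""
    eligible = max(0.0, repair_cost - other_funding)
    return eligible * federal_share

# Hypothetical project: $1,000,000 in repairs, $200,000 from insurance.
usual = public_assistance_grant(1_000_000, 200_000)         # 75 percent share
katrina = public_assistance_grant(1_000_000, 200_000, 1.0)  # share raised to 100 percent
```

The second call reflects the adjusted cost share for Hurricanes Katrina and Rita, under which the federal government covered all eligible repair costs.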
The Mississippi Public Housing Program was designed to provide grants to PHAs for the repair or replacement of public housing that was damaged by Hurricane Katrina. The state made $105 million of its disaster Community Development Block Grant funds available for this program. To award assistance, the Mississippi Development Authority reviewed applications from PHAs and determined award amounts based upon documentation of damage and funding the PHA received or expected to receive from insurance or from the Capital Fund for Emergency/Natural Disaster Funding Program. According to the Action Plan for this program, funds were to be made available to eligible PHAs when construction commenced, and would be paid on a “draw down” basis as the obligation to pay occurred. According to the Mississippi Development Authority, $48 million had been awarded to five PHAs for the funding of 1,594 public housing units as of December 2008. As of August 2009, 1,210 public housing units were complete and in service. (A table in the original report shows the number of units funded and complete as of April 2009 and August 2009.) The table includes units funded by a $15 million appropriation for the redevelopment of permanent housing damaged by Hurricane Katrina. HUD awarded these funds to the Housing Authority of New Orleans. Major contributors to this report were Daniel Garcia-Diaz, Assistant Director; Vanessa Dillard; Shamiah Kerney; and Lisa Moore. Johnnie Barnes, Cindy Gilbert, Thomas Gilbert, John McGrail, John Mingus, and Jennifer Schwartz also made key contributions to this report. | In response to the 2005 Gulf Coast hurricanes, Congress provided about $130 billion in disaster recovery assistance, including assistance for permanent housing. Congress has expressed an interest in how this assistance has been allocated to homeowners and rental property owners, particularly for state-administered programs.
GAO's objectives were to review (1) how federal disaster-related assistance for permanent housing has been provided to homeowners and rental property owners, (2) the extent to which federally funded programs have responded to the needs of homeowners and rental property owners, and (3) the challenges that homeowners and rental property owners have faced in applying for and using federal assistance, and potential options for addressing these challenges. To address these objectives, GAO analyzed documentation for key programs and program data, and interviewed federal, state, and local officials regarding the challenges associated with these programs. Federal post-disaster assistance for permanent housing was made available to homeowners and rental property owners following the 2005 Gulf Coast hurricanes through grants, loans, and tax incentives. State agencies were largely responsible for administering the programs that delivered most of the assistance, including the Community Development Block Grant (CDBG) program, the most widely used source of federal funds. Congress provided states with broad flexibility in their use of CDBG funds. Federal programs GAO reviewed addressed the repair and replacement needs of more homeowner units than rental units. In both Louisiana and Mississippi, more homeowner units were damaged than rental units, but the proportional damage to the rental stock was generally greater. Programs GAO reviewed provided about $13 billion in assistance for the repair and replacement of about 303,000 homeowner units, and about $1.8 billion for over 43,000 rental units. When the estimated number of assisted units is compared to the estimated number of damaged units, 62 percent of damaged homeowner units and 18 percent of damaged rental units were assisted. The difference in the level of assistance for homeowner and rental units was largely due to states' decisions to award the majority of their CDBG funds to programs for homeowners. 
When attempting to use the programs GAO reviewed, both homeowners and rental property owners encountered delays in funding availability and other challenges, which have likely contributed to the slow pace of recovery in some areas and fewer affordable units for renters. GAO and others have previously recommended options to minimize these challenges. However, without specific direction on how to better target disaster-related CDBG funds for the redevelopment of homeowner and rental units after future disasters, states' allocation of assistance to homeowners and rental property owners may again result in significant differences in the level of assistance provided. |
With the 21st century challenges we are facing, it is more vital than ever to maximize the performance of federal agencies in achieving their long-term goals. The federal government must address and adapt to major trends in our country and around the world. At the same time, our nation faces serious long-term fiscal challenges. Increased pressure also comes from world events: both from the recognition that we cannot consider ourselves “safe” between two oceans—which has increased demands for spending on homeland security—and from the U.S. role in combating terrorism in an increasingly interdependent world. To be able to assess federal agency performance and hold agency managers accountable for achieving their long-term goals, we need to know what the level of performance is. GPRA planning and reporting requirements can provide this essential information. Our country’s transition into the 21st century is characterized by a number of key trends, including the national and global response to terrorism and other threats to our personal and national security; the increasing interdependence of enterprises, economies, markets, civil societies, and national governments, commonly referred to as globalization; the shift to market-oriented, knowledge-based economies; an aging and more diverse U.S. population; rapid advances in science and technology and the opportunities and challenges created by these changes; challenges and opportunities to maintain and improve the quality of life for the nation, communities, families, and individuals; and the changing and increasingly diverse nature of governance structures and tools. As the nation and government policymakers grapple with the challenges presented by these evolving trends, they do so in the context of rapidly building fiscal pressures. GAO’s long-range budget simulations show that this nation faces a large and growing structural deficit due primarily to known demographic trends and rising health care costs. 
The fiscal pressures created by the retirement of the baby boom generation and rising health costs threaten to overwhelm the nation’s fiscal future. As figure 1 shows, by 2040, absent reform or other major tax or spending policy changes, projected federal revenues will likely be insufficient to pay much beyond interest on publicly held debt. Further, our recent shift from surpluses to deficits means the nation is moving into the future in a weaker fiscal position. The United States has had a long-range budget deficit problem for a number of years, even during recent years when we had significant annual budget surpluses. Unfortunately, the days of surpluses are gone, and our current and projected budget situation has worsened significantly. The bottom line is that our projected budget deficits are not manageable without significant changes in “status quo” programs, policies, processes, and operations. Doing nothing is simply not an option, nor will marginal efforts be enough. Difficult choices will have to be made. Clearly, the federal government must start to exercise more fiscal discipline on both the spending side and the tax side. While many spending increases and tax cuts may be popular, they may not all be prudent. However, there is not a single solution to the problems we face; a number of solutions are needed. It will take the combined efforts of many parties over an extended period for these efforts to succeed. GPRA, which was enacted 10 years ago, provides a foundation for examining agency missions, performance goals and objectives, and results. While this building effort is far from complete, it has helped create a governmentwide focus on results by establishing a statutory framework for performance management and accountability. The necessary infrastructure has been built to generate meaningful performance information. 
For example, through the strategic planning requirement, GPRA has required federal agencies to consult with the Congress and key stakeholders to reassess their missions and long-term goals as well as the strategies and resources they will need to achieve their goals. It also has required agencies to articulate goals for the upcoming fiscal year that are aligned with their long-term strategic goals. Finally, agencies are required to report annually on their progress in achieving their annual performance goals. Therefore, information is available about current missions, goals, and results. Our prior assessments of the quality of agency planning and reporting documents indicate that significant progress has been made in meeting the basic requirements of GPRA. For example, we found improvements in agencies’ strategic plans, such as clearer mission statements and long-term goals. Also, after we found many weaknesses in agencies’ first annual performance plans, subsequent plans showed improvements, such as the frequent use of results-oriented goals and quantifiable measures to address performance. Finally, a high and increasing percentage of federal managers we surveyed in 1997 and 2000 reported that there were performance measures for the programs with which they were involved. Those managers who reported having performance measures also increasingly reported having outcome, output, and efficiency measures. We will be updating our analysis of the quality of agency planning and reporting efforts and our survey of federal managers as part of our 10-year retrospective review of GPRA. The report will be available next month. As we move further into the 21st century, it becomes increasingly important for the Congress, OMB, and other executive agencies to consider how the federal government can maximize performance and results, given the significant fiscal limitations I have described. 
GPRA can help address this question by linking the results that the federal government seeks to achieve to the program approaches and resources that are necessary to achieve those results. The performance information produced by GPRA’s planning and reporting infrastructure can help build a government that is better equipped to deliver economical, efficient, and effective programs that can help address the challenges facing the federal government. Clearly, federal agencies have made strides in laying the foundation of planning and performance information that will be needed to address our 21st century challenges. We are now moving to a more difficult but more important phase of GPRA implementation, that is, using results-oriented performance information as a routine part of agencies’ day-to-day management, and congressional and executive branch decision making. To achieve a greater focus on results and maximize performance, federal agencies will need to make greater use of GPRA documents, such as strategic plans, to guide how they do business every day—both internally, in terms of guiding individual employee efforts, as well as externally, in terms of coordinating activities and interacting with key stakeholders. However, much work remains before this framework is effectively implemented across the government, including (1) transforming agencies’ organizational cultures to improve decision making and strengthen performance and accountability, (2) developing meaningful, outcome-oriented performance goals and measures and collecting useful performance data, (3) addressing widespread mission fragmentation and overlap, and (4) using performance information in allocating resources.
The cornerstone of federal efforts to successfully meet current and emerging public demands is to adopt a results orientation, that is, to develop a clear sense of the results an agency wants to achieve as opposed to the products and services (outputs) an agency produces and the processes used to produce them. Adopting a results orientation requires transforming organizational cultures to improve decision making, maximize performance, and ensure accountability—it entails new ways of thinking and doing business. This transformation is not an easy one and requires investments of time and resources as well as sustained leadership commitment and attention. Our prior work on GPRA implementation has found that many agencies face significant challenges in establishing an agency-wide results orientation. Federal managers we surveyed have reported that agency leaders do not consistently demonstrate a strong commitment to achieving results. Furthermore, these managers believed that agencies do not always positively recognize employees for helping the agency accomplish its strategic goals. In addition, we have reported that high-performing organizations seek to shift the focus of management and accountability from activities and processes to contributions and achieving results. However, although many federal managers in our survey reported that they were held accountable for the results of their programs, only a few reported that they had the decision making authority they needed to help the agencies accomplish their strategic goals. Finally, although managers we surveyed increasingly reported having results-oriented performance measures for their programs, the extent to which these managers reported using performance information for any of the key management activities we asked about mostly declined from earlier survey levels.
To be positioned to address the array of challenges we face, federal agencies will need to transform their organizational cultures so that they are more results-oriented, customer-focused, and collaborative. Leading public organizations here in the United States and abroad have found that strategic human capital management must be the centerpiece of any serious change management initiative and efforts to transform the cultures of government agencies. Performance management systems are integral to strategic human capital management. Such systems can be key tools for maximizing performance by aligning institutional performance measures with individual performance and creating a “line of sight” between individual and organizational goals. Leading organizations use their performance management systems as a key tool for aligning institutional, unit, and employee performance; achieving results; accelerating change; managing the organization day to day; and facilitating communication throughout the year so that discussions about individual and organizational performance are integrated and ongoing. Another key challenge to achieving a governmentwide focus on results is that of developing meaningful, outcome-oriented performance goals and collecting performance data that can be used to assess results. Performance measurement under GPRA is the ongoing monitoring and reporting of program accomplishments, particularly progress toward preestablished goals. It tends to focus on regularly collected data on the level and type of program activities, the direct products and services delivered by the program, and the results of those activities. For programs that have readily observable results or outcomes, performance measurement may provide sufficient information to demonstrate program results. In some programs, however, outcomes are not quickly achieved or readily observed, or their relationship to the program is uncertain.
In such cases, more in-depth program evaluations may be needed, in addition to performance measurement, to examine the extent to which a program is achieving its objectives. However, our work has raised concerns about the capacity of federal agencies to produce evaluations of program effectiveness. Few of the agencies we reviewed deployed the rigorous research methods required to attribute changes underlying outcomes to program activities. Yet we have also seen how some agencies have profitably drawn on systematic program evaluations to improve their measurement of program performance or understanding of performance and how it might be improved. For example, to improve performance measurement, two agencies we reviewed used the findings of effectiveness evaluations to provide data on program results that were otherwise unavailable. Our work has also identified substantial, long-standing limitations in agencies’ abilities to produce credible data and identify performance improvement opportunities that will not be quickly or easily resolved. For example, policy decisions made when designing federal programs, particularly intergovernmental programs, may make it difficult to collect timely and consistent national data. In administering programs that are the joint responsibility of state and local governments, the Congress and the executive branch continually balance the competing objectives of collecting uniform program information to assess performance with giving states and localities the flexibility needed to effectively implement intergovernmental programs. While progress has been made by federal agencies in laying a foundation of performance information for existing program activities and structures, the federal government has not realized the full potential of GPRA to address program areas that cut across federal agency boundaries. The government has made strides in this area in recent years. 
For example, in reviewing agencies’ crosscutting plans in the area of wildland fire management, we found that both the Department of the Interior and the Forest Service, within the Department of Agriculture, discussed their joint participation in developing plans and strategies to address the growing threats to our forests and nearby communities from catastrophic wildfires. The Congress could make greater use of agency performance information to identify potential fragmentation, overlap, and duplication among federal programs. Virtually all of the results that the federal government strives to achieve require the concerted and coordinated efforts of two or more agencies. Our work has shown that mission fragmentation and program overlap are widespread, and that crosscutting federal program efforts are not well coordinated. For example, we have reported that seven federal agencies administer 16 programs that serve the homeless population, with the Department of Housing and Urban Development responsible for most of the funds. We have also frequently commented on the fragmented nature of our food safety system, with responsibility split between the Food Safety and Inspection Service within the Department of Agriculture, the Food and Drug Administration within the Department of Health and Human Services, and 10 other federal agencies. Crosscutting program areas that are not effectively coordinated waste scarce funds, confuse and frustrate program customers, and undercut the overall effectiveness of the federal effort. GPRA offers a structured and governmentwide means for rationalizing these crosscutting efforts. The strategic, annual, and governmentwide performance planning processes under GPRA provide opportunities for each agency to ensure that its goals for crosscutting programs complement those of other agencies; program strategies are mutually reinforcing; and, as appropriate, common performance measures are used.
If GPRA is effectively implemented, the governmentwide performance plan and the agencies’ annual performance plans and reports should provide the Congress with information on agencies and programs addressing similar results. Once these programs are identified, the Congress can consider the associated policy, management, and performance implications of crosscutting programs as part of its oversight of the executive branch. A key objective of GPRA is to help the Congress, OMB, and other executive agencies develop a clearer understanding of what is being achieved in relation to what is being spent. Linking planned performance with budget requests and financial reports is an essential step in building a culture of performance management. Such an alignment infuses performance concerns into budgetary deliberations, prompting agencies to reassess their performance goals and strategies and to more clearly understand the cost of performance. For the fiscal year 2005 budget process, OMB called for agencies to prepare a performance budget that can be used for the annual performance plan required by GPRA. Credible outcome-based performance information is absolutely critical to fostering the kind of debate that is needed. Linking performance information to budgeting carries great potential to improve the budget debate by changing the kinds of questions and information available to decision makers. However, performance information will not provide mechanistic answers for budget decisions, nor can performance data eliminate the need for considered judgment and political choice. If budget decisions are to be based in part on performance data, the integrity, credibility, and quality of these data and related analyses become more important. Moreover, in seeking to link resources to results, it will be necessary to improve the government’s capacity to account for and measure the total costs of federal programs and activities. 
GPRA expanded the supply of performance information generated by federal agencies. OMB’s Program Assessment Rating Tool (PART) proposes to build on GPRA by improving the demand for results-oriented information in the budget. It has the potential to promote a more explicit discussion and debate between OMB, the agencies, and the Congress about the performance of selected programs. Presumably, PART will identify expectation gaps, questions, and areas where further inquiry and analysis would be most useful. Fifty years of past efforts to link resources with results has shown that any successful effort must involve the Congress as a partner. In fact, the administration acknowledged that performance and accountability are shared responsibilities that must involve the Congress. It will only be through the continued attention of the Congress, the administration, and federal agencies that progress can be sustained and, more important, accelerated. Ultimately, the success of GPRA will be reflected in whether and how the Congress uses agency performance information in the congressional budget, appropriations, authorization, and oversight processes. As a key user of performance information, the Congress also needs to be considered a partner in shaping agency goals at the outset. More generally, effective congressional oversight can help improve federal performance by examining the program structures agencies use to deliver products and services to ensure that the best, most cost-effective mix of strategies is in place to meet agency and national goals. As part of this oversight, the Congress should consider the associated policy, management, and performance implications of crosscutting programs. Information produced in response to GPRA can be useful for congressional oversight as well as program management. As I have testified before, there are several ways that GPRA could be enhanced to provide better governmentwide information.
First, there are many users of agencies’ performance information—the Congress, the public, and the agency itself. One size does not fit all. To improve the prospect that agency performance information will be useful to and used by these different users, agencies need to consider the different information needs and how to best tailor their performance information to meet those needs. This might entail the preparation of simplified and streamlined plans and reports for the Congress and other external users. Second, we have previously reported that GPRA could provide a tool to reexamine federal government roles and structures governmentwide. GPRA requires the President to include in his annual budget submission a federal government performance plan. The Congress intended that this plan provide a “single cohesive picture of the annual performance goals for the fiscal year.” The governmentwide performance plan could help the Congress and the executive branch address critical federal performance and management issues, including redundancy and other inefficiencies in how we do business. It could also provide a framework for any restructuring efforts. Unfortunately, this provision has not been fully implemented. If the governmentwide performance plan were fully implemented, it could also provide a framework for congressional oversight. For example, in recent years, OMB has begun to develop common measures for similar programs, such as job training. By focusing on broad goals and objectives, oversight could more effectively cut across organization, program, and other traditional boundaries. Such oversight might also cut across existing committee boundaries, which suggests that the Congress may benefit from using specialized mechanisms to perform oversight (i.e., joint hearings and special committees). 
Third, a strategic plan for the federal government, along with key national indicators to assess the government’s performance, could provide an additional tool for governmentwide reexamination of existing programs, as well as proposals for new programs. If fully developed, a governmentwide strategic plan can potentially provide a cohesive perspective on the long-term goals of the federal government and provide a much needed basis for fully integrating, rather than merely coordinating, a wide array of federal activities. Successful strategic planning requires the involvement of key stakeholders. Thus, it could serve as a mechanism for building consensus. Further, it could provide a vehicle for the President to articulate long-term goals and a road map for achieving them. In addition, a strategic plan can provide a more comprehensive framework for considering organizational changes and making resource decisions. In addition to the annual budget resolution on funds, the Congress could also have a performance resolution that specifies performance expectations. Developing a strategic plan for the federal government would be an important first step in articulating the role, goals, and objectives of the federal government. It could help provide critical horizontal and vertical linkages. Horizontally, it could integrate and foster synergies among components of the federal government as well as help to clarify the role of the federal government vis-a-vis other sectors of our society. Vertically, it could provide a framework of federal missions and goals within which individual federal agencies could align their own missions and goals that would cascade down to individual employees. It also could link to a set of key national performance indicators. A set of key national indicators could also help to assess the overall position and progress of our nation in key areas, frame strategic issues, support public choices, and enhance accountability.
Developing a key national indicator system goes beyond any one sector (e.g., public, private, or nonprofit). It requires designing and executing a process whereby diverse elements of society can participate in formulating key questions and choosing indicators in a way that increases consensus over time. Such a system will take time to develop. The federal government is an important and vital player in establishing such indicators. Fourth, the traditional oversight that the Congress provides to individual organizations, programs, and activities has an important role in eliminating redundancy and inefficiencies. Important benefits can be achieved through focused oversight if the right questions are asked about performance and management. Key questions for program oversight are as follows: Does the program make sense given 21st century trends and challenges, including whether it is appropriate as an initiative of the federal government? Are there clear performance goals, measures, and data with which to track progress? Is the program achieving its goals? If not, why not? Does the program duplicate or even work at cross purposes with related programs and tools? Is the program targeted properly? Is the program financially sustainable and are there opportunities for instituting appropriate cost-sharing and recovery mechanisms? Can the program be made more efficient through reengineering or streamlining processes or restructuring organizational roles and responsibilities? Fifth, creating the results-oriented cultures needed to make GPRA a useful management tool depends on committed, top-level leadership and sustained attention to management issues. A chief operating officer (COO) could provide the sustained management attention essential for addressing key infrastructure and stewardship issues and could facilitate the transformation process. Establishing a COO position in selected federal agencies could provide a number of benefits.
A COO would be the focal point for elevating attention on management issues and transformational change, integrating various key management and transformation efforts, and instituting accountability for addressing management issues and leading transformational change. A COO would provide a single organizational position for key management functions, such as human capital, financial management, information technology, acquisition management, and performance management as well as for transformational change initiatives. To be successful, in many cases, a COO will need to be among an agency’s top leadership (e.g., deputy secretary or under secretary). However, consistent with the desire to integrate responsibilities, the creation of a senior management position needs to be considered with careful regard to existing positions and responsibilities so that it does not result in unnecessary “layering” at an agency. Consideration also should be given to providing a term appointment, such as a 5 to 7 year term. A term appointment would provide sustained leadership. No matter how the positions are structured, it is critical that the people appointed to these positions have proven track records in similar positions and be vested with sufficient authority to achieve results. To further clarify expectations and responsibilities, the COO should be subject to a clearly defined, results-oriented performance contract with appropriate incentives, rewards, and accountability mechanisms. For selected agencies, a COO should be subject to Senate confirmation. In creating such a position, the Congress might consider making certain subordinate positions, such as the chief financial officer, not subject to Senate confirmation.
In view of the broad trends and long-term fiscal challenges facing the nation, there is a need to consider how the Congress, OMB, and executive agencies can make better use of GPRA’s planning and accountability framework to maximize the performance of not only individual programs and agencies but also the federal government as whole in addressing these challenges. The Congress can play a vital role in increasing the demand for such performance information by monitoring agencies’ performance results, asking critical questions about goals not achieved, and considering whether adjustments are needed to maximize performance in the future. The large and growing fiscal gap means that tough, difficult choices will have to be made. Doing nothing is not an option. The Congress and the administration will need to use every tool at their disposal to address these challenges. In addressing these challenges, it will be important to set clear goals, involve all key players, and establish viable processes that will lead to positive results. Credible, timely, results-oriented performance information will be vital to this decisionmaking. Mr. Chairman, this concludes my prepared statement. We in GAO take our responsibility to assist in these crucial efforts very seriously. I would be pleased to respond to any questions that you or other Members of the Committee may have. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
Congress asked GAO to discuss the Government Performance and Results Act's (GPRA) success in shifting the focus of government operations from process to results and to evaluate the extent to which agency managers have embraced GPRA as a management tool. Further, Congress was interested in any recommendations GAO may have to improve the effectiveness of GPRA. GAO is conducting a comprehensive review of the effectiveness of GPRA since its enactment, including updating the results of its survey of federal managers. The results of this review will be available next month. GPRA, which was enacted in 1993, provides a foundation for examining agency missions, performance goals and objectives, and results. While this building effort is far from complete, it has helped create a government-wide focus on results by establishing a statutory framework for management and accountability. This framework can improve the performance and accountability of the executive branch and enhance executive branch and congressional decisionmaking. In view of the broad trends and long-term fiscal challenges facing the nation, there is a need to consider how the Congress, the Office of Management and Budget, and executive agencies can make better use of GPRA's planning and accountability framework to maximize the performance of not only individual programs and agencies, but also of the federal government as a whole in addressing these challenges. The necessary infrastructure has been built to generate meaningful performance information. For example, through the strategic planning requirement, GPRA has required federal agencies to consult with the Congress and key stakeholders to reassess their missions and long-term goals as well as the strategies and resources they will need to achieve their goals. It also has required agencies to articulate goals for the upcoming fiscal year that are aligned with their long-term strategic goals.
Finally, agencies are required to report annually on their progress in achieving their annual performance goals. Therefore, information is available about current missions, goals, and results. We are now moving to a more difficult but more important phase of GPRA implementation, that is, using results-oriented performance information as a part of agencies' day-to-day management and congressional and executive branch decisionmaking. However, much work remains before this framework is effectively implemented across the government, including (1) transforming agencies' organizational cultures to improve decisionmaking and strengthen performance and accountability, (2) developing meaningful, outcome-oriented performance goals and measures and collecting useful performance data, and (3) addressing widespread mission fragmentation and overlap. Furthermore, linking planned performance with budget requests and financial reports is an essential step in building a culture of performance management. Such an alignment can help to infuse performance concerns into budgetary deliberations. However, credible outcome-based performance information is critical to foster the kind of debate that is needed.
This section presents the roles and responsibilities of EPA and states in the UIC class II program, the UIC class II inspection and enforcement processes, information collected from state and EPA-managed programs, and activities to oversee state and EPA-managed programs. The UIC class II program is overseen by EPA headquarters and managed by states or EPA regions, depending on whether the state has received primacy. States can obtain primacy in one of two ways. Under section 1422 of the Safe Drinking Water Act, a state can adopt and implement a program that meets specific requirements established under EPA regulations and conduct reporting as EPA requires. Alternatively, under section 1425 of the act, a state can seek approval to manage its own program by demonstrating to EPA that the program is effective in preventing the contamination of underground sources of drinking water. Both types of programs must meet four key requirements in the act: (1) they must prohibit unauthorized injections; (2) authorized injections must not endanger drinking water sources; (3) they must include inspection, monitoring, recordkeeping, and reporting requirements; and (4) they must apply their provisions to federal agencies and federal land. However, states approved by this alternative process do not need to address all of the specific requirements, such as those related to well construction and testing, established in EPA regulations. Program oversight by EPA headquarters includes issuing regulations and guidance, assessing implementation of regulations and guidance by state and EPA-managed programs, and gathering and reporting information. EPA regions both oversee state programs that have primacy and manage programs in states that do not have primacy, and states with primacy manage their own programs.
Management includes permitting wells; inspecting wells; enforcing regulations and implementing guidance; reporting information on well inventories, inspections, violations, and enforcement actions; and investigating instances of potential contamination of aquifers. EPA issued a series of guidance documents describing the program and various responsibilities of states and EPA regions. To oversee state and EPA-managed programs and to ensure that they are protecting underground sources of drinking water, EPA collects certain information and conducts certain activities, as described in several guidance documents. Specifically:

Memorandum of agreement for the UIC program. Issued in 1981, this guidance directs EPA regions to enter into a memorandum of agreement (MOA) with each primacy state that includes the terms, conditions, or agreements between the state and EPA regarding the administration and enforcement of state program requirements, including state inspection, enforcement, and reporting requirements.

Reporting Requirements—Underground Injection Control Program (Program Reporting). Issued in 1986, this guidance explains and clarifies the information state and EPA-managed programs are to report.

Underground Injection Control Program Compliance Strategy for Primacy and Direct Implementation Jurisdictions (Strategy). Issued in 1987, this document provides guidance to state and EPA-managed programs on well inspections and enforcement of program requirements, including information that should be reported on inspections and enforcement actions.

Interim Guidance for Overview of the Underground Injection Control Program (Program Oversight). Issued in 1983, this document provides guidance to EPA regions and headquarters on activities that they should take to effectively oversee state and EPA-managed programs, respectively.

Guidance for Review and Approval of State Underground Injection Control Programs and Revisions to Approved State Programs. Issued in 1984 to provide guidance for EPA regions on the review and approval of changes to state program requirements, this document includes guidance for EPA regions and headquarters on how to review and approve requests to exempt aquifers and how decisions on aquifer exemptions should be documented and reported.

Enhancing Coordination and Communication with States on Review and Approval of Aquifer Exemption Requests Under the Safe Drinking Water Act (Aquifer Exemption Coordination). Issued in 2014, this document provides guidance on how to improve coordination and recordkeeping on aquifer exemption decisions among states, EPA regions, and EPA headquarters.

According to EPA’s 1981 MOA guidance, EPA regions should develop an MOA with each primacy state to outline areas of the applicable regulations that are relevant to the administration and enforcement of the state’s program requirements, including:

clarifying EPA and state roles and responsibilities and the process for sharing information between EPA and the state;

state responsibilities for expeditiously drafting, circulating, issuing, modifying, reissuing, and terminating permits, consistent with applicable regulations;

state responsibilities to operate a timely and effective system to track well operator compliance with program requirements, including inspection procedures;

state responsibilities for taking timely and appropriate enforcement action against persons in violation of program requirements, including use of effective enforcement tools such as penalties; and

state reporting requirements, including the type and frequency of data to be reported, and EPA’s annual evaluation of the state program.

According to EPA’s 1983 Program Oversight guidance, EPA-managed programs are also responsible for establishing systems to track well operator compliance; taking timely and appropriate action to resolve violations, including use of effective enforcement tools; and reporting data on the program.
EPA’s 1987 Strategy identifies injection well inspections, which discover and deter violations, and enforcement as tools to achieve operator compliance with applicable requirements. According to the Strategy, each state and EPA-managed program should have a strategy identifying how many wells it should inspect and the types of inspections to be conducted at these wells. The types of inspections that state and EPA inspectors conduct can vary from routine inspections that ensure that well sites are being properly maintained, to inspections that include observing pressure tests, known as mechanical integrity tests, to determine whether wells are structurally sound (see app. IV for information on types of inspections). The enforcement process begins once an inspector identifies a violation. Violations of UIC program requirements can involve a number of actions on the part of well operators, such as injecting fluids without authorization, injecting fluids at pressures above those permitted, or failing to show that a well holds pressure during testing (mechanical integrity testing). According to EPA’s Strategy, a state or EPA-managed program can take various enforcement actions when it finds wells that are violating program requirements. When inspectors identify wells that are violating applicable program requirements, they typically first notify the well operators of the violations. According to EPA’s Strategy, these notifications can be done through discussion or in writing. For more serious violations, state and EPA-managed programs can take stronger enforcement actions. According to EPA’s Strategy, state and EPA-managed programs are to escalate their enforcement response as needed to resolve violations, although the actions taken by a program may depend on a number of factors, including the severity of the violation and its potential to contaminate drinking water sources.
Actions to gain compliance with program requirements can include sanctions, such as shutting down a well, assessing administrative penalties, or referring the matter for civil or criminal adjudication (see app. V for details of the enforcement process). EPA’s regulations and 1986 Program Reporting guidance direct state and EPA-managed programs to report specific information on class II wells to assist with program oversight. The Program Reporting guidance directs these programs to report data on inspections, violations, and enforcement actions. Specifically, the agency collects information on a series of 7520 forms submitted by state and EPA-managed programs. According to the Program Reporting guidance, information on inspections conducted by state and EPA-managed programs is collected on 7520-3 forms and includes information on the total number of different types of inspections. According to this guidance, EPA also collects information on the number of significant violations and enforcement actions by state and EPA-managed programs on 7520-4 forms. Under the Safe Drinking Water Act, EPA is required to notify primacy states of any violations of state UIC programs it discovers and, if a state does not take appropriate enforcement action within 30 days, issue an order or initiate legal action itself. According to EPA guidance, the 7520-4 forms collect information on individual significant violations that threaten underground sources of drinking water to help EPA determine whether it should intervene to enforce state or EPA requirements. In June 2014, we found that the data on violations and contamination of underground sources of drinking water that EPA collects from its 7520 forms were not sufficiently complete or comparable to allow EPA to aggregate state information and report on the status of the class II program nationally.
We also found that EPA was developing a national UIC database to collect comparable, well-specific data from states, but that, as of January 2014, the database was not fully populated. We recommended in our June 2014 report that to support nationwide reporting goals until the national UIC database is complete, EPA develop and implement a protocol for states and regions to enter 7520 data consistently and for regions to check 7520 data for consistency and completeness to ensure that data collected from state and EPA-managed class II programs are complete and comparable for purposes of reporting at a national level. EPA agreed that there is room for improvement in the completeness and consistency of data submitted by the states and regions through the 7520 forms. In response to our recommendation, according to EPA officials, the agency has proposed updated 7520 instructions, intended to encourage consistent reporting by states and regions. The updated instructions have not been finalized and, according to EPA officials, cannot be used for reporting until they are approved by the Office of Management and Budget. EPA has also developed new standard operating procedures that update protocols for EPA regional review of 7520 reports submitted by state programs and headquarters review of 7520 reports submitted by EPA-managed programs. EPA’s regulations and 1983 Program Oversight guidance direct EPA headquarters and regions to conduct specific activities to ensure that the state and EPA-managed programs are protecting underground sources of drinking water. These activities include conducting annual on-site evaluations of state and EPA-managed programs. In addition, EPA regulations require the agency to incorporate state program requirements, and any changes to them, into federal regulations to be able to enforce them if necessary, and to approve aquifers for exemption from protection under the act, as appropriate, to allow injection of fluids. 
According to EPA’s Program Oversight guidance, EPA regional officials are expected to conduct annual on-site evaluations of state programs. These evaluations involve, among other things, an on-site meeting with state UIC officials to discuss program performance and can include a review of inspection and enforcement files, both of which are intended to help determine whether the state program is effective at protecting underground sources of drinking water. We found in June 2014, however, that EPA was not consistently carrying out annual on-site evaluations of state class II programs. According to EPA officials at the time, limited resources have prevented EPA regions and headquarters from consistently conducting on-site reviews, and some of the oversight activities identified in the Program Oversight guidance may no longer be needed. We recommended, and EPA agreed, that EPA should evaluate, and revise as needed, UIC program guidance on effective oversight to identify essential activities that EPA headquarters and regions need to conduct to effectively oversee state and EPA-managed programs. According to EPA regulations, EPA is also required to incorporate state program requirements and changes to those requirements into federal regulations. Under its regulations, EPA can only enforce state program requirements that it has incorporated into federal regulations. In June 2014, we found that EPA was not consistently incorporating state program requirements, or changes to state program requirements, into federal regulations, and as a result, EPA had not been able to enforce at least one state’s program requirements. 
To ensure that EPA maintains enforcement authority of state program requirements, we recommended that EPA conduct a rulemaking to incorporate state program requirements, and changes to state program requirements, into federal regulations and, at the same time, evaluate and consider alternative processes to more efficiently incorporate future changes to state program requirements into federal regulations without a rulemaking. EPA disagreed with this recommendation and said that in lieu of a single rulemaking, it was conducting an ongoing process of individual rulemakings to approve and codify state program revisions, as discussed later in the report. According to EPA regulations and EPA’s 2014 Aquifer Exemption Coordination memorandum, EPA is responsible for the final review and approval of all aquifer exemption requests. Well operators seeking an aquifer exemption to conduct injection activities in a state with primacy typically submit the exemption application to state program officials along with supporting information. State program officials are to review the application and, if the information submitted supports an exemption, submit a request to approve the exemption to the appropriate EPA regional office. Applicants in states with EPA-managed programs are to submit applications directly to the EPA region managing the program, and the region approves or disapproves the exemption applications. EPA regions are responsible for maintaining documentation supporting the decision to exempt an aquifer and a record of all exempted aquifers. According to the Aquifer Exemption Coordination memorandum, maintaining the decision memos and records underlying EPA’s approval or disapproval of exemption applications and standardized, readily available data on all existing aquifer exemptions is important to supporting informed decisions about uses for drinking water. 
Under the act, if EPA determines that a state program is no longer protecting underground sources of drinking water, the agency can revoke a state’s primacy by rule. According to EPA officials, before such a point is reached, the agency can work with the state to return the state’s program to compliance with EPA and state UIC class II regulations. For example, in July 2014, after California identified instances in which it had authorized injection into nonexempt aquifers, EPA determined that the state’s program was not in compliance with state and EPA requirements. In a series of letters from July 2014 through July 2015, EPA and the state’s Division of Oil, Gas, and Geothermal Resources reached agreement on a plan to improve California’s program. (See app. II for the details of the status of California’s program.) EPA has not collected inspection and enforcement information, or consistently conducted specific oversight activities, to assess whether state and EPA-managed programs are protecting underground sources of drinking water. EPA’s 1981 MOA guidance directs EPA regions to include provisions in their memorandums of agreement with states to ensure that regional offices can collect the information and conduct the activities necessary for oversight, including (1) collecting information on inspections and enforcement actions and (2) conducting activities to incorporate approved changes to state program regulations into federal regulations, conducting annual on-site program evaluations, and reviewing and approving aquifer exemption applications. EPA’s Program Oversight guidance also states that EPA headquarters should collect the same information and conduct the same activities to oversee programs managed by EPA regions where applicable. EPA has not collected inspection and enforcement information that can be used to assess whether state and EPA-managed programs are effectively protecting underground sources of drinking water.
EPA collects information from state and EPA-managed programs on the types of inspections they conduct, but the information EPA collects is at a summary level and not specific enough to assess whether states are meeting inspection goals established to protect underground sources of drinking water. In the 1987 Strategy, EPA provides guidance on the types of UIC inspections that state and EPA-managed programs should conduct and specifies minimum annual inspection goals (i.e., frequency of each inspection type) for state and EPA-managed programs. For example, (1) 100 percent of wells associated with emergency responses and public complaints should be inspected annually, (2) 25 percent of mechanical integrity tests conducted annually should be witnessed by an inspector, and (3) routine inspections to verify that wells are operating in compliance with applicable requirements should be conducted at least once every 5 years. According to the Strategy, state and EPA-managed programs should set goals for different types of inspections based on factors such as available resources and program priorities (see app. IV for additional information on EPA guidance on inspections and selected state inspection programs). EPA’s 1986 Program Reporting guidance states that the inspection data that EPA collects from state and EPA-managed programs should be used to track each program’s progress toward meeting its inspection goals, which are to be based on EPA’s minimum annual inspection goals. EPA’s minimum annual inspection goals are specified at the well level (e.g., 100 percent of wells associated with emergency responses). However, state and EPA-managed programs report annual summary data on the number of inspections conducted for each inspection type by state and not data on which wells were inspected, when they were inspected, the types of inspections conducted at each well, and the results of those inspections.
For example, the summary data EPA collects on routine inspections, as shown in table 1, could not be used to determine if a state or EPA-managed program had conducted a routine inspection of each of its class II wells over a 5-year period or multiple inspections of individual wells. For the seven state and EPA-managed programs we reviewed, annual data reported to EPA included the total number of wells inspected and types of inspections conducted statewide, as shown in table 1 for fiscal year 2013. Because the inspection data that EPA has collected from states have not been well-specific and therefore have not included the total number of inspections by type that could have been done, EPA’s ability to track each state program’s progress toward meeting its inspection goals is limited. Under federal standards for internal control, managers need to compare actual performance to planned or expected results and analyze significant differences. EPA officials told us that they recognize that they cannot verify progress toward meeting state program inspection goals without well-specific data on inspections and have made efforts to collect well-specific data through voluntary programs, but do not require its collection. Starting in 2007, EPA had been working to develop a voluntary national UIC database to provide well-specific data from state and EPA-managed programs; however, according to EPA officials in December 2015, Montana was the only participating state program, and the agency plans to complete the national database with Montana and the seven EPA-managed programs currently participating. EPA officials said that they do not have well-specific information because they do not require it and most state programs have not provided it voluntarily through the national UIC database. However, EPA’s MOA guidance says that EPA may request and should be given access to all files necessary for evaluating the administration of the state program.
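The limitation described above can be illustrated with a short sketch. The records, field layout, and well identifiers below are hypothetical, not drawn from EPA's actual data systems; the point is that checking a well-level goal, such as a routine inspection of every well within a 5-year window, requires well-specific records and cannot be done from statewide totals alone:

```python
from datetime import date

# Hypothetical well-specific inspection records: (well_id, inspection_type, date).
# A statewide summary would report only "2 routine inspections" for this data.
records = [
    ("W-001", "routine", date(2011, 6, 1)),
    ("W-001", "routine", date(2013, 3, 15)),   # same well inspected twice
    ("W-002", "mechanical_integrity", date(2012, 9, 30)),
]
all_wells = {"W-001", "W-002", "W-003"}

def wells_missing_routine_inspection(records, all_wells, window_start, window_end):
    """Return the wells with no routine inspection during the window."""
    inspected = {
        well for well, kind, when in records
        if kind == "routine" and window_start <= when <= window_end
    }
    return all_wells - inspected

missed = wells_missing_routine_inspection(
    records, all_wells, date(2009, 10, 1), date(2014, 9, 30)
)
print(sorted(missed))  # ['W-002', 'W-003']
```

In this illustration, the statewide total of two routine inspections is consistent with full coverage, yet the well-specific records show that both inspections fell on a single well, leaving two wells without a routine inspection in the window.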
Until EPA requires and collects well-specific data on inspections from state and EPA-managed programs, including the types of inspections conducted at each well, when the inspections were conducted, and the results of the inspections, the agency cannot assess whether the programs are meeting their annual inspection goals to protect underground sources of drinking water. EPA officials said that EPA will also have access to another voluntary database being compiled by the Department of Energy that contains additional data from state programs on injection wells. According to the officials, however, the department’s database does not provide well-specific information on inspections either. EPA has not collected consistent or complete enforcement information that can be used to assess whether state and EPA-managed programs are effectively protecting underground sources of drinking water. To carry out the Safe Drinking Water Act’s provision that EPA take action on violations that have not been enforced, EPA’s 1987 Strategy directs state and EPA-managed programs to take timely and appropriate enforcement action against significant violations of state or EPA requirements. The Strategy defines a timely and appropriate response taken by a state or EPA-managed program as resolving the violation or initiating a formal enforcement action within 90 days of the identification of the violation. To help ensure that violations are addressed in a timely and appropriate way, EPA’s 1987 Strategy and 1986 Program Reporting guidance call for state and EPA-managed programs to report information to EPA on significant violations that were not resolved within 90 days of discovery and also did not have a formal enforcement action taken against the well operator. The act requires EPA to enforce state program requirements within 30 days after the agency becomes aware that the state has not taken appropriate enforcement action.
However, our review of data collected by EPA on significant violations demonstrated that EPA’s ability to take action may be limited by incomplete and inconsistent enforcement data reported by state and EPA-managed programs. Specifically, our analysis of 93 significant violations for fiscal years 2008 through 2013 for the seven state and EPA-managed programs we reviewed found that 29 were not resolved within 90 days of operator notification and had not had formal action taken within that time. According to the Strategy, each of these violations should have been reported on the 7520-4 form by the state to the appropriate EPA region or by the EPA-managed program to EPA headquarters. However, our analysis of the 7520-4 form data showed that state and EPA-managed programs reported only 7 of these 29 violations to the agency. Table 2 shows the results of our analysis of the 7520-4 forms (see app. V for additional information on our analysis and app. VI for the full list of violations and enforcement actions taken). According to EPA headquarters, regional, and state officials we interviewed, state and EPA-managed programs used different interpretations of the Strategy and Program Reporting guidance to fill out the forms, resulting in incomplete, and potentially inconsistent, information across the programs. EPA headquarters officials told us that all significant violations that were not resolved within 90 days from the date the violation was discovered should be reported on the 7520-4 form quarterly until they are resolved, regardless of whether the program had already initiated enforcement action against the well operator. However, EPA’s Strategy and Program Reporting guidance call for programs to report, on the 7520-4 form, information on significant violations that (1) were not resolved within 90 days from the date the violation was discovered and (2) had not had a formal enforcement action taken against the well operator.
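The gap between these two readings can be made concrete with a short sketch. The violation records and field names below are purely illustrative, not EPA data: one function applies the two-pronged test in the Strategy and Program Reporting guidance, and the other applies the broader reading described by EPA headquarters officials.

```python
from datetime import date, timedelta

# Hypothetical significant-violation records (illustrative field names).
violations = [
    {"id": "V-1", "discovered": date(2013, 1, 10),
     "resolved": date(2013, 2, 1), "formal_action": None},     # resolved quickly
    {"id": "V-2", "discovered": date(2013, 1, 10),
     "resolved": None, "formal_action": date(2013, 3, 1)},     # action within 90 days
    {"id": "V-3", "discovered": date(2013, 1, 10),
     "resolved": None, "formal_action": None},                 # unresolved, no action
]

def reportable_per_guidance(violations, as_of):
    """Strategy/Program Reporting reading: unresolved past 90 days AND no
    formal enforcement action taken within that time."""
    return [
        v["id"] for v in violations
        if v["resolved"] is None
        and as_of - v["discovered"] > timedelta(days=90)
        and (v["formal_action"] is None
             or v["formal_action"] - v["discovered"] > timedelta(days=90))
    ]

def reportable_all_unresolved(violations, as_of):
    """Broader reading: all significant violations unresolved past 90 days,
    regardless of any formal enforcement action."""
    return [
        v["id"] for v in violations
        if v["resolved"] is None
        and as_of - v["discovered"] > timedelta(days=90)
    ]

as_of = date(2013, 6, 30)
print(reportable_per_guidance(violations, as_of))    # ['V-3']
print(reportable_all_unresolved(violations, as_of))  # ['V-2', 'V-3']
```

A program following the first reading would report only V-3 on its 7520-4 form, while a program following the second would also report V-2, producing exactly the kind of inconsistency across programs that the differing interpretations would yield.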
In addition, according to the Program Reporting guidance, significant violations reported on the 7520-4 form should continue to be reported quarterly on subsequent 7520-4 forms until they are resolved. State and EPA officials we interviewed provided different interpretations of what they were to put on the 7520-4 form, which would result in some programs reporting significant violations and some not. Consistent with EPA’s Strategy and Program Reporting guidance, officials we interviewed from Ohio and EPA Region 4 (Kentucky) told us that they only report significant violations on the 7520-4 form that were not resolved within 90 days and for which a formal enforcement action had not been taken against the well operator. However, officials we interviewed from North Dakota, Oklahoma, Texas, and EPA Region 3 (Pennsylvania) told us that they report all unresolved significant violations regardless of whether they have taken a formal enforcement action. According to the officials, the information that ultimately gets reported on the 7520-4 form is based on a quarterly calculation of how long the well has been out of compliance; however, according to the officials, EPA only requires state and EPA-managed programs to submit 7520-4 forms to EPA semiannually. In addition, officials in North Dakota and Oklahoma told us that they only report significant violations once and not in subsequent quarters, even if the violations have not been resolved. EPA headquarters officials told us they are aware that the information reported by states and EPA regions is not complete or consistent, but they have not clarified, in guidance or otherwise, what information should be reported. EPA headquarters officials told us that regions are responsible for ensuring that state and EPA-managed programs take timely and appropriate enforcement actions, and that regions generally assess the programs’ enforcement response on a case-by-case basis through informal communications with state program staff.
The information received on the 7520-4 form, however, is the only documented information reported to EPA regions and headquarters on individual violations that may not have been enforced in a timely or appropriate manner. Until it clarifies guidance on what data should be reported on the 7520-4 form, EPA does not have reasonable assurance that state and EPA-managed programs report complete and consistent information on unresolved significant violations or that it has the information it needs to assess whether it must take enforcement action, as directed under the act, to protect underground sources of drinking water. EPA has not consistently conducted three oversight activities necessary to assess whether state and EPA-managed programs are protecting underground sources of drinking water, as required by regulations and specified in guidance: (1) incorporation of state program requirements, or changes to state program requirements, into federal regulations; (2) the final review and recordkeeping for all aquifer exemption applications it approves; and (3) annual on-site program evaluations. We found in June 2014 that EPA had not consistently incorporated state program requirements, or changes to state program requirements, into federal regulations, as required by agency regulations. Specifically, if a state does not enforce a requirement against an injection well operator violating state regulations, EPA can take enforcement action if EPA has approved the state regulations being violated and incorporated them into federal regulations, and has met specific procedural requirements. EPA regulations and guidance establish a process for EPA and its regions to review and approve state programs, as well as changes to state programs. Under its regulations, EPA can only enforce state program requirements that it has incorporated into federal regulations through a rulemaking process. Where it has not done so, EPA is not able to enforce state program requirements if needed. 
In June 2014, we found that EPA had not yet incorporated changes to some state program requirements into federal regulations and therefore did not have the ability to enforce these state program requirements if necessary. We concluded that until it conducts a rulemaking to incorporate the backlog of state program requirements and changes to state program requirements that have been approved, EPA would not be able to enforce some state program requirements, hindering its ability to protect underground sources of drinking water. To ensure that EPA maintained enforcement authority of state program requirements, we recommended that EPA conduct a rulemaking to incorporate state program requirements, and changes to state program requirements, into federal regulations. We also recommended that at the same time, EPA evaluate and consider alternative processes to more efficiently incorporate future changes to state program requirements into federal regulations without a rulemaking. In comments responding to our June 2014 report, EPA disagreed with our recommendation to conduct a rulemaking and said that a single rulemaking would be impractical because the process would take many years to complete and would still not ensure that all program changes were incorporated into federal regulations, as other states could make changes to their programs during this time. In lieu of a single rulemaking, EPA said in its comments that it was conducting an ongoing process of individual rulemakings to approve and codify state program revisions in collaboration with states, EPA regions, and EPA’s Office of Enforcement and Compliance Assurance. However, as stated in our June 2014 report, according to an analysis conducted by EPA in 2010, EPA estimated that it would take 2 to 3 years, dedicated EPA personnel, and $150,000 in outside contractor support to identify, approve, and conduct a single rulemaking to incorporate all state program changes made since 1991 into federal regulations. 
By EPA’s own estimate, the targeted state-by-state approach will take much longer than a single rulemaking and will face greater challenges with states continuing to make changes in the interim, leaving EPA without the ability to enforce state programs to protect underground sources of drinking water if needed. EPA provided no evidence in its comments that individual rulemakings would be any less costly or any more efficient than the approach it assessed in 2010. As of December 2015, EPA has not taken action to incorporate state program requirements, or changes to state program requirements, into federal regulations.

EPA is also responsible for the final review, approval, and recordkeeping for all aquifer exemption applications, but the agency does not have the location or supporting documentation necessary to identify the size and location of all aquifers for which it has approved exemptions from protection under the act. According to EPA’s 2014 Aquifer Exemption Coordination guidance, EPA regions need to have complete records documenting support for EPA’s approval or disapproval of exemption applications to inform decision making by state and EPA-managed programs on injection well permits. According to EPA officials, regional offices generally maintain the most comprehensive and up-to-date data on aquifer exemptions. Since 2003, EPA has worked to compile comprehensive information on aquifer exemptions, including data on the aquifers’ sizes and locations. In 2011, EPA determined that its headquarters did not have information on all exempted aquifers and requested that EPA regional offices provide information on all aquifers exempted in their respective regions to help compile a centralized database. According to EPA officials, the agency has compiled a rudimentary database from regional datasets, paper files supporting aquifer exemption decisions, and hard copies of maps specifying the size and location of exempted aquifers.
However, EPA officials said that the database of aquifer exemptions does not include complete information on each exemption listed and that EPA does not have a complete inventory of exemptions. In particular, according to EPA officials, the agency is missing information on exemption decisions made when state programs were granted primacy in the 1980s because the supporting documentation is not readily accessible or was damaged while in storage. If EPA had maintained an updated database on aquifer exemptions, then EPA Region 9 may have had the information it needed to review injection well permits to determine whether injections were being made into exempted aquifers in California. Instead, California discovered that it had authorized injection into nonexempt aquifers. Specifically, EPA requested additional information on aquifer exemptions from California in 2012 as a part of EPA’s review of historical data on aquifer exemptions nationwide. At that time, the state reviewed supporting documentation for the aquifer exemptions and the associated injection wells and determined that it had permitted operators to inject into nonexempt aquifers that the state believed were exempted in the 1980s, when EPA granted primacy to California to manage the class II program. In July 2014, after identifying water supply wells in the vicinity of some of these injection wells and informing EPA Region 9, California ordered operators of those injection wells to cease injection into certain nonexempt aquifers, and to submit data to California so the threat to underground sources of drinking water and human health could be assessed. In July 2014, as a result of this issue, EPA determined that California’s program was not in compliance with state and EPA requirements and supported California’s plan to review injection wells that were permitted to inject into nonexempt aquifers. 
As of October 2015, California had identified over 500 wells injecting into 11 nonexempt aquifers with the potential to threaten underground sources of drinking water, and 23 of those wells had been shut-in, or ceased injecting fluids. In November 2015, California shut-in an additional 33 injection wells injecting into nonexempt aquifers. As of October 2015, California officials said that they are continuing to collect information on wells injecting into nonexempt aquifers to determine if additional wells should be shut-in to protect underground sources of drinking water and are working with EPA Region 9 to collect additional information on aquifer exemptions to help complete EPA’s database.

As of December 2015, EPA officials told us that the majority of aquifers in its database of approved exemptions have complete size and location data and that headquarters continues to collect information from the regions and state programs to fill in the remaining data gaps and ensure that the database is complete and accurate. The officials told us that for this reason, it is unlikely that they will discover deficiencies in recordkeeping for approved aquifer exemptions similar to those identified in California. However, while EPA officials believe that they have the majority of the data on aquifer exemptions, the database does not include some historical data on exemption decisions made when state programs were granted primacy in the 1980s. In addition, the database only has aquifer exemption data through 2011 and is missing data on aquifer exemptions approved over the past 4 years. According to EPA officials, the database is a headquarters-based spreadsheet, and updates with new approvals on aquifer exemptions will need to be collected from EPA regions and entered manually. The officials also said that EPA will complete the database using 2011 data and only plans to add updated data if sufficient resources are available.
Until it has a complete aquifer exemption database and a way to update it periodically, EPA does not have sufficient information on aquifer exemptions to oversee state and EPA-managed programs and assess whether programs are protecting underground sources of drinking water.

As we reported in June 2014, EPA has not consistently conducted annual on-site program evaluations, as directed by its 1983 Program Oversight guidance. This guidance directs EPA regions and headquarters to conduct annual on-site program evaluations of state and EPA-managed programs, which it characterizes as a key activity necessary for effective oversight, and to ensure that state and EPA-managed class II programs protect underground sources of drinking water. According to EPA’s Program Oversight guidance, EPA regions should perform at least one on-site evaluation of each state program each year to assess whether the state is managing the program consistent with state regulations, setting program objectives consistent with national and regional program priorities, and implementing recommendations from previous evaluations, among other activities. According to the Program Oversight guidance, annual on-site evaluations of state programs should also include a review of permitting and inspection files or activities to assess whether the state program is protecting underground sources of drinking water. In particular, because permitting files should include information on the well location and on the geology and aquifers in the area surrounding the injection well, a review of permitting files should cover this information. EPA headquarters is responsible for conducting similar on-site program evaluations of EPA-managed programs. In our June 2014 report, regional officials said that on-site program evaluations are valuable for coordinating between federal and state officials to improve program management.
According to EPA officials at the time, however, limited resources have prevented regions, and EPA headquarters, from consistently conducting on-site program evaluations. To ensure effective oversight of the class II program, in June 2014, we recommended, and EPA agreed, that EPA evaluate and revise, as needed, UIC program guidance on effective oversight to identify essential activities that EPA headquarters and regions need to conduct to effectively oversee state and EPA-managed programs to ensure that they were effective at protecting underground sources of drinking water. If EPA had conducted oversight activities, such as annual on-site program evaluations, EPA Region 9 may have discovered that California’s class II program did not comply with state and EPA requirements before 2014. In particular, regular on-site program evaluations that included reviews of permitting files may have identified the deficiencies in California’s program. Specifically, reviews of well permitting files, including well location and information on aquifers surrounding the well, may have helped identify injections into nonexempt aquifers when compared to complete records on aquifer exemptions. However, according to EPA Region 9 officials, they have not conducted annual on-site evaluations of California’s program. In 2011, regional officials requested a third-party audit of California’s program, which was the first comprehensive review of California’s program since primacy was granted in 1983. The audit found several program deficiencies, including inadequate inspection and enforcement practices and insufficient staff to adequately manage and implement the program, but Region 9 did not have complete information on approved aquifer exemptions in California and did not conduct a review of permitting files and aquifers in the area surrounding injection wells to identify wells that California had authorized to inject into nonexempt aquifers. 
According to EPA officials, in response to the recommendation from our June 2014 report for EPA to update its guidance on effective oversight, EPA headquarters and regional officials have held preliminary discussions to determine what oversight activities are necessary to ensure that state and EPA-managed programs are effective at protecting underground sources of drinking water, including on-site evaluations of state and EPA-managed programs. Concerning why annual on-site reviews had not been consistently conducted, EPA headquarters and regional officials said that they have few resources to oversee state and EPA-managed programs, and regional officials told us that available resources are directed toward the class II programs they manage directly and not oversight of state programs. EPA headquarters officials we interviewed said that they have an effective oversight program and conduct necessary activities with the resources available. The same officials said they do not have the resources, including the workforce, necessary to consistently conduct the oversight activities to help assess whether state and EPA-managed programs are complying with applicable requirements. According to a key workforce planning principle from our body of work on strategic human capital management, an agency should determine the critical skills and competencies that will be needed to achieve current and future programmatic results, particularly given factors that change the environment within which agencies work, such as budget constraints. Our body of work on strategic human capital management indicates that each agency needs to ask if it has an explicit workforce planning strategy linked to the agency’s strategic and program planning efforts to identify its current and future human capital needs, including the size of the workforce; its deployment across the organization; and the knowledge, skills, and abilities needed for the agency to pursue its shared vision. 
In November 2015, EPA officials said that the agency had not conducted a comprehensive workforce analysis to identify the resources necessary, including human capital resources, to oversee state and EPA-managed programs, and that the agency had not requested additional resources for oversight. Without conducting such an analysis, EPA will not be able to identify the human capital and other resources it needs to carry out its oversight of state and EPA-managed programs and help ensure that they are effective at protecting underground sources of drinking water.

EPA established the UIC class II program in the 1980s, with a vigorous role for the agency to oversee state and EPA-managed programs to prevent contamination of underground sources of drinking water. However, the findings in our June 2014 report, our findings on inspection and enforcement information and oversight activities in this report, and the recent decision that California’s program was not complying with state and EPA requirements illustrate that EPA does not have the information, or consistently conduct the oversight activities, needed to assess state and EPA-managed class II programs to help ensure that they protect underground sources of drinking water. Specifically, the data EPA requires and collects from state and EPA-managed programs do not include well-specific information on inspections conducted by those programs needed to track each program’s progress toward meeting its annual inspection goals, as called for in EPA’s Program Reporting guidance. Until EPA requires and collects well-specific data on inspections from state and EPA-managed programs, including when wells were inspected, the types of inspections conducted at each well, and the results of those inspections, the agency does not have the well-specific information to assess whether the programs are meeting annual inspection goals to protect underground sources of drinking water.
To assess whether state and EPA-managed programs are effectively protecting underground sources of drinking water when permitting fluids to be injected into aquifers, EPA needs complete, updated information on approved aquifer exemptions. Yet EPA does not have a complete, up-to-date database on aquifer exemptions for all state and EPA-managed programs, or a way to keep the database containing information on aquifer exemptions updated. Until it has a complete aquifer exemption database and a way to update it, EPA does not have sufficient information on aquifer exemptions to oversee state and EPA-managed programs and assess whether programs are effectively protecting underground sources of drinking water.

Moreover, under the Safe Drinking Water Act, EPA must enforce state program requirements if they have not been enforced by the state in a timely and appropriate fashion. However, because of inconsistent interpretations of reporting guidance, state and EPA-managed programs report inconsistent and incomplete information on individual significant violations that have not been resolved, and therefore EPA regions and headquarters cannot know about, let alone take enforcement action against, operators committing significant violations. Until it clarifies guidance on what data should be reported on the 7520-4 form, EPA does not have reasonable assurance that state and EPA-managed programs report complete and consistent information on unresolved significant violations or that it has the information needed to assess whether it must take enforcement action, as required under the act, to protect underground sources of drinking water.
Finally, although EPA headquarters officials said they do not have the resources necessary to conduct the oversight activities needed to assess whether state and EPA-managed programs comply with applicable requirements, the agency has not conducted a workforce analysis to identify the resources, including human capital resources, the agency needs to oversee state and EPA-managed programs. Without conducting such an analysis, EPA will not be able to identify the human capital and other resources it needs to oversee state and EPA-managed programs and help ensure that they are effective at protecting underground sources of drinking water.

To help ensure protection of underground drinking water from the injection of wastewater associated with domestic oil and gas production, we recommend that the Administrator of the Environmental Protection Agency take the following four actions:

Require and collect well-specific data on inspections from state and EPA-managed programs, including when the wells were inspected, the types of inspections conducted, and the results of the inspections, in order to track progress toward state and EPA-managed annual inspection goals.

Complete the aquifer exemption database and establish a way to update it to provide EPA headquarters and regions with sufficient information on aquifer exemptions to oversee state and EPA-managed programs.

Clarify guidance on what data should be reported on the 7520-4 form to help ensure that the data collected are complete and consistent across state and EPA-managed programs and to provide the information EPA needs to assess whether it must take enforcement actions.

Conduct a workforce analysis to identify the human capital and other resources EPA needs to carry out its oversight of state and EPA-managed programs.

We provided the Administrator of EPA with a draft of this report for review and comment. In written comments provided by EPA (reproduced in app.
VII), EPA generally agreed with our analysis and findings on the class II program and described planned actions, but disagreed with some findings and recommended actions, as discussed below. EPA also provided technical comments that we incorporated in the report, as appropriate. In addition, we provided the draft report to the six states whose programs we reviewed. Officials from these states—California, Colorado, North Dakota, Ohio, Oklahoma, and Texas—provided technical comments, which we incorporated as appropriate.

In response to our first recommendation that EPA require and collect well-specific data on inspections from state and EPA-managed programs to track progress toward state and EPA-managed annual inspection goals, EPA stated that the agency’s goal is to obtain high-quality data to understand program activities at the well-specific level, but that it did not make sense to require the states to submit well-specific data now. EPA said that it is mindful of the need to think carefully about requiring information from states, and it will continue to work with its state partners to improve both the collection and the quality of the data currently required and to expand EPA’s access to additional state data. Specifically, EPA stated that it had taken steps to address the gaps in the summary data collected on 7520 forms identified in GAO’s June 2014 report, including developing standard operating procedures for submission and review of the data forms, and revising instructions to increase consistency in reporting the data to EPA. EPA said that it plans to continue to increase the inventory of well-specific data in the national UIC database, including states that were working toward e-reporting status, and that EPA welcomes and encourages further participation. Further, EPA stated that it will continue to work with the Department of Energy and other stakeholders as they develop a database with well-specific state inventory data.
We recognize EPA’s efforts to improve the consistency and completeness of summary data collected on the 7520 forms, and to collect additional well-specific data through voluntary programs such as the national UIC database and the Department of Energy’s database, but EPA has made little progress since 2007 collecting well-specific inspections data from state programs voluntarily. As we stated in the report, EPA needs access to well-specific inspections data from all programs to track the progress of state and EPA-managed programs toward meeting their inspection goals. If EPA believes that well-specific data are important, it should require that state and EPA-managed programs report well-specific data on inspections.

In response to our second recommendation that EPA complete the aquifer exemption database and establish a way to update it to provide EPA headquarters and regions with sufficient information on aquifer exemptions to oversee state and EPA-managed programs, EPA disagreed with our assessment that the agency is deficient in its duties to maintain aquifer exemption records but said it is taking action to complete the database and to update it. Specifically, EPA stated that the draft report presents incomplete information as to which materials are held at the EPA headquarters and regional levels, and the roles and objectives that EPA headquarters and regions play regarding aquifer exemptions and the use of data. EPA said that our statement that the agency does not have sufficient information to oversee state and EPA-managed programs is incorrect because its regions have the most comprehensive and current data on aquifer exemptions as they conduct the final review of exemption requests and must approve all exemptions.
According to EPA, it initiated the effort to collect data from the regional offices to better understand the number, locations, and nature and quality of aquifers exempted by the UIC program and expects to release a public data set by the end of 2016, which will include data current through 2015 with the exception of Region 9’s data for the State of California. EPA stated that it anticipates adding Region 9’s aquifer exemption data for California as the region works with the state to clarify the boundaries of the agency’s historic approvals and takes action on the state’s requests for new exemptions. Further, EPA said it plans to update the data set annually and that the regions will continue to hold the most current data.

We commend EPA’s efforts to develop an up-to-date data set of aquifer exemptions and note that the updated information is important for overseeing whether the regions have current information on aquifer exemptions. As shown in the situation in Region 9 with California, at least one region did not have current or comprehensive information on aquifer exemptions. Further, EPA has been working since 2003 to compile comprehensive information on aquifer exemptions from regions, and, according to EPA officials, does not have a complete inventory of exemptions. In light of the situation in Region 9, until EPA has a complete aquifer exemption database and a way to update it, we continue to believe that it does not have sufficient information on aquifer exemptions to oversee state and EPA-managed programs and assess whether programs are protecting underground sources of drinking water.
In response to our third recommendation that EPA clarify guidance on what data should be reported on the 7520-4 form to help ensure that the data collected are complete and consistent across state and EPA-managed programs and to provide the information EPA needs to assess whether to take enforcement action, EPA agreed that the continued improvement in collection and consistency of data via the 7520-4 form would be valuable for more effective oversight. Specifically, EPA stated that the form is a tool for obtaining important information used in assessing enforcement activities and that providing guidance on the 7520-4 form could be valuable to improve the quality of information the agency receives. EPA also said that the 7520 standard operating procedures that it created in response to our June 2014 report remind reviewers that wells with significant violations for two or more quarters should remain listed on the 7520-4 until the issue is resolved. In addition, EPA said that it will provide further materials to UIC data submitters to improve completeness and consistency of the data that programs report on the 7520-4 form within 6 months of this final report. As these standard operating procedures have not yet been finalized, we have not assessed them to determine whether they meet the intent of our recommendation.

In response to our fourth recommendation that EPA should conduct a workforce analysis to identify the resources it needs to conduct effective program oversight, EPA agreed that oversight is an important aspect of ensuring an effective UIC program, but stated that a workforce analysis was not necessary to better assess the resources needed to oversee the implementation of the UIC class II program. EPA stated that it is working with program managers to evaluate the effectiveness of EPA’s oversight activities in response to our June 2014 report, and would expand the evaluation to include elements of inspection and enforcement activities if necessary.
Upon completion of its evaluation, EPA said that it would look to improve the effectiveness of state and EPA oversight of the UIC programs, if needed. EPA may, for example, pilot a project to explore the potential to ensure program implementation by use of remote approaches, such as data collection, data analysis, targeting and priority ranking, and public transparency, as a viable option for increased oversight. While we recognize EPA’s commitment to assess whether it should expand its evaluation of oversight activities to include inspections and enforcement, we still believe it is critical for EPA to identify the resources necessary, including human capital resources, to oversee state and EPA-managed programs and that without doing so, EPA may not have reasonable assurance that it can effectively collect information or conduct activities to ensure protection of underground sources of drinking water.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Administrator of the Environmental Protection Agency, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VIII.

Table 3 provides a list of programs managed by the Environmental Protection Agency (EPA), state programs with safeguards deemed effective by EPA, state programs that have adopted minimum federal underground injection control requirements, and the number of class II wells in each state in 2014.
According to a 2015 letter from California to Environmental Protection Agency (EPA) Region 9, California is the nation’s third-largest oil-producing state, producing 575,000 barrels per day, and the state’s oil and gas industry earns $34 billion annually. Injection wells have been used in the state for more than 50 years. According to a 2015 report, currently over 50,000 injection wells are operating in California, with about 75 percent of the state’s production coming from enhanced oil recovery methods using underground injection wells. California’s class II underground injection control (UIC) program is managed by the Division of Oil, Gas, and Geothermal Resources (Division) and is divided across the Division’s six district offices. The majority of class II underground injection activity occurs in District 1 (Cypress) and District 4 (Bakersfield).

In July 2014, EPA Region 9 determined that the UIC class II program managed by the Division did not comply with state and EPA requirements. In a series of letters from July 2014 through July 2015, EPA Region 9 and the Division reached agreement on a plan to improve California’s UIC class II program. Below is a summary of the deficiencies identified in California’s UIC class II program and the plans California and EPA Region 9 agreed on to resolve these deficiencies, including actions taken by EPA and California before and after the determination of noncompliance in 2014.

In 2011, EPA requested a third-party audit of the state’s UIC class II program. The audit made recommendations to improve California’s class II program, including recommendations regarding the program’s definition of underground sources of drinking water, area of review calculations, well construction practices, inspection and enforcement practices, and staff qualifications. In November 2012, the Division developed an action plan to address each of the recommendations from EPA’s audit.
To address a number of recommendations necessitating regulatory updates, the Division committed to update its class II program regulations beginning in 2013. In response to an EPA inquiry initiated in 2012, California reviewed program records to ensure that injection wells the state authorized aligned with EPA-approved aquifer exemptions. In doing so, in 2014 the Division discovered that it had authorized operators to inject class II wastewater into 11 nonexempt aquifers in the vicinity of water supply wells, and EPA determined that the program was not in compliance with state and EPA requirements.

In October 2015, the Division issued the first report from its Monitoring and Compliance Unit, which was created in 2011. The report identified a number of program deficiencies, including insufficient staffing to address increasing regulatory workload and significant remedial programmatic work; poor recordkeeping on mostly paper forms and a lack of modern data tools and systems; outdated regulations that in some cases do not address the modern oil and gas extraction environment; inconsistent and understaffed program leadership; insufficient breadth and depth of technical talent; insufficient coordination among district and state offices; and lack of consistent, regular, high-quality technical training.

Division officials also identified deficiencies with the enforcement of class II requirements. Division officials said that the state office receives violation information from districts and is responsible for pursuing enforcement actions against operators and collecting penalties assessed by the Division. However, according to Division officials, California has historically had difficulties enforcing regulations for both production and class II wells in the state. In particular, the Division identified many examples of enforcement actions that were not pursued and wells that were not being returned to compliance in a timely manner.
For example, in 2010, the Division hired a contractor to review its accounts receivable to identify outstanding penalties that the Division had not collected. According to Division officials, there were over $5 million in unpaid penalties that the Division had assessed but did not collect. In September 2015, according to Division officials, the Division hired a deputy supervisor to start tracking enforcement of state requirements and to lead the development of new business processes to improve violation tracking and enforcement.

Since July 2014, the Division, California’s State Water Resources Control Board (Board), and EPA have been working together to systematically address a number of important deficiencies in the UIC program, including permitting injection into nonexempt aquifers. In letters between California (the Division and the Board) and EPA, the three-agency group agreed to a plan for the Division to shut down wells permitted to inject into nonexempt aquifers and improve and modernize its UIC practices. Specifically, the plan consists of four major components to be completed concurrently:

New regulations and program revisions. The Division determined that many state regulations that govern underground injection control are obsolete, deficient, or unable to address current industry practice. According to agency documents, the Division plans to undertake a series of rulemakings to improve California’s regulatory framework to address these issues, including isolation of injected fluids, quality of water to be protected, well construction practices, cyclic steam operations, project review, and idle well standards and testing. In July 2015, the Division stated that it planned to update its class II regulations in two phases, with the first phase starting with the informal circulation of draft regulations in the fall of 2015 and the second phase beginning in 2016.

Well review and aquifer exemptions.
The Division and the Board have been systematically reviewing injection wells that may have been permitted to inject into nonexempt aquifers, and the Division has proposed a schedule for reviewing and ceasing injection into these aquifers. As of October 2015, the Division had shut down 23 wells injecting wastewater into underground sources of drinking water that may have posed an immediate risk to waters of beneficial use. Through 2017, according to agency documents, the Division will review additional injection wells to determine whether they should be shut down or continue operating. The Division is collecting information from operators interested in pursuing exemptions and will review each exemption application to determine whether the exemption criteria have been sufficiently met. If the Division approves an aquifer exemption, it will forward it to EPA for review and approval or disapproval; EPA has final authority to declare an aquifer exempt. The Division has issued regulations to ensure that injection activity ends by specified deadlines unless aquifer exemptions are approved.

Project-by-project review of injection project approvals. The Division plans to conduct individual project reviews designed to find missing data, identify UIC compliance issues, and compare existing project approvals with current conditions in the field. Operators will be required to provide missing data, and the Division will reevaluate each project based on all relevant regulations, mandates, and policies, including demonstration of zonal isolation of injected fluids. Projects will be reapproved, modified, or canceled as appropriate. The Division plans to conduct separate reviews in each Division district and to complete the review by October 2018.

Development of a modern well and data management system.
The Division is updating its data management systems for production and injection wells to improve regulatory compliance and effectiveness, transparency, and support of all stakeholders.

Finishing every component of the UIC improvement plan submitted to EPA could take 3 to 4 years. However, according to state documents, improvements in the Division’s mission performance will follow as each piece is completed. According to state documents, the changes will be supported by the development of training programs to support internal review and adjustment, continuously improving the Division’s execution of its responsibilities.

This report examines the Environmental Protection Agency’s (EPA) Underground Injection Control (UIC) class II program to determine the extent to which EPA has collected the inspection and enforcement information needed, and conducted the oversight activities necessary, to assess that state and EPA-managed programs are protecting underground sources of drinking water. To perform this work, we reviewed and analyzed the Safe Drinking Water Act and EPA regulations and guidance applicable to the UIC class II program. We also interviewed EPA UIC program officials in the eight regional offices with class II wells.

To understand the class II program at the state level, we interviewed state officials and reviewed state program documentation for the same sample of states from our June 2014 report on the UIC program. Specifically, we selected a nongeneralizable sample of eight states with class II programs. The programs in two of these states—Kentucky and Pennsylvania—are managed by EPA regions, and the remaining six—California, Colorado, North Dakota, Ohio, Oklahoma, and Texas—are managed under provisions of the act that allow the states to have primary responsibility for the program. We selected these states from the six shale oil and gas regions defined by the Energy Information Administration.
For each of the six shale regions, we selected at least one state that had among the highest numbers of class II injection wells. In July 2014, after we issued our June 2014 report and before we started the work on this review, EPA determined that one of the programs in the eight states we reviewed, California’s class II program, was not in compliance with state or EPA requirements. EPA Region 9 officials and California’s UIC program officials have since agreed to a plan to improve the California program over the next several years. We interviewed EPA headquarters, EPA Region 9, and California officials regarding the deficiencies in California’s program, the agreed-upon improvement plan, and EPA oversight of California’s progress. A summary of the deficiencies found by EPA and California, and of California’s plans to improve its program, can be found in appendix II. Because of the deficiencies in California’s program, we chose not to include California in our detailed analysis of inspection and enforcement information from the states; thus, the results of our review of inspection and enforcement reflect the seven states remaining in our sample. Because the sample is nongeneralizable, our results cannot be generalized to other states, but they provide detailed examples of EPA’s and states’ management of class II programs.

To analyze whether EPA collects the information it needs to assess whether state and EPA-managed programs are protecting underground sources of drinking water, particularly inspection and enforcement information, we first reviewed EPA regulations and guidance on UIC inspections and enforcement to determine what information EPA needs to assess the programs and their ability to protect underground sources of drinking water. EPA’s 1987 guidance document Underground Injection Control Program Compliance Strategy for Primacy and Direct Implementation Jurisdictions (Strategy) establishes minimum goals for inspections of class II wells.
We obtained and summarized inspection data collected by EPA from each program we reviewed for fiscal year 2013, the most current year of data available at the beginning of this review. The state and EPA-managed programs are directed to report these data to EPA quarterly on the 7520-3 form. To assess the reliability of these data, we interviewed EPA and state officials about their processes for managing the data collected on the 7520-3 forms and tested the data for completeness. We found that the data were not comparable across states but were sufficiently reliable for reporting on a state-by-state basis.

To understand EPA’s use of the data to assess state and EPA-managed programs, we interviewed officials from EPA headquarters about their use of the information to oversee EPA-managed programs and officials from EPA regions about their oversight of inspections conducted by state programs. We also interviewed selected state program officials about how they manage class II inspections, and we requested information on annual inspection goals and inspection strategies. Similarly, we interviewed regional office staff responsible for managing the programs in Kentucky and Pennsylvania about their management of the class II programs in these states, including any inspection goals and strategies they have.

To analyze whether EPA has the enforcement information to assess whether state and EPA-managed programs are protecting underground sources of drinking water, we reviewed EPA’s Strategy, which also establishes enforcement expectations for both state and EPA-managed programs. In particular, the Strategy identifies the need for state and EPA-managed programs to conduct timely and appropriate enforcement actions. Specifically, state and EPA-managed programs are expected to resolve significant violations within 90 days of discovery or take a formal enforcement action against the well operator.
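To make the quarterly completeness test concrete, a check of 7520-3 submissions like the one described above can be sketched roughly as follows. The submission records, state abbreviations, inspection types, and counts are hypothetical illustrations, not actual 7520-3 data or EPA's reporting schema.

```python
from collections import defaultdict

# Hypothetical quarterly 7520-3 submissions: (state, fiscal quarter, inspection type, count).
# Field layout and values are illustrative only.
submissions = [
    ("OH", "FY2013-Q1", "mechanical integrity", 120),
    ("OH", "FY2013-Q2", "mechanical integrity", 95),
    ("OH", "FY2013-Q3", "mechanical integrity", 110),
    ("OH", "FY2013-Q4", "mechanical integrity", 102),
    ("ND", "FY2013-Q1", "routine", 300),
    ("ND", "FY2013-Q2", "routine", 280),
    # In this toy data, ND is missing its Q3 and Q4 reports.
]

EXPECTED_QUARTERS = {"FY2013-Q1", "FY2013-Q2", "FY2013-Q3", "FY2013-Q4"}

def completeness_gaps(rows):
    """Return the quarters missing from each state's reporting."""
    seen = defaultdict(set)
    for state, quarter, _itype, _count in rows:
        seen[state].add(quarter)
    return {state: sorted(EXPECTED_QUARTERS - quarters)
            for state, quarters in seen.items() if EXPECTED_QUARTERS - quarters}

def annual_totals(rows):
    """Sum reported inspections by state and inspection type."""
    totals = defaultdict(int)
    for state, _quarter, itype, count in rows:
        totals[(state, itype)] += count
    return dict(totals)

gaps = completeness_gaps(submissions)
totals = annual_totals(submissions)
```

A gap report of this kind flags missing quarters before annual totals are compared across programs.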
According to the Strategy, a formal enforcement action, among other things, is legally enforceable, explicitly requires the well owner to take corrective action, and specifies a timetable for completion. Under the act, EPA is to intervene and take enforcement action once it is notified that a violation has occurred and the state has not taken appropriate action within 30 days. Similarly, EPA regions should take timely and appropriate enforcement actions in states with EPA-managed programs. According to EPA’s 1987 Reporting Requirements—Underground Injection Control Program Guidance (Program Reporting), EPA uses the 7520-4 forms to evaluate the timeliness and appropriateness of a state or EPA-managed program’s enforcement response; EPA regions receive 7520-4 forms from their state programs, and EPA headquarters collects information on the 7520-4 forms from programs managed by EPA regions.

We then assessed a sample of violations, using EPA’s definition of timely and appropriate resolution from its Strategy and Program Reporting guidance, to determine whether EPA receives information on individual significant violations that may have the potential to threaten underground sources of drinking water. We selected a nongeneralizable sample of 134 notices of violation, issued from 2008 through 2013 (the most recent years of data available when we began our audit work), from the seven state and EPA-managed programs we reviewed and compared the data to enforcement data provided to EPA on the 7520-4 forms. We selected at least six notices of violation in each of the seven states in our sample based on the significance of the violation; the type of enforcement action taken; and the number of days between when the operator was notified and when the violation was resolved—that is, returned to compliance with applicable requirements.
We also obtained the 7520-4 forms for fiscal years 2008 through 2013 to identify which violations had been reported on these forms. We analyzed the number of days that each significant violation in our sample had been open and compared this to the number of days (90) established by EPA as timely. We then analyzed each violation to determine whether a formal action had been taken. We identified 93 violations that were open for more than 90 days and compared these to the information reported on the 7520-4 forms by the appropriate state. We then interviewed EPA, regional, and state officials to determine how they reported the information on the 7520-4 form. Because our sample of violations is nongeneralizable, our results cannot be generalized to other states and violations; however, they do provide detailed information on the violations that should have been reported by state and EPA-managed programs.

To assess the reliability of the violation and enforcement information we obtained, we interviewed EPA headquarters officials about their processes for collecting and managing the information and tested the information for completeness by looking for missing information. We determined that the information from EPA’s reporting forms was reliable for purposes of reporting individual state results.

To analyze the activities EPA conducts to assess whether state and EPA-managed programs protect underground sources of drinking water, we reviewed several EPA guidance documents that describe activities EPA is to take to oversee state and EPA-managed programs. EPA’s 1983 guidance document, Interim Guidance for Overview of the Underground Injection Control Program, states that EPA is supposed to conduct annual on-site evaluations of state and EPA-managed programs. EPA’s UIC regulations describe activities that EPA is supposed to conduct to ensure that it can enforce state program requirements, if necessary.
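A rough sketch of the timeliness calculation (days between operator notification and formal enforcement action, measured against the 90-day standard) appears below. The violation IDs and dates are hypothetical; they are not drawn from the programs we reviewed.

```python
from datetime import date

# Hypothetical violation records; IDs and dates are illustrative only.
# Under EPA's 1987 Strategy, a significant violation should be resolved,
# or a formal enforcement action taken, within 90 days of discovery.
violations = [
    {"id": "V-001", "notified": date(2013, 1, 10), "formal_action": date(2013, 2, 15)},
    {"id": "V-002", "notified": date(2013, 3, 1), "formal_action": date(2013, 9, 20)},
    {"id": "V-003", "notified": date(2013, 5, 5), "formal_action": None},  # no action yet
]

TIMELY_DAYS = 90
AS_OF = date(2013, 12, 31)  # review cutoff used for still-open violations

def days_open(violation, as_of=AS_OF):
    """Days from operator notification to formal action (or to the review cutoff)."""
    end = violation["formal_action"] or as_of
    return (end - violation["notified"]).days

# Violations that went longer than 90 days without a formal enforcement
# action are the ones that should appear on the 7520-4 form.
overdue = [v["id"] for v in violations if days_open(v) > TIMELY_DAYS]
```

The resulting `overdue` list can then be compared against what each program actually reported.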
EPA’s 1984 guidance, Guidance for Review and Approval of State Underground Injection Control Programs and Revisions to Approved State Programs, describes the activities that EPA is to conduct to review changes to state program requirements. We reviewed the extent to which EPA conducted the first two activities in our June 2014 report on the UIC program. We met with EPA headquarters officials to discuss our findings from that report and EPA’s efforts to implement our recommendations. To analyze the extent to which EPA has carried out activities to review and approve aquifer exemptions for state and EPA-managed programs, we reviewed EPA guidance documents on aquifer exemptions. We then interviewed EPA headquarters officials about EPA’s progress in developing and maintaining a database on aquifer exemptions.

To analyze the extent to which EPA applied best practices for workforce planning and strategic human capital management to the management of the UIC program, we reviewed GAO reports specifying best practices for strategic human capital management. We then interviewed EPA headquarters officials about EPA’s efforts to apply those best practices to the UIC program.

We conducted this performance audit from October 2014 to February 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The Safe Drinking Water Act requires states to include inspection requirements in their programs.
In a 1987 document titled Underground Injection Control Program Compliance Strategy for Primacy and Direct Implementation Jurisdictions (Strategy), the Environmental Protection Agency (EPA) provides guidance on the types of inspections and the frequency at which they should be conducted. The types of inspections include routine inspections, well construction inspections, witnessed mechanical integrity tests, emergency and complaint response, enforcement follow-up, and plugging and abandonment verification. According to the Strategy, the goal of an inspection program is to determine that a well is in compliance with applicable requirements and to detect any violations of those requirements. The Strategy directs programs to adopt minimum priority standards for each type of inspection and gives the programs discretion to consider additional priorities, such as environmental risks, population risks, and well construction, when determining which wells to inspect. The Strategy ranks inspection types, including those for class II wells, by priority, as shown in table 4.

The seven state and EPA-managed programs we reviewed establish goals for each of the inspection types identified in the Strategy based on program priorities and available inspection resources. Table 5 shows state and EPA-managed program inspection goals by inspection type. Some states have goals to inspect all of their wells monthly or quarterly. For example, North Dakota program officials told us that their goal is to conduct routine inspections at all class II injection wells monthly, and Ohio has a goal of inspecting 100 percent of its wells quarterly, according to program officials. Other programs we reviewed do not set specific annual goals for individual types of well inspections.
For example, according to EPA Region 4 officials, the region has a goal of conducting routine inspections of all of the class II wells in Kentucky at least once every 5 years and does not set inspection goals for observing well plugging and well construction.

Some state and EPA program officials told us that when states do not have the resources to inspect all wells annually, the type and frequency of inspections are prioritized based on risk factors such as the operator’s history of compliance with state or EPA requirements or danger to the general public. For example, Oklahoma officials told us that inspectors will inspect an operator more frequently if the inspector determines the well operator is violating state requirements and will also prioritize inspections in areas of the state with a history of illegal disposal activity. Similarly, EPA Region 3 officials told us that they do not set annual inspection goals by inspection type but prioritize inspections based on factors such as danger to the general public, emergency response, and the availability of inspection staff.

Generally, according to state officials, state inspection staff in the five state programs we reviewed are responsible for inspecting both production and class II injection wells in the state, and individual staff may inspect only class II wells, only production wells, or both. For example, of the 47 staff members conducting inspections in Ohio, 4 conduct inspections on class II wells full-time and 8 to 10 split responsibilities between production and class II wells. Similarly, inspection staff in the EPA regions we reviewed are responsible for inspecting all classes of injection wells managed by the region. For example, according to EPA Region 4 officials, the region has approximately 3 program staff members and 2 contractors to conduct inspections of all classes of injection wells in the region.
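The risk-based prioritization that Oklahoma and EPA Region 3 officials described could be sketched, in simplified form, as scoring wells and sorting by score. The wells, risk factors, and weights below are hypothetical illustrations, not any program's actual method.

```python
# Hypothetical well records and scoring weights; illustrative only.
wells = [
    {"id": "W-1", "past_violations": 3, "near_populated_area": True},
    {"id": "W-2", "past_violations": 0, "near_populated_area": True},
    {"id": "W-3", "past_violations": 1, "near_populated_area": False},
]

def risk_score(well):
    # Illustrative weights: each prior violation adds 2 points;
    # proximity to a populated area adds 5.
    return 2 * well["past_violations"] + (5 if well["near_populated_area"] else 0)

# Inspect the highest-risk wells first when resources are limited.
inspection_order = sorted(wells, key=risk_score, reverse=True)
priority_ids = [w["id"] for w in inspection_order]
```

A program with real data would substitute its own factors and weights; the point is only that limited inspection capacity is allocated by rank rather than by a fixed rotation.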
State agencies and EPA regional offices responsible for managing or overseeing programs in the seven states we selected vary in the inventory of wells they manage and the staffing resources dedicated to inspecting those wells. For example, according to North Dakota officials, North Dakota has 35 staff members to inspect 14,158 production and class II injection wells in the state. According to Oklahoma officials, Oklahoma has 62 staff members to inspect the state’s almost 190,000 production and class II wells. EPA regions managing programs in the states we selected have comparatively fewer inspection staff for the injection wells they manage. For example, according to EPA officials, Region 3 has 1 full-time inspector and 3 part-time inspectors to inspect the almost 29,000 injection wells it manages region-wide, including over 1,800 class II wells in Pennsylvania.

This appendix contains information on the enforcement process used by selected state programs and programs managed by the Environmental Protection Agency (EPA), the enforcement tools available to programs, and our analysis of a sample of enforcement cases that we reviewed. We selected a nongeneralizable sample of at least six enforcement cases in each of the seven states in our sample, based on the significance of the violation, the type of enforcement action taken, and the number of days between when the operator was notified and when the violation was resolved—defined as returning to compliance with applicable requirements. According to EPA’s Underground Injection Control Program Compliance Strategy for Primacy and Direct Implementation Jurisdictions (Strategy), state and EPA-managed programs should escalate their enforcement response if compliance is not achieved in a timely manner. The enforcement action taken can depend on a number of factors, including the severity of the violation and its potential to contaminate underground sources of drinking water.
Our analysis of enforcement actions taken by selected state and EPA-managed programs found that the programs have generally similar enforcement processes for responding to class II Underground Injection Control (UIC) violations. A violation can be discovered in a number of ways, including through an inspection, an administrative review of a well file, or reports by citizens or others. According to EPA and state officials, the enforcement process generally begins when program officials notify a well operator that the well is in violation of applicable requirements. Six of the seven programs we reviewed generally issue a written notice of the violation to the well operator, but North Dakota program officials told us that they instead give a verbal notification and then, depending on the severity of the violation, will allow a 30-day grace period before initiating a formal enforcement action. For the state and EPA-managed programs we reviewed, notices of violation can include one or more violations in a single notice.

If the operator does not take action to resolve a significant violation—that is, return the well to compliance with all state and federal regulations—in a timely manner, EPA’s 1987 Strategy directs state and EPA-managed programs to take formal enforcement actions to ensure that compliance is achieved. Formal enforcement actions can include the following:

Administrative orders. Administrative orders are legally enforceable orders, the terms of which can either be dictated by the program or negotiated with the well operator in violation (which may be referred to as a consent order or consent agreement). Administrative orders may enjoin the well operator from taking certain actions, may require the well operator to take corrective action, and may impose monetary penalties.

Civil judicial action. Civil judicial actions are lawsuits filed against an operator that has failed to comply with, for example, statutory or regulatory requirements or an administrative order.
Civil actions are generally taken when administrative enforcement actions have been unsuccessful in achieving compliance and resolving the violation, according to EPA officials.

Criminal judicial action. A program may also refer a case to the criminal justice system if an action is willfully committed. A criminal court conviction can result in fines or imprisonment.

State and EPA-managed programs have various tools available to facilitate a return to compliance with applicable requirements and deter future violations. According to EPA officials, state and EPA-managed programs can vary in their approaches to enforcing UIC program requirements as long as the programs are effective at protecting underground sources of drinking water. Six of the seven state and EPA-managed programs in our review have authority to assess monetary penalties. Table 6 details the types of administrative, civil, and criminal monetary penalty authority available at the state and federal levels for the selected states in our review.

While six of the seven programs we reviewed have the legal authority to assess monetary penalties, some do not regularly use these authorities, for various reasons. For example, North Dakota’s program has administrative authority to assess a monetary penalty, but the state prefers to employ a more cooperative approach to get operators to bring wells back into compliance, according to North Dakota officials. According to Colorado officials, Colorado’s program has also historically employed a cooperative approach, but the state recently revised its regulations to require a more prescriptive approach to enforcement. Ohio program officials told us that they do not have an administrative process for assessing a monetary penalty, and penalties must instead be pursued through the civil judicial process.
In turn, Ohio officials told us that they consider the advantages and disadvantages of resolving a violation through a negotiated consent agreement before referring the case to the state’s attorney general to pursue civil penalties.

Other tools available to selected state and EPA-managed programs to enforce program requirements may include the following:

Well shut-in. Some programs we reviewed may temporarily close down a well until a violation is resolved. For example, Oklahoma officials can shut in a well if an operator is out of compliance with its financial assurance requirements.

Pipeline severance. A program may also have the authority to sever an operator’s access to oil and gas pipelines. For example, if an operator uses a well that has been shut in for violations, Texas may take the further step of refusing to renew certain documents the operator needs to do business in the state. If the disposal well operator also has production wells in the state, this would prevent the operator from producing oil and gas. According to EPA officials, this can be an effective enforcement tool given that a company’s income is generated on the production side.

Permit revocation or temporary suspension. A program may have the authority to revoke or temporarily suspend existing UIC permits, thereby making it illegal for an operator to continue injecting into a well or group of wells covered under the permit. For example, North Dakota may revoke permits after notice and hearing if the well operator fails to comply with the terms and conditions of its permit or any applicable rule or law, and the state may suspend permits for good cause.

Moratorium on new or renewed permits. A program may be able to refuse to issue new permits to an operator with a history of noncompliance. For example, Oklahoma program officials can seek an order denying a permit to an operator with an unsatisfactory compliance history.

Bond forfeiture.
A program may require well operators to post a bond to ensure compliance with requirements applicable to the well. If an operator fails to comply with these terms, a state may be able to seize the bond to cover the costs of returning the well to compliance. For example, if Ohio officials find that an operator has failed to comply with, among other things, certain orders, regulations, or its permit, they may declare the operator’s bond to be forfeit.

Under the Safe Drinking Water Act, EPA must enforce state requirements if violations have not been enforced by states in a timely and appropriate manner. EPA’s Strategy sets forth standards for timely and appropriate enforcement action in response to significant violations. Specifically, state and EPA-managed programs are expected to resolve significant violations within 90 days of discovering the violation or take a formal enforcement action against the well operator. According to the Strategy, a formal enforcement action, among other things, is legally enforceable, explicitly requires the well owner to take corrective action, and specifies a timetable for completion. When EPA becomes aware that an operator is violating a state program requirement, a provision in the act requires EPA to notify the state and, if the state does not take appropriate action within 30 days, to intervene by issuing an administrative order or commencing a civil action. Similarly, EPA regions should take timely and appropriate enforcement actions in states with EPA-managed programs.

State and EPA-managed programs are required to submit periodic reports to EPA headquarters with information on enforcement actions taken against well operators. One of the required reports is to provide quarterly information on individual significant violations by well operators that have not been resolved and that may have the potential to threaten underground sources of drinking water. EPA uses the 7520-4 form to collect this information.
According to EPA’s 1987 Reporting Requirements—Underground Injection Control Program Guidance (Program Reporting), EPA uses the 7520-4 forms to evaluate the timeliness and appropriateness of a state or EPA-managed program’s enforcement response.

Our analysis of a sample of significant violations from selected state and EPA-managed programs found that a subset of significant violations that should have been reported on the 7520-4 forms were not reported and that the forms contained incomplete and inconsistent information. Specifically, we sampled 134 notices of violation from selected state and EPA-managed programs, of which 93 included significant violations (see app. VI for a list of the enforcement cases we reviewed in the seven state and EPA-managed programs). Table 7 shows the number and types of violation notices we assessed from our sample of 134 notices of violation, for fiscal years 2008 through 2013, for each state and EPA-managed program in our review.

To establish which of those 93 violations should have been reported on the 7520-4 form, we used EPA’s Strategy and Program Reporting guidance, which call for state and EPA-managed programs to report information on significant violations that were not resolved within 90 days of discovery and did not have a formal enforcement action taken against the well operator. To determine whether the 90-day allowable time frame was met, we calculated the number of days between the date the operator was notified of the violation and the date a formal enforcement action was taken, and we found that 29 significant violations had gone longer than 90 days without formal enforcement action and should have been reported on the 7520-4 form (see table 8). We then compared the results of our calculation to the 7520-4 forms we obtained from EPA for fiscal years 2008 through 2013 and found that only 7 of the 29 had been reported by the respective program.
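The comparison between violations that should have been reported and those actually found on the 7520-4 forms is, in essence, a set difference. The violation IDs below are hypothetical stand-ins; they do not reproduce the 29 violations or the 7 reported ones from our analysis.

```python
# Hypothetical violation IDs illustrating the cross-check: violations the
# timeliness calculation says should have appeared on the 7520-4 form,
# versus violations actually found on the forms for the same period.
should_be_reported = {"V-002", "V-003", "V-007", "V-011"}
reported_on_7520_4 = {"V-003", "V-099"}  # V-099: reported but not in our sample

# Violations that went unreported, and how many were reported as required.
unreported = sorted(should_be_reported - reported_on_7520_4)
reported_count = len(should_be_reported & reported_on_7520_4)
```

Applied to real records, the `unreported` list identifies the gap between what guidance calls for and what programs filed.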
Legend: EPA = Environmental Protection Agency; CO = Colorado; KY = Kentucky; ND = North Dakota; OH = Ohio; OK = Oklahoma; PA = Pennsylvania; TX = Texas; N/A = not applicable; Admin = administrative order; Consent = consent agreement; Civil = civil judicial action; Criminal = criminal judicial action.

In addition to the contact named above, Susan Iott (Assistant Director), Mark Braza, Antoinette Capaccio, John Delicath, John Hocker, Rich Johnson, Micah McMillan, Maria Stattel, Kiki Theodoropoulos, and Breanna Trexler made key contributions to this report.

Since the early 2000s, increased oil and gas production has resulted in an increase in wastewater that must be managed properly. The majority of wastewater from oil and gas production is injected into underground wells known as class II wells. These wells are regulated to protect drinking water sources under EPA's UIC class II well program and approved state class II programs. EPA oversees state programs, and EPA regions manage programs in states without approved programs. GAO was asked to review EPA's oversight of programs' inspection and enforcement information and activities. This report examines the extent to which EPA has collected inspection and enforcement information and conducted oversight activities needed to assess that class II programs protect underground sources of drinking water. GAO reviewed federal and state laws and regulations and EPA guidance and analyzed a nongeneralizable sample of significant violations. GAO interviewed EPA and state officials from programs in a nongeneralizable sample of eight states selected based on shale oil and gas regions, among other factors.

The Environmental Protection Agency (EPA) has not collected specific inspection information or complete and consistent enforcement information, or consistently conducted oversight activities, to assess whether state and EPA-managed Underground Injection Control (UIC) class II programs are protecting underground sources of drinking water.
EPA guidance calls for states and EPA regions to report certain information and for EPA to assess whether programs are effectively protecting underground sources of drinking water, but the agency does not do so. Specifically:

EPA annually collects summary data from state and EPA-managed programs on the types of inspections they conduct. However, these data are not specific enough to determine the number of different types of inspections that states and EPA regions are to conduct to meet their annual goals. Such goals are specified at the well level (e.g., to inspect 100 percent of wells associated with emergency responses). Under federal internal control standards, managers are to compare actual performance to planned or expected results and analyze significant differences. Without well-specific data on inspections, EPA cannot assess whether state and EPA-managed programs are meeting annual inspection goals.

EPA collects information on unresolved significant violations of state and EPA-managed programs to determine whether the agency needs to take action to enforce applicable program requirements. However, GAO's analysis of a nongeneralizable sample of 93 significant violations for fiscal years 2008 through 2013 found that state and EPA-managed programs did not report data on such violations completely or consistently. For example, of 29 such violations that had not been enforced after 90 days as required, programs reported only 7 to EPA. According to EPA and state officials, the cause was inconsistent interpretations of EPA's reporting guidance. EPA officials said they are aware that the data reported on such violations are not complete or consistent, but the agency has not clarified in guidance what data programs should report. Until it does so, EPA does not have reasonable assurance that it has the data needed to assess whether it must take enforcement action.
EPA has not consistently conducted oversight activities necessary to assess whether state and EPA-managed programs are protecting underground sources of drinking water. For example, GAO found in June 2014 that EPA does not consistently conduct oversight activities, such as annual on-site program evaluations. According to EPA guidance, such evaluations should include a review of permitting and inspection files or activities to assess whether the state is protecting underground water. In California, for example, EPA did not regularly review permitting, and in July 2014, after a state review of permitting, EPA determined that the program was out of compliance with state and EPA requirements. EPA officials said that they have few resources to oversee UIC class II programs, but EPA has not conducted a workforce analysis consistent with GAO's work on strategic human capital management to identify the resources needed for such oversight. Without conducting such an analysis, EPA will not be able identify the human capital or other resources needed to carry out oversight of the UIC class II programs to help ensure that they protect underground sources of drinking water. GAO recommends that, among other things, EPA require programs to report well-specific inspections data, clarify guidance on enforcement data reporting, and analyze the resources needed to oversee programs. EPA generally agreed with GAO's findings, but does not plan to require well-specific data and analyze needed resources. GAO continues to believe that EPA should take both actions to better assess if programs protect underground sources of drinking water. |
Congress has delegated to Treasury the power to borrow the money needed to operate the federal government and to manage the government’s outstanding debt, subject to a statutory limit. Treasury’s primary debt management goal is to finance the government’s borrowing needs at the lowest cost over time. To meet this objective, Treasury issues debt through auctions in a “regular and predictable” pattern across a wide range of securities. Treasury does not “time the market”—or take advantage of lower interest rates—when it issues securities. According to Treasury, because investors and dealers rely upon the routine availability of Treasury securities, they tend to pay a slight premium, which lowers Treasury’s borrowing costs. Treasury also states that to support liquidity, it must issue “enough but not too much” at each auction. If Treasury issued too little, it could not sustain a deep and liquid secondary market for its securities. If it issued too much, it would create concern among primary market participants that they may find it difficult to distribute their holdings in the secondary market. Treasury publishes a schedule with tentative announcement, auction, and settlement (issue) dates up to 6 months in advance of regular security auctions. Depending on the type of security, Treasury typically auctions and then issues a security within a week or less. Treasury generally issues short-term regular bills with 4-, 13-, and 26-week maturities every Thursday and issues 2- and 5-year notes at the end of each month. Three- and 10-year notes are issued in the middle of each quarter, and Treasury reopens 10-year notes 1 month after their initial issuance. In addition, Treasury issues TIPS in 5-, 10-, and 20-year maturities in certain months according to the TIPS’ maturity. Finally, Treasury issues 30-year bonds in the middle of February and reopens the bonds in the middle of August. Figure 1 depicts Treasury’s April 2005 borrowing schedule.
Treasury supplements its regular and predictable schedule with flexible securities called CM bills. Unlike other securities, CM bills do not appear on Treasury’s published auction schedule. Instead, Treasury generally announces CM bill auctions anywhere from 1 to 4 days ahead of the auction. The term to maturity—or length of time the bill is outstanding—varies according to Treasury’s cash needs. CM bills allow Treasury to finance very short-term cash needs—for as little as 1 day—while providing short notice to market participants. The United States Treasury is not alone in using CM bill-type instruments to finance short-term needs and smooth cash flows. For example, in the United Kingdom, the Debt Management Office (DMO) issues CM bills to meet temporary cash flow needs that the DMO cannot conveniently meet through its structured bill auctions and to help smooth cash flows. The Bank of Canada, which auctions securities for Canada’s debt management, issues CM bills to help it minimize the level and cost of carrying cash balances. Australia uses short-term securities with 5-, 13-, and 26-week maturities to bridge within-year cash flow mismatches, but may vary the maturities and issue shorter-term securities of 3 to 6 weeks. To understand when Treasury uses CM bills, we analyzed CM bills issued over the last 10 fiscal years—fiscal years 1996–2005. This time period provided a sufficiently large sample of CM bills—121—and allowed us to analyze and show trends in CM bill use before, during, and after the following events: 4 years of surpluses, five debt issuance suspension periods (DISP) declared by the Secretary of the Treasury, and the introduction of two new debt instruments—TIPS in 1997 and the 4-week bill in 2001. CM bill data were obtained from the Bureau of the Public Debt (BPD). BPD’s online database has 22 different features for each Treasury security, including the amount; the announcement, auction, and issue dates; the maturity date; and the yield.
We examined these features, as well as others examined in earlier studies of CM bills, to identify patterns of CM bill auctions, issuance, and maturity. To understand why CM bills are issued or mature at these times, we examined Treasury’s cash flows from fiscal years 1996–2005 using publicly available data from the Financial Management Service’s (FMS) Daily Treasury Statements. We also met with Treasury officials from the Office of Debt Management and the Office of Fiscal Projections. We reviewed Treasury documents, including quarterly refunding statements and policy statements, and Treasury Borrowing Advisory Committee reports and minutes from quarterly refunding meetings. We obtained views on the perceived advantages and disadvantages of CM bills for Treasury, Federal Reserve operations, and investors in meetings with Treasury officials, Federal Reserve officials, market participants, including primary dealers and money market fund managers, and market analysts. We also reviewed financial and economic literature. We did not identify any key advantages or disadvantages of CM bills for Federal Reserve operations, and as a result these operations are not a focus of this report. To describe the key disadvantage of CM bills for Treasury—higher yields—we estimated the differential between CM bill yields and the yields on outstanding Treasury bills of similar maturity at the time of auction using data from BPD and the Wall Street Journal (WSJ). To determine whether certain features might reduce the yield paid on CM bills, we regressed the yield differential on key features. We determined which features to examine on the basis of previous studies of CM bills and Treasury auctions, our interviews with Treasury and market participants, and our own analysis.
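The yield-differential estimate described above can be illustrated with a small sketch. The figures below are hypothetical, and pairing each CM bill with the closest-maturity outstanding bill is a deliberately simplified stand-in for the matching actually used in this analysis.

```python
# Illustrative sketch of the yield-differential calculation (hypothetical data).
# Each CM bill is paired with the outstanding regular bill whose remaining
# maturity at auction is closest, and the differential is expressed in basis points.

# Hypothetical CM bill auction results: (days to maturity, auction yield in percent).
cm_bills = [(14, 2.55), (12, 2.48), (10, 2.40)]

# Hypothetical secondary-market quotes on outstanding bills:
# (days to maturity, quoted yield in percent).
outstanding = [(7, 2.30), (14, 2.41), (28, 2.52)]

def yield_differential_bp(cm, market):
    """Differential (basis points) vs. the closest-maturity outstanding bill."""
    days, cm_yield = cm
    # Pick the outstanding bill closest in remaining maturity.
    _, mkt_yield = min(market, key=lambda b: abs(b[0] - days))
    return (cm_yield - mkt_yield) * 100  # 1 percentage point = 100 basis points

diffs = [yield_differential_bp(cm, outstanding) for cm in cm_bills]
avg = sum(diffs) / len(diffs)
print([round(d, 1) for d in diffs])  # per-bill differentials in basis points
print(round(avg, 1))                 # average differential
```

With these hypothetical quotes, the three CM bills carry differentials of 14, 7, and 10 basis points, averaging about 10 basis points.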
While we analyzed the yield differential for all CM bills issued during fiscal years 1996–2005, we focused on the 55 CM bills issued during fiscal years 2002–2005 because the introduction of the 4-week bill in 2001 substantially reduced CM bill maturities and caused a structural change in the CM bill market. For more information on our statistical analysis, see appendix I. Because CM bill rates are determined in auctions, we examined auction performance to determine whether there is any relationship between selected performance measures and the yield differential. We also examined whether CM bill auctions perform as well as regular 4-week bill auctions because the 4-week bill is more similar to a CM bill—in terms of issuance amount variance and term to maturity—than other Treasury securities. We evaluated each of the 284 CM and 4-week bill auctions in fiscal years 2001–2005 (the years since the introduction of the 4-week bill) using proprietary data from the Treasury Auction Database (TAD) and GovPX—an interdealer broker database with information on primary dealer transactions for all U.S. Treasury securities. We analyzed auction performance measures mentioned in Treasury interviews, Treasury studies of auctions, Borrowing Advisory Committee minutes, and economic and financial literature and textbooks describing or evaluating Treasury auction performance. See appendix II for more information on our analysis of CM bill auctions. We identified possible options to reduce the use and cost of CM bills on the basis of our analysis of CM bill use, the yield differential, and CM bill auction performance. We discussed these options with Treasury, market participants, and others, and we include their comments as appropriate.
To assess the reliability of the data used in this study, including proprietary data from TAD and GovPX and the publicly available data from FMS, BPD, and WSJ, we examined the data for outliers and anomalies and addressed such issues as appropriate. In general, we chose databases that were used by Treasury and researchers to examine Treasury markets and auction performance. Where possible and appropriate, we corroborated the results of our data analysis with other sources. On the basis of our assessment, we believe the data are reliable for the purposes of this review. We conducted our review in Washington, D.C., from February 2005 through March 2006 in accordance with generally accepted government auditing standards. Treasury frequently faces a cash financing gap of about 2 weeks because of timing differences between large cash inflows and outflows. Treasury makes large regular payments (e.g., Social Security and federal retirement) in the beginning of the month, and it often receives large cash inflows in the middle of the month from income tax payments and note issuances. Because regular bills alone are not sufficient to fill this cash financing gap, Treasury has increasingly used CM bills since 2002. Treasury has also relied on CM bills when nearing the debt ceiling to help pay its bills while keeping debt under the statutory limit. Treasury’s largest cash outflows generally occur in the beginning of the month. For example, in fiscal year 2005, almost one-quarter of the government’s annual fiscal cash outlays (withdrawals excluding debt redemption) were paid in the first 3 days of each month. On or around the 1st of every month Treasury paid about $14 billion to active duty military personnel, military and civilian retirees, and others. In the beginning of some months, Treasury also paid up to $6 billion for Medicare. On or around the 3rd of every month it paid about $21 billion in Social Security benefits.
In total, Treasury made $718 billion in cash payments in the first 3 days of months in fiscal year 2005. These large payments in the beginning of the month are not anticipated to decline soon. A Treasury official explained that Social Security benefits paid on the 3rd of the month are anticipated to remain relatively steady for a number of years and then decline because of steps taken by the Social Security Administration (SSA) in 1997 that have helped smooth cash payments out of Treasury. (Fig. 2 describes these steps in more detail.) However, because beneficiaries receiving benefits before 1997 continue to receive their benefits at the beginning of the month, we estimate the large payments could last another 10 years. Other federal benefit programs continue to pay all or most of their benefits at the beginning of the month. If these payments were, like Social Security, spread throughout the month, cash flows would be smoother and Treasury’s need to use large unscheduled CM bills at the beginning of the month might decline. In contrast to outlays, Treasury’s largest cash inflows generally occur in the middle of the month. Although the majority of federal tax receipts are already fairly smoothed throughout the year, most corporate and nonwithheld individual tax payments are made in the middle of certain months. Treasury receives large corporate tax payments on (or around) the 15th of March, April, June, September, and December. During fiscal year 2005, Treasury received just over $213 billion of cash on these days. Treasury also had large receipts in the middle of January, April, June, and September from nonwithheld individual estimated tax payments, and after April 15 from the settlement of prior year individual income tax liability.
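The timing mismatch described above can be made concrete with a stylized simulation of one month's cash balance. The outflow amounts below reuse the rounded figures cited in this section (about $14 billion in military and retiree payments, up to $6 billion for Medicare, and about $21 billion in Social Security benefits); the starting balance and the single $45 billion midmonth inflow are hypothetical round numbers, not actual Treasury figures.

```python
# Stylized simulation of the within-month financing gap described in the text.
# Flows are in $ billions; a negative balance marks the gap that CM bills fill,
# since in practice Treasury cannot overdraw and must borrow instead.

start_balance = 25                 # hypothetical, near the FY2005 average cash balance
outflows = {1: 14, 2: 6, 3: 21}    # benefits, Medicare, Social Security ($B)
inflows = {15: 45}                 # hypothetical midmonth taxes and note issuance ($B)

balance, path = start_balance, {}
for day in range(1, 31):
    balance += inflows.get(day, 0) - outflows.get(day, 0)
    path[day] = balance

trough_day = min(path, key=path.get)
print(trough_day, path[trough_day])  # the lowest balance sits early in the month
print(path[15])                      # the midmonth inflow restores the balance
```

In this stylized month the balance bottoms out at -$16 billion on day 3 and stays there until the day-15 inflow, which is exactly the roughly 2-week gap that a CM bill issued on days 1-3 and maturing on the 15th would bridge.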
In addition to tax receipts, net cash raised from note issuances (i.e., issuance less redemption) on or around the 15th of the month more than tripled, from $78 billion in fiscal year 2002 to $297 billion in fiscal year 2005. However, some large midmonth cash inflows from note issuance will not endure because Treasury shifted 5-year note auctions from the middle to the end of each month beginning in February 2006. As a result, cash inflows from new 5-year debt issuance shifted from the middle to the end of the month. To achieve the lowest borrowing costs over time, Treasury seeks to provide the market with a high degree of stability in the issuance amount of each security, with the requisite stability increasing with issuance maturity. Treasury officials said they try to limit swings in regular bill offerings. This means regular bill issuances cannot suddenly increase by the amount needed to make large payments in the beginning of the month nor suddenly decrease in the middle of the month to absorb large inflows. In practice, Treasury varied the amount of 13- and 26-week bills by only $2 billion (less than 10 percent of the average amount issued) from week to week in fiscal year 2005. Since Treasury introduced the 4-week bill in 2001 to help reduce cash balance swings, the 4-week bill’s issuance size has varied more than that of other regular Treasury bills. In fiscal year 2005, the size of the 4-week bill varied by as much as $13 billion (more than three-quarters of the average amount issued). Even so, to meet cash needs—which in fiscal year 2005 averaged almost $60 billion at the beginning of the month—Treasury has come to rely on CM bills. The combination of low cash balances in the beginning of the month with large cash inflows in the middle of the month has led to a general pattern of CM bill issuance. Our analysis shows that from 2002 to 2005, on average, about half of CM bills were issued in the first 3 days of the month.
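The parenthetical percentages above imply rough bounds on the average issue sizes, which a back-of-the-envelope check makes explicit. The derived averages are inferred bounds, not figures reported in this analysis.

```python
# Bounds implied by the swing-to-average percentages quoted in the text.

# 13- and 26-week bills: a $2 billion swing was "less than 10 percent" of the
# average weekly issue, so the average issue must have exceeded $2B / 0.10.
min_regular_avg = 2 / 0.10     # $ billions, lower bound on the average issue

# 4-week bills: a $13 billion swing was "more than three-quarters" of the
# average issue, so the average issue must have been below $13B / 0.75.
max_four_week_avg = 13 / 0.75  # $ billions, upper bound on the average issue

print(round(min_regular_avg, 1), round(max_four_week_avg, 1))  # 20.0 17.3
```

That is, the longer regular bills averaged more than $20 billion per issue while varying little, whereas the 4-week bill averaged less than about $17 billion per issue while absorbing far larger swings.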
According to Treasury officials, Treasury decides to issue CM bills when it has low cash balances, which often track the timing of large payments. In fiscal year 2005, for example, Treasury issued 11 of its 21 CM bills in the first 3 days of the month, when it had below-average cash balances (the average cash balance for fiscal year 2005 was $25.5 billion) (see fig. 3). The amounts issued varied from $4 billion to $42 billion. The maturity dates of CM bills have varied over the 10-year period we examined, but since 2002 CM bills have increasingly matured on the 15th of the month, when Treasury receives large cash inflows. In fiscal year 2005, Treasury set 68 percent of its $268 billion in CM bill borrowings to mature in the middle of March, April, June, September, and December, when Treasury receives large corporate and individual income taxes. In these 5 months Treasury received 72 percent ($213.3 billion) of fiscal year 2005 corporate income tax deposits. With only one exception, all CM bills issued in fiscal year 2005, regardless of when Treasury issued them, matured on or around the 15th of the month (see fig. 4). Since 2002, Treasury has increasingly filled the approximately 2-week financing gap by issuing CM bills at the beginning of the month and setting the maturity date of these CM bills for the middle of the month. Table 1 shows that since 2002, 23 CM bills—or more than 40 percent of the number of CM bills issued—have been issued on the 1st–3rd days of the month and matured on the 15th day of the month. These CM bills accounted for more than half of the total dollar amount of CM bills issued in the last 4 fiscal years. Over the last 10 years Treasury has relied on multiple CM bills to help manage cash flows in April. According to Treasury officials and our analysis, the large cash balance swings in April could not be accommodated by changes in the regular bill issuance schedule.
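The issue-day and maturity-day pattern described above amounts to a simple classification over the bill records. The sketch below applies that classification to a handful of hypothetical records; the actual analysis used BPD data for all CM bills issued in fiscal years 1996–2005.

```python
from datetime import date

# Hypothetical CM bill records: (issue date, maturity date, amount in $ billions).
# These rows only illustrate the classification, not actual BPD data.
cm_bills = [
    (date(2005, 3, 1),  date(2005, 3, 15), 20),
    (date(2005, 4, 1),  date(2005, 4, 15), 17),
    (date(2005, 4, 4),  date(2005, 4, 15), 10),
    (date(2005, 6, 2),  date(2005, 6, 15), 24),
    (date(2005, 9, 15), date(2005, 9, 30), 8),
]

# Count bills fitting the beginning-of-month / midmonth financing-gap pattern:
# issued on the 1st-3rd of a month and maturing on the 15th.
gap_fills = [b for b in cm_bills if b[0].day <= 3 and b[1].day == 15]

share_count = len(gap_fills) / len(cm_bills)
share_amount = sum(b[2] for b in gap_fills) / sum(b[2] for b in cm_bills)
print(len(gap_fills), round(share_count, 2), round(share_amount, 2))
```

In this toy sample, three of the five bills fit the pattern, accounting for 60 percent of the bills and about 77 percent of the dollars issued, analogous to the shares reported in table 1.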
To help smooth large April cash flows, Treasury consistently issued three or more CM bills every year between fiscal years 1996 and 2005 that matured in the latter half of April (on the 15th or later), when Treasury typically received large tax receipts. For example, in fiscal year 2005 Treasury issued three CM bills—two matured on April 15 and one matured on April 18—totaling $47 billion and received about $41 billion in corporate and nonwithheld income and employment tax payments on those dates. Better aligning or smoothing cash inflows and outflows could reduce or eliminate the frequent financing gap, thereby reducing the need for some CM bills, but Treasury does not have the authority to control the timing of all cash flows. Treasury is a passive agent; it collects and disburses federal funds at agencies’ request. It does not determine when major benefit payments are made. For example, the payment dates of civil service and railroad retirement are set by law. Due dates for tax payments are also set by federal statute. Treasury does have some control over debt-related cash flows and has considered how changes in its schedule affect cash flows. For example, when making changes to its schedule in 2001, Treasury maintained monthly issuance of 2-year notes, which are issued at the end of the month and could help pay regular benefit payments in the beginning of the following month. More recently, Treasury moved 5-year note issuance from the middle to the end of the month. This change more closely aligns 5-year note issuance with the beginning of the next month’s payments and may reduce CM bill issuance in the early part of months. However, regular securities can only help fill cash financing gaps temporarily. No regular security that matures on the same day another is issued can be used to fill cash financing gaps over the long run because, in a steady state, most, if not all, of the cash raised at each issuance would be needed to pay maturing securities.
As a result, Treasury would not raise large amounts of new cash. In addition, Treasury generally sets the issue amount for longer-term securities to cover long-term deficit needs, not short-term cash shortfalls, and thus the new cash raised with the 5-year note may not cover the full amount of cash needs in the beginning of the month. Although Treasury will likely raise new cash at the end of the month with the 5-year note until 2011, when 5-year notes issued in 2006 begin to mature, the change in the auction schedule is not likely to eliminate the need for CM bills. When debt is nearing the statutory limit, Treasury has to take a number of extraordinary steps to meet the government’s obligation to pay its bills while keeping debt under the ceiling. Treasury also issues CM bills, among other actions, to accomplish these goals. CM bills, like other Treasury securities, are subject to the debt limit. However, CM bills allow Treasury to borrow cash for shorter time periods than regular bills. On occasion, Treasury has changed scheduled auctions for its regular securities and instead issued CM bills so that the debt ceiling would not be reached. For example, because of debt limit constraints Treasury delayed the 4-week bill auction scheduled for Tuesday, November 16, 2004. Treasury then auctioned a 5-day CM bill for $7 billion on November 17, 2004. In another example, Treasury said that inaction on the debt ceiling in 2002 led to reduced issuance of 4-week bills and larger, more frequent CM bill issuance than it would have done otherwise. During the five DISPs within the last decade, Treasury issued 19 CM bills totaling $300 billion. As table 2 shows, Treasury issued most of these CM bills during the lengthy DISPs in fiscal years 1996 and 2003. CM bills will continue to be a useful tool for Treasury when it approaches the debt ceiling, but, as discussed on page 20 of this report, Treasury pays a premium for the flexibility CM bills provide.
Although CM bills offer a means to raise cash in as little as a day, CM bills, like other Treasury securities, cannot be used when financial markets are closed or not functioning properly. Previously, Treasury could obtain cash on short notice outside financial markets, but this capability ceased more than 20 years ago. Treasury was also able to draw on compensating balances when financial markets closed after September 11, 2001, but these balances were terminated in 2004. In the past, Treasury had access to a cash draw authority. Intermittently between 1942 and 1981, Treasury was able to directly sell (and purchase) certain short-term obligations from the Federal Reserve in exchange for cash. Treasury used the cash draw authority infrequently and mostly in times of war or armed conflict. The Federal Reserve held special short-term certificates purchased directly from Treasury on 228 days between 1942 and 1981. In the years Treasury used this authority, it borrowed on about 11 days on average per year. The Treasury Draw Policy, as amended in 1979, stated that Treasury could use the cash draw authority only in “unusual and exigent circumstances.” Congress allowed this authority to expire in 1981. Prior to March 2004, Treasury could use compensating balances—noninterest-bearing cash balances that were used to compensate banks for various services—as a source of short-term funding when markets were closed or during DISPs. Treasury officials said that compensating balances were not viewed as a substitute backup facility for Treasury to obtain cash in the short term and were only used in extraordinary circumstances. For example, on September 11, 2001, Treasury had to cancel the auction of 4-week bills, which would have settled on Thursday, September 13, 2001. Because of the auction cancellation, Treasury lacked sufficient cash to pay about $11 billion of maturing 4-week bills on Thursday, September 13.
Treasury obtained sufficient cash by drawing down compensating cash balances. However, in March 2004, compensating balances were replaced with direct payments to the banks. More recently, in the aftermath of Hurricane Katrina, when cash balances fell more than expected, Treasury obtained cash by canceling a planned cash investment. Treasury periodically auctions excess cash to banks through its Term Investment Option (TIO) program. Treasury invests cash through TIOs for a fixed term at a rate determined through a competitive auction process. Treasury intended to award a $5.5 billion investment option on Friday, November 25, 2005. However, additional spending in response to Hurricane Katrina caused Treasury’s cash balance to fall to an unexpectedly low level on November 23, 2005. In response, Treasury did not follow through on the planned investment option. Using these investments as a source of cash in the future requires that Treasury actually have excess cash to invest and that it know, before making the investment, that it will need the cash. Reliance on excess cash is limited since Treasury has placed increased emphasis on minimizing cash balances in order to reduce overall borrowing costs. One possible consequence of this practice is an increased risk that incorrect cash flow predictions or emergencies could lead to Treasury overdrawing its Federal Reserve Bank account. As a result, an important issue for future consideration is how Treasury might obtain funds to finance government operations should normal financial market operations be significantly degraded or closed because of a catastrophic emergency. CM bills provide Treasury with flexibility to obtain cash outside its regular borrowing schedule, but Treasury generally paid a higher yield on CM bills than outstanding bills of similar maturity paid in the secondary market.
The differential between CM bill yields and similarly maturing outstanding bills—hereafter called the “yield differential”—varied greatly from fiscal year 1996 through 2005. We found several factors, both within and outside Treasury’s control, that affected the yield differential. Despite their higher yield, CM bills are generally less costly than maintaining higher cash balances or issuing 4-week bills as a means to obtain the cash needed to make large payments, such as Social Security and federal retirement, at the beginning of the month. CM bill yields closely track the level of short-term interest rates prevailing at the time of auction. This is because investors see other short-term instruments as close substitutes for CM bills. They can either buy new bills or buy existing bills in the secondary market. In general, investors will not accept a lower yield (bid a higher price) for a new bill than that available on an existing bill. Conversely, investors would not offer a lower price to obtain a higher yield than that prevailing in the market, since they would likely be underbid. As a result, CM bill yields follow other short-term yields within a narrow range. Short-term yields change over time in response to changes in economic activity, the demand for credit, investors’ expectations, and monetary policy as set by the Federal Reserve. CM bill yields declined from about 5.8 percent in 2000 to 1.1 percent in 2004. This decline reflected the overall reduction in short-term rates driven largely by the Federal Reserve’s monetary actions and other market forces. The Federal Reserve started lowering the federal funds rate—the interest rate at which banks lend reserves to other banks overnight—in early 2001, and by 2002 the federal funds rate was at levels not seen since the early 1960s. Beginning in the summer of 2004, the Federal Reserve began to increase the federal funds rate. At the same time, CM bill yields increased to an average of 2.5 percent in fiscal year 2005 (see fig. 5). Treasury paid a higher yield on most CM bills issued during our study period than outstanding bills of similar maturity paid in the secondary market. The average yield differential fell from 47 basis points in fiscal year 2001 to 5 basis points in fiscal year 2004 (see fig. 6). In fiscal year 2005, yield differentials grew, and CM bill yields were about 14 basis points higher on average than those of outstanding bills of similar maturity. Our analysis identified two important factors behind the yield differential decline: lower short-term Treasury yields and reduced CM bill issuance. The first effect was somewhat temporary, while the latter could last. The level of short-term interest rates is largely driven by Federal Reserve policy and market forces rather than by Treasury. Treasury bill yields have risen and may continue to rise and eventually reach levels that prevailed in the late 1990s, thereby erasing the portion of the decline in the yield differential caused by lower interest rates. However, because the 4-week bill is now a permanent feature of Treasury’s auction schedule and has reduced Treasury’s reliance on CM bills, the portion of the decline in the yield differential attributable to relatively lower CM bill issuance is likely to endure. The experience during fiscal years 2002–2005 as a whole suggests that the yield differential could remain about 13 basis points below pre-2002 levels. These findings are discussed in more detail later in this report. The large reduction in the yield differential has helped reduce borrowing costs associated with CM bills. The daily cost associated with the 47-basis-point yield differential in fiscal year 2001 was about $12,900 per $1 billion. In fiscal year 2001, Treasury borrowed $19.2 billion (annualized amount outstanding) using CM bills. Of the $1.06 billion in total borrowing costs associated with CM bills in that year, $70 million was associated with the yield differential.
Since then, the average yield differential has declined and was 14 basis points, or about $3,800 a day per $1 billion borrowed, in fiscal year 2005. During fiscal year 2005, Treasury borrowed about $8 billion (annualized amount outstanding) using CM bills. Total borrowing costs were $215 million and the borrowing cost associated with the yield differential was about $12.8 million. Treasury could achieve savings by further reductions in the yield differential. CM bills may have higher yields because, according to Treasury officials and market participants, they are bought for a different purpose than regular bills. According to market participants, some money market funds and foreign central banks purchase CM bills but for the most part they are not widely used as an investment tool because of their irregularity and short-term nature. Instead, CM bills are primarily used by primary dealers as collateral for repurchase agreements. A repurchase agreement is a form of short-term collateralized borrowing used by dealers in government securities. Figure 7 describes repurchase agreements in more detail. We found the yields of CM bills to be near the yields of overnight repurchase agreements. In fiscal year 2005, CM bill yields were within 2 basis points of overnight repurchase agreement yields. Although repurchase agreements and CM bills are both short-term investments, there are some differences. For example, repurchase agreements are subject to federal, state, and local taxes whereas CM bills are exempt from state and local taxation. Also, CM bills, like regular Treasury bills, are risk free whereas repurchase agreements issued by private borrowers involve some risk. The high-quality collateral in a repurchase agreement (e.g., Treasury securities or agency securities) reduces the credit risk faced by the lender and allows borrowers to obtain cash at a lower cost than they would obtain otherwise. 
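The per-day dollar figures cited above follow from simple arithmetic: the yield differential, converted from basis points to a decimal rate, applied to $1 billion for one day of a 365-day year. A quick check reproduces the approximate $12,900 and $3,800 figures:

```python
def daily_cost_per_billion(differential_bp: float) -> float:
    """Daily interest cost of a yield differential, per $1 billion borrowed."""
    rate = differential_bp / 10_000       # basis points -> decimal rate
    return rate * 1_000_000_000 / 365     # one day's cost on $1 billion

# Fiscal year 2001: 47-basis-point differential -> about $12,900 per day.
print(round(daily_cost_per_billion(47)))  # 12877
# Fiscal year 2005: 14-basis-point differential -> about $3,800 per day.
print(round(daily_cost_per_billion(14)))  # 3836
```

Scaling these daily figures by the annualized amount outstanding gives the order of magnitude of the annual differential-related costs discussed above (the report's exact dollar totals reflect the actual mix of bills, not this simple average).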
However, the lender is still exposed to credit risk because if the borrower fails to repay the loan, the market value of the collateral may be less than the amount owed. Repurchase agreements are structured carefully to reduce this credit risk exposure, for example, by lending less than the market value of the security used as collateral. Nevertheless, repurchase agreements have more risk than securities issued by Treasury and accordingly have higher yields than Treasury securities of similar maturity, which are risk free and are the floor for short-term rates in the money market. Our statistical analysis found that, all other things equal, the amount of CM bills issued was positively correlated with the yield differential, meaning that increases in the amount of CM bills issued relative to the amount of similar-maturing Treasury bills outstanding increased the yield differential, and decreases in the amount issued reduced the yield differential. We also found that CM bills that were part of a “multiple tranche”—successively issued CM bills maturing on the same day—tended to have higher yield differentials, and CM bills maturing around large tax payment dates tended to have lower yield differentials. Changes in the general level of short-term yields also affected the yield differential. While the amount and timing of CM bills are somewhat under Treasury’s control, the general level of short-term yields in the economy is not. Although CM bill yields are determined in auctions, we did not find a statistically significant relationship between the auction performance measures we examined—such as participation, distribution of auction awards, and preauction activity in the when-issued market—and the yield differential. However, we found that in more concentrated auctions Treasury was less likely to pay more than the yield indicated in the when-issued market at the time of auction, which is a positive auction result.
While an increase in the relative amount of CM bills might enhance liquidity and thereby reduce their yields, investors might require a higher yield to acquire relatively large amounts of CM bills. Our statistical analysis found that increases in the amount of CM bills issued relative to outstanding bills with similar maturity increased the yield differential. For example, if Treasury had increased its fiscal year 2005 CM bill issuance by $10 billion, the average yield differential would have been about 4.2 basis points higher according to our analysis. Conversely, reducing the amount of CM bills issued tended to reduce the yield differential. Earlier research by others had similar findings. According to our analysis, the reduced use of CM bills since the 4-week bill was introduced contributed to the overall decline in the yield differential. From fiscal years 1996–2001, the amount of CM bills issued was almost 70 percent of outstanding bills with similar maturity. In fiscal year 2002, the share declined to 30 percent. At the same time, the yield differential declined from about 35 basis points (fiscal year 1996–2001 average) to 4 basis points in 2002. According to our analysis, about 35 percent of the yield differential decline can be attributed to Treasury’s reduced use of CM bills. CM bill issuance has remained low relative to outstanding bills with similar maturity since 2002. These results suggest that the 4-week bill reduced both the use and cost of CM bills as Treasury intended. We also found that consecutive CM bills maturing on the same day—called “multiple tranches”—tended to increase the yield differential. In recent years, Treasury has increasingly issued consecutive, small CM bills with the same maturity date—rather than one larger CM bill—which reduces the average amount outstanding and ultimately reduces borrowing costs. In 2005, three-quarters of the CM bills issued were part of a multiple tranche. 
Our analysis suggests that issuing consecutive CM bills that mature on the same date might increase the yield differential by 6 basis points over a single CM bill. Market participants suggested that if they suspect Treasury might reopen the CM bill later, they may bid less aggressively on a CM bill, which would result in lower prices and higher yields for CM bills. The evidence that issuing in multiple tranches increases the yield differential should be viewed with caution. The estimated coefficient is only significant at the 10 percent level and is sensitive to the inclusion of other explanatory variables. For example, with the inclusion of a variable representing the number of days advance notice before auction, the coefficient of multiple tranche CM bills is no longer significant even at the 10 percent level. Moreover, the lower cost of multiple tranches resulting from smaller average issues and shorter terms to maturity most likely offsets any increase in the yield. For more information on the cost-saving effects of multiple tranches, see pages 38–39.

Treasury has increasingly issued CM bills that mature on individual and corporate tax payment dates and, according to our analysis, this practice may be leading to lower yields and borrowing costs. Our statistical analysis shows the yield differential on CM bills maturing on or near April 15 or on other tax payment dates (generally the 15th of March, June, September, and December) is 12 basis points lower than on other CM bills. This may be explained by empirical evidence suggesting that regular Treasury bills whose maturity dates immediately precede corporate tax payment dates have special value because corporate treasurers may wish to invest excess cash in securities whose cash flows can be used to liquidate cash liabilities. Almost 50 percent (27 of 55) of the CM bills Treasury issued over the last 4 fiscal years matured on large tax payment dates.
In the last 2 fiscal years, Treasury set at least one CM bill to mature on each of the large tax payment dates. So, to a large extent, Treasury has already captured the cost savings from this feature. It is also important to note, however, that the 12-basis-point difference is unlikely to offset even an extra day of borrowing. A shorter-maturity CM bill maturing on a date other than a large tax payment date is likely to cost less than a longer-maturity CM bill maturing on a corporate tax payment date because the debt is outstanding for a shorter period of time.

Our analysis showed that the yield differential rises and falls with the overall level of Treasury bill yields. Figure 8 shows short-term Treasury yields declined from an average of about 4.8 percent in fiscal year 2001 to 1.05 percent in fiscal year 2004. At the same time, the average yield differential fell from 47 basis points to 5 basis points. Our analysis suggests the decline in short-term yields explained about 35 percent of the decline in the yield differential. Since mid-2004, when short-term rates started to increase consistent with Federal Reserve actions, the yield differential has started to widen again, reaching about 14 basis points in fiscal year 2005. Thus, absent Treasury actions, further increases in short-term rates are likely to lead to higher CM bill yield differentials in the future. If short-term Treasury yields return to the 1996–2001 average of approximately 5 percent, we found that the yield differential could exceed 20 basis points and the additional cost of CM bills—assuming current issuance patterns—could increase from $11 million to about $19 million a year.

The yield differential may also be attributable in part to variables that are hard to measure, such as predictability. According to Treasury officials, a regular and predictable borrowing schedule is attractive to investors and helps to achieve Treasury’s objective of lower borrowing costs over time.
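The rate-scenario figures above can be reproduced approximately from the report's own numbers. In the sketch below, the annualized amount outstanding (about $7.9 billion) and the scenario differential (24 basis points, one possible value for "exceeds 20 basis points") are assumptions inferred from the $11 million and $19 million figures; this is a rough sketch, not the underlying statistical model.

```python
# Rough projection of annual CM bill differential cost under higher
# short-term rates. The $7.9 billion outstanding and the 24-basis-point
# scenario differential are assumptions inferred from the report's
# $11 million (current) and $19 million (scenario) figures.

def annual_differential_cost(outstanding, differential_bp):
    """Annual extra cost of the yield differential on the annualized
    amount of CM bills outstanding."""
    return outstanding * differential_bp / 10_000

OUTSTANDING = 7.9e9  # annualized CM bills outstanding (assumed)

current = annual_differential_cost(OUTSTANDING, 14)   # about $11 million
scenario = annual_differential_cost(OUTSTANDING, 24)  # about $19 million
```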
However, both the timing and amount of CM bill auctions are by nature less predictable than regular Treasury bill auctions. Providing too much advance notice of CM bill issuance would reduce Treasury’s flexibility to adjust the amount and timing of issuance to best meet cash needs. It could also cause Treasury to use conservative estimates of its cash needs and borrow more than it actually needs. Treasury’s challenge is to provide market participants enough notice to avoid paying too high a premium for uncertainty without reducing its flexibility. Improving cash forecasting could help Treasury determine the amount of CM bills to be issued sooner and thus provide market participants more notice of CM bill auctions.

Treasury provides limited information on the timing and amount of CM bill auctions ahead of the announcement. The Quarterly Refunding process is Treasury’s way to provide market participants with information and get their feedback about changes to its auction schedule and the issues actively under discussion by Treasury. As part of each Quarterly Refunding, Treasury usually issues a statement indicating that it plans to issue CM bills in the coming quarter. Treasury provides general information on the timing of CM bills, such as “early March” or “early April,” but does not provide the actual date it expects to issue CM bills. In contrast, Treasury publishes the auction schedule, including announcement, auction, and issue dates, for its regular bills up to 6 months in advance. Treasury typically does not provide an estimated issue amount for CM bills or regular bills in its Quarterly Refunding statements. Despite the general notice given by Treasury and the somewhat regular pattern of CM bill issuance in recent years, market participants cannot always predict the timing of CM bill auctions with certainty.
For example, in fiscal year 2005, a prominent money market analyst predicted the exact date of 6 (out of 21) CM bill auctions at least 1 week ahead of the auction announcement and was off by 1 or 2 days for 9 CM bill auctions. However, the analyst’s predicted dates for 2 CM bill auctions were off by 6 days, and the analyst did not predict the remaining 4 CM bill auctions.

The amount of CM bill issues, like their timing, cannot be predicted with certainty. Treasury uses CM bills to manage cash balance swings and, as a result, the amount of CM bills issued varies more than that of regular and predictable securities. Thus, it is not surprising that market participants cannot predict with certainty the amount of CM bill issuance. Issuing in multiple tranches may exacerbate this uncertainty. For example, one market analyst predicted Treasury would auction one CM bill in the first week of December 2004 for $28 billion and one CM bill the following week for $7 billion. Instead, Treasury auctioned three CM bills totaling $42 billion in the first week and none the following week. Several market participants we spoke with said that uncertainty surrounding the amount offered affects their bidding and ultimately Treasury’s borrowing terms. If market participants are uncertain of the amount, they may not bid as aggressively, which potentially reduces the price and increases the yield Treasury pays on CM bills. In addition, the relative inflexibility of Treasury’s demand for cash to avoid a negative cash balance might explain the higher yields on CM bills.

Given that CM bill auctions are less predictable than regular Treasury bill auctions, it is not surprising that CM bill auctions do not perform as well as regular bill auctions by some measures. Better auction performance can be characterized by greater participation and more preauction activity in the when-issued market. These factors theoretically reduce Treasury’s borrowing costs.
By most measures of participation and activity we examined, CM bill auctions perform less well than 4-week bill auctions. However, some other measures indicate that Treasury obtained a better price at CM bill auctions compared with 4-week bill auctions and that there is stronger demand for CM bills than 4-week bills. In order to lower borrowing costs, Treasury seeks to encourage more participation in auctions. In general, large, well-attended auctions improve competition and lead to lower borrowing costs for Treasury. We found that fewer bidders in total are awarded CM bills than 4-week bills. For example, of the 55 CM bill auctions held between 2002 and 2005, more than half had 16 or fewer awarded bidders. In contrast, only about 4 percent of the 209 4-week bill auctions had 16 or fewer awarded bidders while about 50 percent had at least 22 awarded bidders.

We also found that preauction trading activity is sparse before CM bill auctions. Treasury auctions are preceded by forward trading in markets known as “when-issued” markets. The when-issued market is important because it serves as a price discovery mechanism that potential competitive bidders look to as they set their bids for an auction. When-issued trading reduces uncertainty about bidding levels surrounding auctions and also enables dealers to sell securities to their customers in advance of the auction so they are better able to distribute the securities and bid more aggressively, which results in lower costs to Treasury. We counted the number of preauction trades on the day of CM bill auctions and found that when-issued trading is lower prior to CM bill auctions than regular 4-week bill auctions. A broad range of participants generally improves competition and theoretically maximizes the price investors pay for Treasury securities.
Alternatively, a higher concentration—a large share of the auction awarded to few participants—could reduce competition and restrict a security’s supply in the secondary market, preventing its efficient allocation among investors. We evaluated the share of the auction awarded to the top five bidders—a measure used by Treasury in its own studies of auction performance—and found that the share was 60 percent or higher in over half (34 of 55) of CM bill auctions from fiscal year 2002 through 2005. In contrast, the share exceeded 60 percent in only 18 percent (37 of 209) of 4-week bill auctions in the same period. While a higher concentration theoretically reduces competition and the price investors pay, according to Treasury a high concentration ratio in CM bill auctions may imply that some bidders strongly want a particular bill, which may drive the price up and the yield down.

Although greater participation, a broader distribution of awards, and more preauction activity in the when-issued market theoretically improve Treasury’s borrowing costs, we did not find a statistically significant relationship between these factors and the yield differential. However, we did find that concentration was negatively correlated with the auction spread. The auction spread is the difference between the yield Treasury obtains at auction and the yield in the when-issued market at the time of auction. A positive spread (where the auction yield is more than the contemporaneous when-issued yield) indicates Treasury paid a higher yield than expected, which is a negative auction result. Our statistical analysis suggests that Treasury was less likely to pay more than the expected yield indicated in the when-issued market in more concentrated auctions. In other words, higher concentration seemed to improve Treasury’s auction results. App. II provides more information on our CM bill auction analysis.
To meet short-term cash shortfalls without issuing a CM bill, Treasury could maintain higher cash balances or issue regular Treasury bills (e.g., the 4-week bill). When evaluating CM bills relative to other alternatives, it is important to look at total borrowing costs—not just the yield. Treasury’s borrowing costs are based on the amount borrowed, the yield it pays to borrow, and the time the debt is outstanding. We found that CM bills are generally less costly than currently available alternatives despite their higher yield.

To avoid issuing a CM bill, Treasury could run higher cash balances to bridge cash financing gaps. However, it is generally more cost-efficient to repay debt and then issue a CM bill than to run higher cash balances because the interest earned on excess cash balances is generally insufficient to cover borrowing costs. Treasury’s current cash balance target is $5 billion, which represents the amount to be held at the Federal Reserve. Treasury invests excess cash above the $5 billion target in Treasury Tax and Loan (TT&L) accounts. TT&L accounts are held at financial institutions and earn interest rates equal to the federal funds rate less 25 basis points. The rate earned on TT&L accounts is generally less than the average rate Treasury pays on CM and regular short-term bills. As a result, Treasury faces a negative funding spread. The funding spread varies over time and depends on Treasury bill rates relative to the federal funds rate—increases in Treasury bill yields relative to the federal funds rate increase Treasury’s negative funding spread, and declines in Treasury bill yields relative to the federal funds rate reduce Treasury’s negative funding spread. As a result of the negative funding spread, Treasury strives to minimize cash balances in order to reduce overall borrowing costs. To do this, Treasury has worked toward improving cash forecasting.
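The negative funding spread described above can be illustrated numerically. In the sketch below, the federal funds and bill rates are hypothetical values chosen only to show the mechanics; the TT&L rate formula (federal funds less 25 basis points) is from the report.

```python
# Illustrative negative funding spread: TT&L accounts earn the federal
# funds rate less 25 basis points (per the report), which is typically
# below what Treasury pays on its short-term bills. Rates are hypothetical.

FED_FUNDS = 0.0300             # assumed federal funds rate
BILL_YIELD = 0.0290            # assumed short-term Treasury bill yield
TTL_RATE = FED_FUNDS - 0.0025  # TT&L rate: fed funds less 25 bp

funding_spread = TTL_RATE - BILL_YIELD  # negative: earns less than it pays

def carry_cost(balance, days):
    """Net cost of parking `balance` in TT&L accounts for `days` days
    while paying BILL_YIELD on the debt that funded it."""
    return -funding_spread * balance * days / 365

# Holding $10 billion of excess cash for a week costs roughly $290,000
# under these assumed rates.
week_cost = carry_cost(10e9, 7)
```

This carry cost is why repaying debt and later issuing a CM bill is generally cheaper than running persistently high cash balances.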
Treasury has reduced the average cash forecasting error by one-half over the last 6 years according to Treasury officials. They credited improvements to better technology and communication with their “lockbox banks” that process certain tax payments. For example, the Electronic Federal Tax Payment System provided actual cash flow information to Treasury forecasters through electronic notification of pending tax payments, replacing imperfect forecasts. Increasing the earnings on excess cash balances is another way to narrow the negative funding spread. Figure 9 below describes steps Treasury has taken to increase earnings on its excess cash balances. To make large payments when cash balances are low, Treasury can issue a CM or a regular Treasury bill, although as explained earlier Treasury tries to limit changes in the size of regular bill issuance from week to week. When comparing the cost of a CM bill with the cost of a 4-week bill—the regular bill with the shortest maturity—we see that the CM bill is generally less costly for shorter-term needs (i.e., less than 28 days) despite its higher yield. While the 4-week bill may have a lower yield, the amount borrowed is generally outstanding for a longer period. For example, Treasury issued a CM bill on June 3, 2005, for $16 billion. The daily cost of borrowing was about $1.3 million. If Treasury had borrowed $16 billion using the 4-week bill issued the day before, the daily borrowing cost would have been only $1.2 million but the amount would have been outstanding for 16 days longer and cost an additional $18.4 million (see table 3). However, the extra cost of the 4-week bill would be partially offset by the amount earned on cash balances. To compare the total cost of the 4-week bill with the cost of a CM bill requires also looking at what happens to the cash that would have been used to pay the maturing CM bill. CM bills are typically issued in months with midmonth cash inflows. 
If Treasury issued a 4-week bill instead of a CM bill in these months, these midmonth cash flows could be held in TT&L accounts. However, since Treasury generally earns less on excess cash balances than it pays to borrow, the additional borrowing costs associated with the 4-week bill may not be completely offset.

Treasury has taken steps to reduce the use and overall cost of CM bills. Lowering CM bill borrowing costs can be achieved by some combination of reducing the dollar amount issued and reducing the term to maturity (i.e., the number of days outstanding). Recognizing that CM bills were a relatively costly way to absorb cash balance swings, Treasury introduced the 4-week bill in 2001 to help reduce the use of CM bills. Treasury has also increasingly issued CM bills in multiple tranches, which contributes to smaller average issues, shorter terms to maturity, and lower total borrowing costs. Borrowing costs declined in 2002 as Treasury reduced the use of CM bills for longer-term borrowing (i.e., 28 days or more) and short-term rates declined. However, borrowing costs associated with CM bills have increased since 2003.

Treasury has reduced reliance on CM bills since 2001. Initially, Treasury reduced the total amount of CM bills issued by more than half, from $346 billion in fiscal year 2001 to only $124 billion in fiscal year 2002 (see fig. 10). This was the lowest amount issued in the previous 6 years. The decline in CM bill issuance, however, was only temporary. Since 2002, Treasury has increased its use of CM bills, and in fiscal year 2005 Treasury issued $268 billion in CM bills. In 2001, Treasury said that CM bills were not the most cost-efficient means to absorb cash balance swings. At that time, the yield differential was about 47 basis points and the amount of CM bills issued was about 20 percent of Treasury’s short-term financing. Treasury introduced the 4-week bill in 2001 to help reduce the need for CM bills by helping to smooth swings in cash balances.
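The June 3, 2005, example discussed earlier can be sketched as follows. The yields (2.97 percent and 2.74 percent) are assumptions chosen to roughly reproduce the report's daily costs of about $1.3 million and $1.2 million; the CM bill's 12-day term follows from the "16 days longer" comparison with the 28-day 4-week bill. Table 3 in the report has the actual figures.

```python
# Sketch of the CM bill vs. 4-week bill comparison behind table 3:
# $16 billion borrowed either with a 12-day CM bill or a 28-day 4-week
# bill. Yields are hypothetical values chosen to roughly match the
# report's daily costs of about $1.3 million and $1.2 million.

def borrowing_cost(amount, annual_yield, days):
    """Total interest cost of `amount` outstanding for `days` days."""
    return amount * annual_yield * days / 365

AMOUNT = 16e9
cm_cost = borrowing_cost(AMOUNT, 0.0297, 12)         # CM bill, 12 days
four_week_cost = borrowing_cost(AMOUNT, 0.0274, 28)  # 4-week bill

# Despite its lower yield, the 4-week bill costs roughly $18 million
# more because the debt is outstanding 16 days longer (before any
# partial offset from interest earned on the excess cash balances).
extra_cost = four_week_cost - cm_cost
```

The design point is that total cost, not yield alone, drives the comparison: amount times yield times days outstanding.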
In general, shorter-term securities provide Treasury greater flexibility to adjust cash balances and outstanding debt in response to actual cash needs. From fiscal year 2002 through 2005, CM bills represented only 7 percent on average of Treasury’s short-term debt issuance. However, according to Treasury, the ability of the 4-week bill to absorb cash balance swings is limited to swings in cash balances that are longer than the two-week cash financing gap at the beginning of most months. Treasury has taken steps to borrow cash closer to the day it is needed, which contributes to smaller average issues, shorter terms to maturity, and lower borrowing costs. Since 2003, Treasury has increasingly issued CM bills in multiple tranches—successive shorter-term CM bills that mature on the same day. For example, instead of issuing one $30 billion or $40 billion CM bill on the 1st that matured on the 15th, Treasury might issue three smaller CM bills on the 1st, 3rd, and 7th (all matured on the 15th). As mentioned earlier, issuing in multiple tranches may increase the rate paid, but it allows Treasury to borrow closer to the time when cash is needed and ultimately reduces borrowing costs by reducing the average term to maturity and annualized amount outstanding. Table 4 shows how issuing in multiple tranches would have reduced borrowing costs in June 2005. The move toward multiple tranches has led to smaller CM bill issues on average. Figure 11 shows that the average CM bill issue has declined by more than half since 2001. Following introduction of the 4-week bill in 2001, Treasury reduced the use of CM bills for longer-term borrowing (i.e., 28 days or more). Figure 12 shows that Treasury has reduced the average term to maturity (i.e., length of time outstanding) for CM bills issued by about half since 2001. Prior to fiscal year 2002, Treasury issued CM bills with terms as long as 83 days. Since 2002, the longest maturing CM bill was 19 days. 
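The benefit of tranching described above can be quantified in dollar-days outstanding, using the report's example dates (one large bill issued on the 1st versus three smaller bills issued on the 1st, 3rd, and 7th, all maturing on the 15th). The 3 percent yield below is an assumption applied uniformly, ignoring any yield penalty on multiple tranches.

```python
# Dollar-days comparison of a single CM bill vs. multiple tranches,
# following the report's example: issue on the 1st (or 1st/3rd/7th),
# all maturing on the 15th. The 3 percent yield is hypothetical.

YIELD = 0.03  # assumed annual yield

def interest_cost(amount, days, annual_yield=YIELD):
    """Interest on `amount` outstanding for `days` days."""
    return amount * annual_yield * days / 365

# One $30 billion CM bill outstanding from the 1st to the 15th (14 days):
single = interest_cost(30e9, 14)

# Three $10 billion tranches issued on the 1st, 3rd, and 7th
# (outstanding 14, 12, and 8 days, respectively):
tranches = (interest_cost(10e9, 14)
            + interest_cost(10e9, 12)
            + interest_cost(10e9, 8))

saving = single - tranches  # about 19 percent less interest
```

At the same yield, tranching saves money purely by shrinking the average amount outstanding, which is why the practice can pay off even if each tranche carries a slightly higher yield.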
Lower average issue sizes, together with the reduced term to maturity, contributed to a lower annualized amount of borrowing outstanding (see fig. 13). After Treasury introduced the 4-week bill in 2001, the annualized amount of CM bills outstanding declined dramatically from $19.2 billion in fiscal year 2001 to $3.3 billion in fiscal year 2002. However, since then the amount has generally increased as Treasury increased the use of CM bills to finance intramonth cash financing gaps. From fiscal year 2003 to fiscal year 2004, the annualized amount outstanding more than doubled to $8.1 billion. Treasury issued a similar amount of CM bills in fiscal year 2005.

While Treasury’s actions from fiscal years 2001 to 2002 reduced CM bill borrowing costs, overall declining interest rates also helped. Borrowing costs began declining in 2001 and dropped dramatically in 2002 (see fig. 14). Treasury’s smaller CM bill issuances and reduced term to maturity helped reduce the annual amount outstanding. According to our analysis, these actions taken by Treasury contributed to about $610 million of the reduction in borrowing costs from 2001 to 2002. We estimated that the remaining decline was due to reductions in the yield paid on CM bills. As noted earlier, the yield differential varies with the overall level of short-term yields in the economy. As these yields declined, so did the yield differential—by 43 basis points from fiscal year 2001 to 2002. However, since 2003 the borrowing costs associated with CM bills have increased. Our analysis indicates that almost all of the increase in CM bill borrowing costs is due to increasing rates. While Treasury cannot control the overall level of short-term interest rates in the economy, it can continue to take steps to reduce the use of CM bills and, according to our analysis, the CM bill yield differential. We identified a range of options that may reduce the use or cost of CM bills.
The most promising options in our view—exploring ways to better align cash flows and increasing the earnings on cash balances—are discussed first. Other options, such as introducing a new shorter-term regular bill and enhancing the transparency of CM bill auctions, are discussed later.

Given that Treasury has increasingly used CM bills to fill regular cash financing gaps, taking steps to better align large cash flows might help reduce the use of CM bills. There are three ways to do this: smoothing the payment of large federal expenditures, smoothing corporate and nonwithheld individual tax payments, and aligning increased debt issuance with large payments. If cash flows had been aligned to better fill the frequent cash financing gaps, Treasury might not have needed the 11 CM bills it issued on the 1st through the 3rd of the month that matured on the 15th of the month during fiscal year 2005 and could have reduced CM bill borrowing costs by as much as $174 million, or about 80 percent, in fiscal year 2005. Statutory and regulatory changes would be required to change the timing of federal benefit payments and tax collections. While implementing any changes is outside debt managers’ control, Treasury could start a discussion with other agencies and Congress to identify the costs and benefits of alternatives to align or more evenly distribute federal expenditures and tax receipts and seek any statutory authority necessary to better smooth them. However, changing the timing of benefits and tax collections, either jointly or independently, will have direct and indirect effects not only on borrowing costs but also on individuals, nonfederal entities, and federal government operations. These effects, of course, would have to be considered when making decisions about whether and how to smooth federal cash flows. Looking at SSA’s experience with spreading the payments for new Social Security beneficiaries throughout the month may be useful.
Treasury could also explore aligning increased borrowing capacity with large payments in the beginning of the month to help reduce the cash financing gap and reduce the use of some CM bills. Overall borrowing needs are projected to be significant and to grow over the long term, and Treasury will need to consider changes to its debt portfolio and auction schedule to increase borrowing. Cash flow considerations can be a tiebreaker when choosing between equally attractive alternatives. In the past, when making changes to the range of maturities offered, Treasury has considered aligning cash inflows with outflows in the beginning of the month. Going forward, Treasury may reduce its reliance on CM bills—at least temporarily—by continuing to consider adjustments that would better align its increased debt issuance with its largest cash payments in the beginning of the month.

Currently, Treasury faces a negative funding spread, and as a result it is more costly to maintain high cash balances to meet upcoming payments than to issue a CM bill. However, increasing the earnings on cash balances would reduce the costs associated with running higher cash balances and may ultimately reduce the use of some CM bills. Treasury previously explored increasing the rate earned on TT&L accounts; however, banks objected. Since then, Treasury introduced the TIO program, which pays higher rates on cash balances than the TT&L program. Increasing balances in TIOs relative to the TT&L accounts would help Treasury increase earnings on its cash balances. However, Treasury would lose some flexibility because TIO balances are not callable in the event cash balances unexpectedly fall below cash needs. Treasury could also explore broader options to increase earnings on cash balances. For example, some countries’ debt management offices engage in reverse repurchase agreements. In a reverse repurchase agreement, Treasury would lend market participants cash to purchase securities in the secondary market.
The borrower would then return the cash borrowed plus interest at a specified time, usually overnight. Reverse repurchase agreements could potentially narrow Treasury’s negative funding spread because repurchase agreement rates are generally higher than rates earned on TT&L accounts. Treasury could explore the benefits and costs of designing a new system to perform these transactions, but implementing reverse repurchase agreements would require legislative authority, according to Treasury.

Another way to smooth cash flows would be to introduce a shorter-term instrument. The 4-week bill was partially successful in smoothing cash flows throughout the year and reducing the use of CM bills. However, in recent years, Treasury has increasingly used CM bills to fill cash financing gaps that frequently occur in the first two weeks of the month. Introducing a regular shorter-term bill might help fill the frequent cash financing gap and reduce the use of CM bills further. A new bill that matures on Thursdays like other regular bills would not give Treasury the same flexibility as CM bill issuance because the beginning-of-month outflows and midmonth inflows often fall on different days of the week. Alternatively, a new short-term security with specific issue and redemption dates (rather than a day of the week) might help Treasury manage cash flow gaps. In Treasury’s view, a shorter-term security would not likely generate market interest in a way that would distinguish it, on a cost basis, from CM bills. However, our analysis of CM and similar-maturing outstanding bills in the secondary market shows that buyers pay higher prices (or accept a lower yield) for short-term bills than Treasury currently accepts (or pays) for CM bills. Market participants we spoke with expressed mixed views on the demand for another short-term Treasury security.
While some said there may not be enough demand for a short-term instrument, others said that bills maturing on the 15th, for example, would have natural buyers and give investors more flexibility by offering another maturity date for short-term instruments. Our analysis shows that CM bills maturing on the 15th of months when tax payments are due likely have lower yields than CM bills maturing on other days. Treasury can examine whether it would obtain better prices on a shorter-term bill with issue and maturity dates on specific days of the month.

Increased transparency on the potential size of CM bills might improve bidding in CM bill auctions and potentially reduce the yield Treasury pays on CM bills. While market participants might expect Treasury to issue CM bills in the beginning of the month, market participants told us that they do not always know the size of the CM bill offering. As a result, they may not bid as aggressively. However, there are tradeoffs to consider. In order to provide market participants more advance notice on the general size and timing of CM bills, Treasury would have to improve its own cash forecasting. Also, Treasury achieves lowest-cost financing in part by providing the market with certainty. Trying to add certainty to CM bill issuance would eliminate the flexibility that CM bills provide and may actually increase borrowing costs.

Treasury limits the maximum auction award to a single bidder to 35 percent of the total amount offered to the public.
The 35-percent cap is intended, in part, to foster a liquid secondary market for a new issue by ensuring adequate and wide distribution of the supply of a security among investors and prevent temporary shortages or “short squeezes.” However, our analysis suggests that relaxing the 35-percent rule for CM bill auctions (and thus allowing higher concentration) might promote more aggressive bidding, improve auction prices for Treasury, and thus reduce the borrowing costs associated with CM bills. There are a number of issues to explore. For example, market participants may come to expect poorer liquidity for CM bills, which may lead to less aggressive bidding over time. Further, there would be a higher risk of a squeeze in the CM bills market, although the risk would be relatively small in our view because CM bill trading is sparse both before auctions and after auctions according to our analysis. (See app. II for more information.) Treasury could explore adjusting the per-bidder cap on an experimental basis and determine whether there are benefits of relaxing the existing 35-percent rule in CM bill auctions. Lastly, exploring other countries’ practices may provide useful insights. For example, other countries’ debt management offices use repurchase agreements as a tool to support their cash management. Exploring other countries’ experiences may provide insights on the benefits and costs of repurchase agreements for a central government. Although repurchase agreements generally have slightly higher yields than CM bills, they provide an alternative way to obtain cash for short periods, usually overnight. In fiscal year 2005, Treasury announced that it was examining the feasibility of a securities lending facility, which would operate much like a repurchase agreement. 
Although this facility is still in its early proposal process, Treasury generally intends to lend securities that are in such short supply that they may threaten the settlement of Treasury market transactions in a timely manner. In return, Treasury would receive cash or bonds. While Treasury intends borrowers to use this facility at their discretion and does not plan to use it for Treasury’s own cash needs, it might also consider how the facility would affect its cash balances and whether the lending facility could be used to obtain cash for very short periods. In the face of persistent federal deficits and growing net interest costs, reexamining debt management practices is warranted. Treasury has made progress toward reducing the cost of CM bills, but it may be possible to do more. This report presents options worth exploring that taken alone or in combination may further reduce federal borrowing costs by reducing either the use or the cost of unscheduled CM bills. CM bills will continue to be a necessary debt management tool to meet unexpected cash needs when Treasury has low cash balances or when Treasury is nearing the debt ceiling. However, in recent years, Treasury has increasingly used CM bills to fill cash financing gaps that frequently occur in the first two weeks of the month. Our analysis indicates that the yield differential between CM bills and outstanding bills of similar maturity has increased as short-term rates have risen. If these rates rise further, as market participants expect, and return to levels consistent with a longer-term historical average, the CM bill yield differential is likely to rise above levels seen in recent years. While Treasury does not vary its debt management strategy in response to changing interest rates, it should be mindful that increasing rates are likely to raise the relative cost of unscheduled CM bills. 
As a result, Treasury should consider options, including better aligning cash flows and increasing earnings on cash balances, that may reduce the frequent use of CM bills and ultimately overall borrowing costs. We identified options that could potentially reduce the use and cost of CM bills. We recommend that Treasury explore options such as those discussed in this report and any others it identifies that may help Treasury meet its objective of financing the government’s borrowing needs at the lowest cost over time. We recognize that there are a number of tradeoffs to consider. In its exploration, Treasury should consider the costs and benefits of each option and determine whether the benefits—in the form of lower borrowing costs—to the federal government (and so to taxpayers) outweigh any costs imposed on individuals, businesses, and other nonfederal entities. Treasury should also consider how options may be combined to produce more beneficial outcomes. Implementing some of these options would require changes to statute or regulations. If Treasury determines that any of these changes would be beneficial, we encourage Treasury to begin discussions with relevant federal agencies and the Congress about obtaining the necessary authorities. We requested comments on a draft of this report from Treasury and the Federal Reserve. In oral and written comments, Treasury generally agreed with our findings, conclusions, and recommendations. Treasury said that it is committed to continuing to explore ways to further reduce financing costs through changes in the use of CM bills and that many of the options we identified are embodied in its current debt management policy. Treasury emphasized that statutory authority is needed for some options, particularly changing the timing of receipts and expenditures and improving earnings on excess cash balances. Treasury also suggested some technical changes throughout the report that we have incorporated as appropriate. 
Treasury’s comments appear in appendix IV. In addition, the Federal Reserve Board provided technical comments that we incorporated as appropriate. As you know, 31 U.S.C. § 720 requires the head of a federal agency to submit a written statement on actions taken to address our recommendations to the Senate Committee on Homeland Security and Governmental Affairs and to the House Committee on Government Reform not later than 60 days after the date of this report. A written statement must also be submitted to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report. Because agency personnel serve as the primary source of information on the status of recommendations, we request that you also provide us with a copy of your agency’s statement of action to serve as preliminary information on the status of open recommendations. We are sending copies of this report to the Chairs and Ranking Members of the House Committee on Ways and Means, the Senate Committee on Finance, the House Committee on Financial Services, the Senate Committee on Banking, Housing and Urban Affairs, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Susan J. Irving at (202) 512-9142 or irvings@gao.gov or Thomas J. McCool at (202) 512-2700 or mccoolt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff making key contributions to this report are listed in appendix V. To identify which cash management bill (CM bill) features, if any, explained the higher yields paid on CM bills, we performed a statistical analysis of CM bills issued during fiscal years 1996–2005.
The dependent variable in the regressions was the difference between a CM bill’s investment yield at the time of auction and the investment yield on similar-maturity Treasury bills (i.e., the “yield differential”), where the latter is measured by the weighted average yield on Treasury securities that mature immediately before and after the CM bill. We regressed the yield differential on key CM bill features including announcement period, term to maturity, issuance amount, and whether the CM bills were off-cycle or a reopening of a previous issue. We also examined the effects of CM bills with different issue and maturity dates, such as whether the CM bills were issued on the 1st–3rd days of the month, matured on a large tax payment date, or were issued during a debt issuance suspension period (DISP). While we analyzed the yield differential for all CM bills issued during fiscal years 1996–2005, we focused on the 55 CM bills issued during fiscal years 2002–2005 because the introduction of the 4-week bill in 2001 led to a significant reduction in the amount and term to maturity of CM bills and caused a structural change in the CM bill market. Our empirical results suggest that several variables affected the yield differential during the period studied. Lower yield differentials appeared to be associated with lower short-term interest rates, relatively low CM bill auction amounts, and maturity around large tax payment dates. There is also some evidence that CM bills have higher yields when issued in multiple tranches, which are successively issued CM bills with the same maturity date. Despite these findings, the opportunity for Treasury to achieve additional savings by further exploiting characteristics that affect CM bill yields appears limited. The existing literature on CM bills and their costs is limited.
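As a minimal sketch of the estimation approach described above, the following uses a single regressor and entirely hypothetical data; the function and variable names are ours for illustration, and the report's actual estimates use the full set of CM bill features, not this toy sample.

```python
# Illustrative sketch only: hypothetical data, not the report's actual sample.
# Estimates a one-variable ordinary least squares slope of the yield
# differential (in basis points) on the level of the similar-maturity
# Treasury bill yield (in percent).

def ols_slope_intercept(x, y):
    """Ordinary least squares fit for a single regressor."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var_x = sum((xi - mean_x) ** 2 for xi in x)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical auction observations: (bill yield in percent, differential in bp)
bill_yield = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
differential_bp = [4.0, 6.1, 7.9, 10.2, 11.8, 14.1]

slope, intercept = ols_slope_intercept(bill_yield, differential_bp)
# slope is the estimated change in the differential (bp) per 1-percentage-point
# rise in the bill yield, analogous in spirit to the coefficients in table 6
```

In the report's multivariate setting, each binary CM bill feature would enter as an additional column of the design matrix rather than a second call to a one-variable fit.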
In an early study of CM bills issued from 1980 through 1988, Simon found that from 1 day before to 1 day after announcement, the average interest rate differential between CM bills and adjacent-maturity Treasury bills increased by 20 basis points to a statistically significant 28 basis points. An analysis of variance demonstrated that the increase in the differential was greater for CM bills that had shorter maturities, were part of larger issues, or had shorter when-issued periods. In a later study, Simon found that from January 1985 through October 1991 CM bills cost about 6 basis points more than regular bills. There have been many fundamental changes in the way Treasury raises short-term cash since the article’s publication that may make its findings less applicable now. These changes include the move from multiple-price to single-price auctions, the growth of the repurchase agreement market, and the prevalence of off-cycle CM bills. Most important was the addition of the 4-week Treasury bill to Treasury’s regular borrowing schedule in 2001, which led to a significant reduction in the amount and term to maturity of CM bills. In a study of the effect of reopenings on the liquidity of Treasury bills, Fleming included a binary variable identifying CM bill reopenings. Fleming studied Treasury bills issued from 1996 to 2000 and found that reopenings of any kind had a positive and significant effect on yields. Moreover, the CM bill variable was significantly positive for 13-week bills, demonstrating that regular bills reopened as CM bills tend to have higher yields. Fleming interpreted the results as showing that the yield-reducing effect of enhanced liquidity is more than offset by the yield-increasing effect of an increase in supply. In a recent paper, Seligman evaluated the differential between CM bill yields and the yields on Treasury bills with similar maturity dates.
Seligman found that CM bills that were issued off-cycle or were large (relative to outstanding Treasury bills of similar maturity) had higher yields than other CM bills. In contrast, CM bills with longer durations or that had 2-day notices before auction had lower yields than other CM bills. This study suggested that Treasury could reduce the yields of CM bills by avoiding off-cycle issuances, reducing their relative size, issuing CM bills with longer terms, and giving 2 days’ notice in advance of auctions. However, Seligman’s data covered auctions held between 1990 and 1999 and thus did not include more recent auctions held after 2001, when the 4-week bill was introduced and only very short-term CM bills were issued. Hence, the findings of his study may not apply to the current environment. In another recent study based predominantly on auctions held before the introduction of the 4-week Treasury bill, Christopher found that the cost of CM bills, as measured by the spread between CM bill and repurchase agreement yields, is negatively influenced by the time between the sale of a security and the settlement date. The rationale offered for this finding is that the delay allows administrative efficiencies. Christopher, like Seligman, found that longer-maturity CM bills have lower yields than shorter maturities. She suggests that the optimum maturity of a CM bill is nearly 93 days. This finding highlights the inapplicability of earlier research to the current environment when 4-week Treasury bills are available to help meet short-term financing needs. Choosing the appropriate reference point for CM bill yields is important, and the choice differed across earlier studies. Christopher focused on the difference between CM bill and repurchase agreement yields instead of the difference between yields on CM bills and Treasury bills with similar maturities, which was the focus of Simon’s and Seligman’s earlier research as well as our own.
Although repurchase agreements and CM bills are both means to obtain cash in the short term, they have a major difference—CM bills, like regular Treasury bills, are risk-free whereas repurchase agreements issued by private borrowers involve some risk. As a result, we focus on the difference between CM bill and outstanding Treasury yields because it provides a more direct indicator of the higher yield that Treasury pays when issuing CM bills. Financial market analysts we spoke with agreed that this yield differential measure was an appropriate focus for our research. Specifically, our estimate of the yield differential is the difference between a CM bill’s yield and the average secondary-market yield on other Treasury bills that are most similar (in terms of maturity) to the CM bill on the day of auction. That is, we compare CM bill yields with yields on the two nearest-maturing—one before and one after—Treasury bills. CM bill yields were obtained from the Bureau of the Public Debt (BPD) while rates on similar-maturity outstanding Treasury bills were obtained from the Wall Street Journal (WSJ). For each Treasury bill, the bid and ask rates were converted to yields and averaged. Next, the weighted average yield for the two bills nearest in maturity to the CM bill was derived. The weights were based on the relative difference in each bill’s maturity date from that of the CM bill, with the Treasury bill having a closer maturity date receiving a greater weight and the weights summing to one. In the final step, the weighted average Treasury bill yield was subtracted from the CM bill auction yield to obtain the yield differential. There are limitations to our yield differential estimate. For example, any effect from the announcement of CM bills on yields for similar-maturing bills is not captured. If the announcement of a CM bill increased the yield on similar-maturing bills, then our estimate may be understated.
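The weighting scheme just described can be sketched as follows; all figures are hypothetical and serve only to illustrate the mechanics, not to reproduce any number in the report.

```python
# Sketch of the yield differential calculation described above.
# All inputs are hypothetical; they do not come from the report's data.

def weighted_bracket_yield(cm_maturity_day, before, after):
    """Weighted average yield of the two Treasury bills bracketing a CM bill.

    `before` and `after` are (maturity_day, yield_percent) tuples for the
    nearest-maturing bills before and after the CM bill. The bill whose
    maturity date is closer to the CM bill's receives the greater weight,
    and the weights sum to one.
    """
    day_b, yield_b = before
    day_a, yield_a = after
    span = day_a - day_b
    weight_b = (day_a - cm_maturity_day) / span  # nearer bill gets larger weight
    weight_a = (cm_maturity_day - day_b) / span
    return weight_b * yield_b + weight_a * yield_a

# Hypothetical example: CM bill matures on day 10; surrounding bills mature
# on days 7 and 14 with secondary-market yields of 2.40% and 2.48%.
benchmark = weighted_bracket_yield(10, before=(7, 2.40), after=(14, 2.48))

# Yield differential: CM bill auction yield minus the weighted benchmark,
# expressed in basis points.
cm_auction_yield = 2.55
differential_bp = (cm_auction_yield - benchmark) * 100
```

Here the bill maturing on day 7 is 3 days from the CM bill and the bill maturing on day 14 is 4 days away, so the earlier bill receives the larger weight (4/7), consistent with the weighting rule in the text.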
Also, in some cases, the surrounding Treasury bills we used could include CM bills that were reopenings of regular Treasury bills. This would also lead to an understatement of the yield differential because the yield on the outstanding securities including CM bills would be higher than outstanding securities that did not include CM bills. However, few CM bills issued in fiscal years 2002–2005 were reopenings. Alternatively, if the yield curve were upward sloping and concave, the curvature of the yield curve would result in a positive estimate of the spread even before considering the effects of other factors and create a positive bias in our calculation of the yield differential. Table 5 provides data on key attributes of CM bills, many of which we used in an attempt to explain the yield differential. The table divides the data into two subperiods. The first subperiod covers auctions held from fiscal year 1996 through fiscal year 2001 before the introduction of the 4-week Treasury bill, while the second subperiod includes auctions held from fiscal year 2002 through fiscal year 2005 after the 4-week bill’s introduction. Table 5 highlights the significant difference in CM bill maturities between the two subperiods. For the 66 CM bills auctioned in fiscal years 1996 through 2001, the average maturity exceeded 27 days while the longest-maturity CM bill had a term of 83 days. In contrast, in the 55 CM bill auctions held from fiscal year 2002 through 2005 the average maturity of the CM bills was only 9.6 days while the longest-maturity CM bill had a term of 19 days. In the more recent period, even the maximum maturity of 19 days was much shorter than the average maturity of 27.4 days during the 6 years before 4-week Treasury bills were introduced. Treasury also reduced its reliance on CM bills for short-term financing after 2001.
The average dollar amount of CM bills issued at each auction declined from $21.6 billion in fiscal years 1996 through 2001 to only $14.8 billion in fiscal years 2002 through 2005. In contrast, Treasury increased issuance of regular short-term Treasury bills. Before fiscal year 2002 the average amount of outstanding Treasury bills with similar maturities to newly auctioned CM bills was $33.1 billion. The average rose to $65.9 billion in the more recent period. As a result, newly auctioned CM bills averaged less than one-fourth of the average amount of outstanding Treasury bills with similar maturities during the more recent subperiod compared with three-fourths during the earlier subperiod. This reflects the importance of 4-week Treasury bills during the later period. Many of the features listed in table 5 were represented by binary variables set equal to 1 if a CM bill had the characteristic and 0 if it did not. Binary variables include auction during a DISP, issuance on the 1st–3rd days of the month, maturation on a tax due date, and a combination of issuance on the 1st–3rd days and maturation on a large tax due date. Additional binary variables capture whether or not a CM bill matured between April 15 and the end of the month and whether or not a CM bill was part of a multiple tranche. For each binary variable, table 5 shows the percentage of CM bills that had a particular feature. Among the most notable changes in these characteristics between the two subperiods was the more than twofold increase in the share of CM bills issued off-cycle, from 39 percent to 86 percent. Because of the dramatic decrease in CM bill maturities and reduced reliance on CM bills for short-term financing after the introduction of the 4-week bill, our effort to identify characteristics that might affect the yield differential focused on CM bill auctions held from fiscal years 2002 through 2005.
Column A of table 6 provides the estimated coefficients and summary statistics for an equation that includes attributes that have a significant effect on the yield differential. Using the same specification for the 66 CM bill auctions in the earlier period, fiscal years 1996 through 2001, produced results that differ significantly from the estimates for the more recent period. Testing this specification for structural change using a Chow test resulted in an F-statistic of 5.18, which permitted us to reject the hypothesis that the relationship remained stable between the two periods at the .01 significance level. This test result provided support for our decision to focus on auctions held after the 4-week Treasury bill was introduced. Our analysis suggests that investors may require a proportionate rather than an absolute differential as compensation for unscheduled CM bills. While previous research has not examined whether the yield differential is correlated with the overall level of Treasury bill yields, figure 8 in the main text indicates that the differential may tend to move in the same general direction as the level of secondary-market yields on Treasury bills with similar maturity. The estimated coefficient of the Treasury bill yield shown in column A of table 6 is 0.038, which suggests that a 1-percentage-point increase in the Treasury bill yield is associated with a 3.8-basis-point increase in the yield differential. During fiscal year 2005, for example, the average yield on Treasury bills with maturities comparable to newly auctioned CM bills was 2.42 percent and the yield differential averaged 13.5 basis points. If the yields on comparable-maturity Treasury bills had been 5 percent instead of 2.42 percent, the results imply that yield differentials would have been about 10 basis points higher in fiscal year 2005 than they actually were.
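The arithmetic behind this interest-rate counterfactual can be reproduced directly; the coefficient and yields below are taken from the text, and the calculation is a back-of-the-envelope check rather than output of the estimated model.

```python
# Back-of-the-envelope check of the interest-rate sensitivity discussed above.
# The coefficient and yield figures are taken from the text of the report.

coef_bp_per_pp = 3.8        # 0.038 pp coefficient = 3.8 bp per 1-pp rise in bill yield
actual_yield = 2.42         # average similar-maturity bill yield, FY2005 (percent)
hypothetical_yield = 5.00   # counterfactual yield level (percent)

# Implied increase in the yield differential, in basis points
extra_differential_bp = coef_bp_per_pp * (hypothetical_yield - actual_yield)
# roughly 10 basis points, matching the statement in the text
```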
A continuation of the rise in Treasury bill yields that began in 2004 is therefore likely to result in an increase in the yield differential. Our results show that as the supply of a CM bill rises relative to the supply of similar investment alternatives, the relative price of the CM bill declines and the yield differential increases. The results for the 55 auctions held from fiscal years 2002 through 2005 suggest that the yield differential rises with an increase in the ratio of the amount of CM bills auctioned to the average amount of similar-maturity Treasury bills outstanding, as shown by the positive and significant coefficient of 0.282 for this variable. The variable’s significant positive coefficient is consistent with the results of Simon, Fleming, and Seligman. In studying the relationship between auction size and yields, Seligman noted that an increase in the relative amount of CM bills auctioned could have two opposite effects on their relative yields. On the one hand, a higher relative amount could increase liquidity and therefore reduce the yield differential. On the other hand, a higher yield differential might be necessary to attract sufficient investor interest when the amount of CM bills being auctioned is large relative to outstanding Treasury bills with similar maturities. Our results suggest the supply effect dominates. The estimated coefficient of 0.282 on the ratio of CM bills auctioned to the average amount of similar-maturity Treasury bills outstanding implies that a $1 billion increase in the amount of CM bills auctioned would raise the yield differential by 0.43 basis points in 2005, other things constant. This coefficient can also be used to indicate how much higher the yield differential might have been if CM bills had remained as important a source of short-term Treasury financing in fiscal year 2005 as they were in the years before the 4-week Treasury bill was introduced.
Applying the coefficient of 0.282 to the 1996–2001 average ratio of the amount of CM bills to the average amount of similar-maturity Treasury bills (0.687), rather than to the actual 2005 ratio of 0.190, raises the yield differential by 14 basis points. In other words, if CM bills were used as intensively in fiscal year 2005 as they were during earlier years before the 4-week Treasury bill’s introduction, the yield differential might have been 28 basis points in 2005 instead of the actual differential of 14 basis points. Our work suggests that maturity on or around a tax due date, which is a feature not studied in previous research, reduces the yield differential. Table 6 shows that there is a 12-basis-point reduction for CM bills that mature from the 15th to the end of April. Similarly, CM bills that mature on tax due dates other than April 15 (i.e., the 15th of March, June, September, or December) tend to have yield differentials that are lower by nearly 12 basis points. This may be explained by empirical evidence suggesting that regular Treasury bills whose maturity dates immediately precede corporate tax payment dates have special value because corporate treasurers may wish to invest excess cash in securities whose cash flows can be used to liquidate cash liabilities. Financial market participants informed us that CM bills issued in several tranches may lead to cautious bidding and therefore require higher yields to attract investors. Multiple-tranche CM bills are CM bills that are issued on different days within a short period that have the same maturity date. Previous research has not studied this feature of CM bills. Reliance on multiple-tranche CM bills increased from slightly less than one-third of issues in the earlier subperiod to approximately one-half of CM bills issued during fiscal years 2002–2005.
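The supply-effect arithmetic above can also be reproduced directly; all figures come from the text, and the calculation simply rescales the reported coefficient rather than re-estimating anything.

```python
# Reproduces the supply-effect arithmetic discussed above, using the
# coefficient (0.282) and the ratios and amounts reported in the text.

coef_pp_per_ratio = 0.282      # yield differential (pp) per unit of the CM bill ratio
avg_outstanding_bn = 65.9      # avg similar-maturity bills outstanding, FY2002-2005 ($B)

# Effect of a $1 billion increase in the CM bill auction amount, holding the
# outstanding amount constant, expressed in basis points
per_billion_bp = coef_pp_per_ratio * (1 / avg_outstanding_bn) * 100  # ~0.43 bp

# Counterfactual: FY1996-2001 average ratio (0.687) vs. actual FY2005 ratio (0.190)
ratio_early, ratio_2005 = 0.687, 0.190
extra_bp = coef_pp_per_ratio * (ratio_early - ratio_2005) * 100  # ~14 bp

# Added to the actual FY2005 differential of about 14 bp, the counterfactual
# differential would have been roughly 28 bp
counterfactual_bp = 14 + extra_bp
```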
To test whether multiple-tranche issues affect CM bill yields, we included a variable identifying this type of CM bill in the equation shown in table 6, column A. The results suggest that CM bills that are part of a multiple tranche might have yields that are 6.0 basis points higher than other CM bills. However, while the coefficient is significant at the 10 percent level, it is not significant at the 5 percent level. Moreover, the estimated multiple-tranche coefficient is not as robust as other coefficient estimates. For example, with the inclusion of a variable representing the number of days’ advance notice before auction in column E, the coefficient of multiple-tranche CM bills is no longer significant even at the 10 percent level. Accordingly, the evidence that multiple-tranche issues increase the yield differential should be viewed with caution. We also tested whether the issuance of CM bills on the 1st–3rd days of the month or during DISPs affected the yield differential. Because one-half of all CM bills were issued on the 1st–3rd days of the month in recent years, these issues might in some sense be considered more regular and thus require lower yields than CM bills issued at other times. However, the estimated coefficient of a variable identifying CM bills issued on the 1st–3rd days of the month was insignificant. CM bills have been a useful tool for Treasury when approaching the debt ceiling. From fiscal years 2002 through 2005, 20 percent of CM bills were issued during DISPs—a period for which the Secretary of the Treasury has determined that obligations of the United States may not be issued without exceeding the debt ceiling. While the coefficient of a variable identifying CM bills issued during DISPs had a negative sign, it was not significantly different from zero.
Studies by Seligman and Christopher identified several other variables that had significant effects on the differential between yields on CM bills and either the yield on regular bills with similar maturity or the yield on repurchase agreements. As noted earlier, however, these studies employed samples consisting either entirely or mainly of observations from CM bill auctions held before the introduction of the 4-week Treasury bill in fiscal year 2001. The only finding from earlier studies that appears to remain valid in this new environment of shorter-maturity CM bills is that an increase in the ratio of CM bills auctioned to the outstanding amount of similar-maturing bills tends to increase the yield differential. Both Seligman and Christopher found that CM bills with longer terms to maturity were relatively less costly. The equation in column B of table 6 includes a variable for a CM bill’s term to maturity. In contrast to earlier findings, we found that the coefficient for term to maturity is positive and insignificant. This may be because CM bills are now concentrated at the very short end of the maturity spectrum, leaving little room for increases in maturity to affect the yield differential. Christopher found that the number of days between the auction of a CM bill and the settlement date reduced the differential between the CM bill yield and the yield on repurchase agreements. The explanation for this relationship is that the delay allows administrative efficiencies. However, the estimated coefficient of the number of days between auction and settlement (in column C of table 6) is insignificant. In the sample of CM bill auctions that Christopher studied, CM bills that were auctioned on a Wednesday and had a maturity date beyond the end of the month tended to have lower yields. Since the inception of the 4-week Treasury bill and the truncation of CM bill maturities, only 4 of 55 issues were in this category.
In an equation that also included the other variables in column A of table 6, the coefficient for a binary variable designating such issues had a t-statistic of only 0.161, providing no support for the hypothesis that CM bills auctioned on Wednesdays and maturing after the end of the month have lower yields. Instead of issuing a new security, Treasury may add to, or reopen, an existing issue, increasing the amount outstanding of the issue. CM bill reopenings are fungible with previously issued regular bills and may enjoy their liquidity. Seligman found that CM bills issued off-cycle were significantly more costly than those that reopened a previous issue. However, contrary to Seligman’s results, our analysis indicates that off-cycle CM bills are not more costly than reopenings. Column D of table 6 shows the sign of the coefficient of the off-cycle variable is negative rather than positive although, more relevantly, the coefficient is not significantly different from zero. The descriptive statistics in table 5 show that 86 percent of CM bills issued during the more recent period were issued off-cycle compared with about 40 percent in the earlier period. Off-cycle issuance has become a regular feature of CM bills and does not command an extra return. Seligman also hypothesized that an increase in the number of business days between CM bill announcement and auction would reduce the yield differential. While his estimates suggest that 2 business days’ advance notice reduces the yield differential, he found that increasing the announcement period beyond 2 days did not further reduce the differential. In contrast, we found that the sign for the coefficient of the number of days of advance notice was positive and significant, as shown in column E of table 6. This has the unexpected implication that increasing the number of business days’ notice before auction increases rather than reduces the yield differential.
Because markets usually penalize uncertainty, this result appears counterintuitive and should be studied further to determine whether it arose because of a statistical problem such as the correlation between the advance notice variable and an omitted variable that may be unobservable. Our analysis of CM bill yield differentials from fiscal year 2002 through 2005 revealed several features of CM bills that affect the yield differential; however, Treasury’s ability to achieve additional savings by further exploiting these features may be limited. The average yield differential was substantially lower during the period from fiscal years 2002 through 2005 than it was in the preceding several years. The two most important factors behind the yield differential decline between the pre- and post-2002 periods were (1) the substantial decline in the general level of short-term rates and (2) the major reduction in the ratio of CM bills issued to the average amount of Treasury bills outstanding. The level of short-term interest rates is largely determined by Federal Reserve policy and market forces rather than by Treasury. Treasury bill yields may continue to rise and reach levels that prevailed in the earlier period, thereby erasing the portion of the decline in the yield differential caused by lower interest rates. However, because the 4-week bill is now a permanent feature of Treasury’s auction schedule and has reduced Treasury’s reliance on CM bills, the portion of the decline in the yield differential attributable to the reduced ratio of CM bills to similar-maturity Treasury bills is likely to endure. The experience from fiscal year 2002 through 2005 as a whole suggests that the yield differential could remain about 13 basis points below pre-2002 levels. While Treasury may have some ability to change the relative amount of CM and regular bills issued, the mix is affected by cash flow patterns largely beyond its control.
Aligning cash outflows and inflows could reduce the amount of CM bills issued, but Treasury does not have authority over the timing of all cash flows. However, Treasury does have control over debt-related cash flows. Efforts to better align the timing of net increases in debt with its largest cash payments could reduce both the size of CM bills and the yield differential. To the extent that Treasury can increase the share of CM bills that mature on dates when taxes are due, additional savings might be achieved. Already more than 60 percent of CM bills met this criterion in fiscal year 2005, so the possibility of achieving further savings through this feature may be minimal. Moreover, Treasury’s short-term borrowing needs do not uniformly occur shortly before large tax due dates. Nearly one-half of CM bills issued since fiscal year 2002 have been issued in multiple tranches. Our results provide somewhat limited evidence that multiple-tranche issues may be more costly. Even if such issues were to require higher yields, however, multiple-tranche issues have the advantage of reducing borrowing costs by minimizing the duration of the amounts borrowed in later tranches. For example, instead of borrowing $20 billion for 10 days, Treasury might use two tranches to borrow $10 billion for 10 days and $10 billion for 9 days, thereby saving 1 day’s interest on the second tranche. Accordingly, the savings that multiple-tranche CM bills provide by reducing the number of days that interest is paid on later tranches are likely to more than offset the higher yields such issues might entail. Finally, the positive correlation found between the number of days of advance notice and the yield differential appears counterintuitive and probably should not be used as a basis for reducing the notification period.
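The multiple-tranche tradeoff described above can be quantified with a simple calculation. The 2.5 percent base yield and the 6-basis-point tranche premium below are assumptions chosen for illustration (the premium echoes the tentative 6.0-basis-point estimate discussed earlier); they are not figures Treasury actually paid.

```python
# Illustrative tradeoff for multiple-tranche CM bills, per the example above:
# one $20 billion issue for 10 days vs. two $10 billion tranches of 10 and
# 9 days. The 2.5% base yield and 6 bp tranche premium are assumptions for
# illustration only, not figures from the report.

def simple_interest(principal, annual_yield, days, year_basis=365):
    """Interest cost using simple interest on an actual/365 basis."""
    return principal * annual_yield * days / year_basis

base_yield = 0.025          # assumed annual yield on a single-tranche issue
tranche_premium = 0.0006    # assumed 6 bp extra yield on multiple-tranche issues

# Single issue: $20 billion for 10 days at the base yield
single_cost = simple_interest(20e9, base_yield, 10)

# Two tranches, each conservatively paying the premium: $10 billion for
# 10 days plus $10 billion for 9 days
tranche_cost = (simple_interest(10e9, base_yield + tranche_premium, 10)
                + simple_interest(10e9, base_yield + tranche_premium, 9))

# Under these assumptions, the day of interest saved on the second tranche
# more than offsets the assumed yield premium
net_benefit = single_cost - tranche_cost
```

Under these assumed figures the tranche structure saves several hundred thousand dollars on a single financing, consistent with the report's conclusion that the day-count savings likely outweigh any tranche premium.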
Given that CM bill auctions are inherently less predictable than regular Treasury bill auctions, it is not surprising that, by some measures, CM bill auctions are less successful than regular auctions. Better auction performance is characterized by greater participation, a broader distribution of awards, and more preauction activity in the when-issued market; in theory, these factors lower Treasury's borrowing costs. By most measures of participation and activity we examined, CM bill auctions perform less well than 4-week bill auctions. However, some other measures indicate that Treasury obtained a better price at CM bill auctions than at 4-week bill auctions and that there is stronger demand for CM bills than for 4-week bills. We did not find a statistically significant relationship between any auction performance measure and the yield differential. We did find that in more concentrated auctions, Treasury was less likely to pay more than the yield indicated in the when-issued market at the time of auction. Treasury auctions are preceded by forward trading in markets known as "when-issued" markets. The when-issued market is important because it serves as a price discovery mechanism that potential competitive bidders look to as they set their bids for an auction. When-issued trading reduces uncertainty about bidding levels surrounding auctions and also enables dealers to sell securities to their customers in advance of the auction, so they are better able to distribute the securities and bid more aggressively, which results in lower costs to Treasury. However, we found that trading activity is sparse before CM bill auctions. We counted the number of preauction trades on the day of CM bill auctions and found that when-issued trading is lower before CM bill auctions than before regular 4-week bill auctions (see table 7). There is generally more activity after CM bill auctions than before them.
To evaluate the yield obtained at auction, Treasury's auction studies used the difference between the yield at auction and the contemporaneous when-issued yield at the time of the auction, usually 1:00 p.m. We refer to this measure as the contemporaneous auction spread. According to Treasury, this is a good benchmark for measuring auction yields because potential bidders have a choice between purchasing securities at auction and purchasing them in the when-issued market. A negative spread (where the auction yield is less than the contemporaneous when-issued yield) indicates that Treasury paid a lower yield than expected, which is a positive auction result. Alternatively, a positive spread (the auction yield is greater than the contemporaneous when-issued yield) indicates that Treasury paid a higher yield than expected from information in the market at the time of auction, a poor auction result. By this measure, CM bill auctions performed better than 4-week bill auctions. The contemporaneous auction spread of CM bill auctions from fiscal year 2002 through 2005 was approximately zero (see table 8), which implies that the yield Treasury paid on CM bills generally reflected market information at the time of the auction. In contrast, the auction spread for 4-week bill auctions averaged a positive 0.3 basis points for fiscal years 2002 through 2005. The information derived from the auction spread depends in part on whether the when-issued market is liquid around the time of the auction. Because trading activity is relatively sparse before CM bill auctions, the information provided by the contemporaneous auction spread may be limited. To lower borrowing costs, Treasury seeks to encourage more participation in auctions. In general, large, well-attended auctions improve competition and lead to lower borrowing costs for Treasury.
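The contemporaneous auction spread is simply the auction yield minus the when-issued yield observed at auction time. A minimal sketch, using hypothetical yields rather than the report's data:

```python
def auction_spread_bp(auction_yield, when_issued_yield):
    """Contemporaneous auction spread in basis points. A negative value
    means Treasury paid less than the when-issued market implied (a good
    result); a positive value means it paid more (a poor result)."""
    return (auction_yield - when_issued_yield) * 100

# Hypothetical yields in percent, observed at about 1:00 p.m. on auction day:
print(auction_spread_bp(4.250, 4.253) < 0)  # True: better than expected
print(auction_spread_bp(4.255, 4.250) > 0)  # True: worse than expected
```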
For example, in the 1990s Treasury switched its auctions from a multiple-price to a uniform-price format to encourage more aggressive bids and a broader distribution of auction awards. Treasury found that the share of the auction awarded to the top five competitive bidders declined under the new format to about 35 percent for 2-year notes and 36 percent for 5-year notes. Using similar estimates of bidder participation in CM bill auctions, we found that the share of the auction awarded to the top five bidders was 60 percent or higher in over half of the CM bill auctions held during fiscal years 2002–2005. The share exceeded 60 percent in only 18 percent (37 of 209) of 4-week bill auctions held during the same period. Treasury limits the maximum auction award to a single bidder to 35 percent of the offering in part to foster a liquid secondary market for a new issue. A higher concentration could reduce competition and restrict a security's supply in the secondary market, preventing its efficient allocation among investors and possibly generating a "short squeeze." The term "short squeeze" is used by market participants to refer to a shortage of a security relative to willing buyers for that security. Squeezes can arise because Treasury allows dealers to sell a security short to customers (or other dealers) in the when-issued market before it is auctioned. Lack of competition could also result in lower prices (higher yields) at auction. In contrast, we found generally negative correlations between concentration measures and auction spreads for CM bill auctions. In other words, in more concentrated auctions, Treasury was less likely to pay more than the yield indicated in the when-issued market at the time of auction. According to Treasury, a high concentration ratio in CM bill auctions may imply that some bidders strongly want a particular bill, which drives the price up and the yield down.
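The concentration measure used above, the share of an auction awarded to the top five competitive bidders, can be sketched as follows; the award amounts are invented for illustration:

```python
def top_n_share(awards, n=5):
    """Percentage of total auction awards going to the n largest bidders."""
    return 100 * sum(sorted(awards, reverse=True)[:n]) / sum(awards)

# Hypothetical competitive awards, in billions of dollars, for one auction:
awards = [4.0, 3.5, 2.5, 1.5, 1.0, 0.8, 0.4, 0.3]
share = top_n_share(awards)
print(share > 60)  # True: a concentrated auction by the report's threshold
```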
Not only are CM bill auctions more concentrated, but fewer bidders in total are awarded CM bills than 4-week bills. According to Treasury officials, short-term securities have a more limited audience. From fiscal year 2002 through 2005, about 17 bidders on average were awarded CM bills in each auction, compared with 22 bidders in 4-week bill auctions. More than half of the CM bill auctions had 16 or fewer awarded bidders; in contrast, only about 4 percent of 4-week bill auctions did. Although greater participation, a broader distribution of awards, and more preauction activity in the when-issued market theoretically lower Treasury's borrowing costs, we did not find a statistically significant relationship between these factors and the yield differential. More commonly cited measures of auction performance, such as bid-to-cover ratios and auction tails (the difference between the high and average discount rates), provide information on the demand for Treasury securities and the dispersion of bids, but these measures are limited. The bid-to-cover ratio is the ratio of the amount of bids received in a Treasury security auction to the amount of accepted bids. In general, higher ratios signal higher demand for the security being auctioned. From fiscal year 2002 to 2005, the bid-to-cover ratio for CM bill auctions averaged 3.17 (see table 9). In contrast, the bid-to-cover ratio for 4-week bills averaged only 2.30. This suggests stronger demand for CM bills than for regular 4-week bills; however, these results should be interpreted with caution. Some market participants suggested that a high bid-to-cover ratio may arise because many dealers participate in CM bill auctions to fulfill auction participation requirements that are less costly to meet in short-term CM bill auctions than in auctions of longer-term securities.
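Both the bid-to-cover ratio and the auction tail mentioned above are straightforward ratios. A small sketch using hypothetical single-auction figures (the 3.17 average cited above is a multiyear average, not the result of any one auction):

```python
def bid_to_cover(total_bids, accepted_bids):
    """Bid-to-cover ratio: dollar amount of bids received divided by the
    amount accepted; higher values signal stronger demand."""
    return total_bids / accepted_bids

def auction_tail_bp(high_rate, average_rate):
    """Auction tail in basis points: high accepted discount rate minus the
    average accepted rate; wider tails mean more dispersed bids."""
    return (high_rate - average_rate) * 100

# Hypothetical auction: $38 billion bid for a $12 billion offering,
# with a high rate of 4.265% and an average rate of 4.250%.
print(round(bid_to_cover(38e9, 12e9), 2))       # 3.17
print(round(auction_tail_bp(4.265, 4.250), 1))  # 1.5
```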
Auction tails—the number of basis points between the high and average discount rates—are a measure of the dispersion of bids. Auction theory suggests that the more diverse the beliefs of the bidders and the more uncertain they are about the demand for the bills, the more dispersed the submitted bids. In contrast, narrower tails indicate strong bidding and therefore lower costs to Treasury. When we compared the auction tails of CM bills with those of regular Treasury securities, we found that CM bill tails were slightly larger than those of 4-week bill auctions (see table 10). In summary, most measures suggest that CM bill auctions perform less well than 4-week bill auctions. However, the low participation and high concentration of CM bill auctions do not explain why Treasury paid higher yields on CM bills than investors paid for bills of similar maturity in the secondary market.

In addition to the contacts named above, Jose Oyola (Assistant Director), Richard Krashevski, Naved Qureshi, and Melissa Wolf made significant contributions to this report. Jennifer Ashford also provided key assistance.

One result of persistent fiscal imbalance is growing debt and net interest costs. Net interest is currently the fastest-growing "program" in the budget and, if unchecked, threatens to crowd out spending for other national priorities. This report was done under the Comptroller General's authority. GAO examined the Department of the Treasury's (Treasury) growing use of unscheduled short-term cash management bills (CM bills). Specifically, GAO (1) describes when Treasury uses CM bills and why, (2) describes the advantages and disadvantages of CM bills, (3) describes steps taken by Treasury to reduce the overall borrowing costs associated with CM bills, and (4) identifies possible options Treasury could consider to reduce the use and cost of CM bills further.
Treasury makes large, regularly occurring payments, such as Social Security and federal retirement payments, at the beginning of each month but receives large cash inflows in the middle of each month from income tax payments and note issuances. Because regular bills alone are not sufficient to fill these intramonth cash financing gaps, Treasury has increasingly issued CM bills since 2002 to bridge them. CM bills allow Treasury to obtain cash outside of its regular borrowing schedule in varying amounts and maturities, but Treasury pays a premium for doing so. GAO's analysis found that Treasury paid a higher yield on CM bills than that paid on outstanding bills of similar maturity in the secondary market. Treasury has taken steps to reduce the use and cost of CM bills. Treasury added a 4-week bill to its regular auction schedule in 2001, which led to reduced CM bill issuance, shorter terms to maturity, and lower borrowing costs in 2002. Treasury has also fine-tuned CM bill issuance by borrowing closer to the time when it needs cash. However, borrowing costs associated with CM bills have increased since 2003. While Treasury has made progress toward reducing the cost of CM bills, it may be possible to do more. GAO's analysis indicates that the yield differential has increased as short-term rates have risen. If these rates rise further, as market participants expect, so will the yield differential. While Treasury does not vary its debt management strategy in response to changing interest rates, it should be mindful of their effect on the relative cost of unscheduled CM bills and explore options to reduce the frequent use of CM bills and ultimately overall borrowing costs. GAO identified options worth exploring, such as any additional opportunities for closer alignment of large cash flows; possible options for increasing earnings on excess cash balances; and the introduction of a shorter-term regular instrument.
VA’s health care system is divided into 22 regional Veterans Integrated Service Networks (VISN), which serve as the basic budgetary and decision-making units for determining how best to provide services to veterans at medical centers and community-based outpatient clinics located within their geographic boundaries. Spread throughout the 22 VISNs are 172 medical centers, each headed by a director who manages administrative functions, along with a chief of staff who manages clinical functions for the entire medical center. VA medical centers also have designated managers for each area of care, such as primary and specialty care. Within each area of care, there may be many clinics, which can vary in purpose and size. For example, VA has clinics that manage the care of patients who are taking prescription medication for blood clots, and, due to their more limited scope, these clinics might have a small number of providers and staff. On the other hand, VA’s primary care clinics—where physicians are responsible for the routine health needs of a caseload of patients—tend to have a relatively larger number of providers and staff. In addition, specialty care clinics, such as gastroenterology and urology, provide patients with care specific to that specialty, such as treatment for hepatitis C and prostate cancer. In 1996, the Congress required VA to ensure that veterans enrolled in its health care system receive timely care. For outpatient care, VA established its “30-30-20” goals: routine primary care appointments are to be scheduled within 30 days of the date of request, as are specialty care appointments, and patients are to be seen within 20 minutes of their scheduled appointment time. Following reports of long waiting times—from VA’s medical centers and clinics, veterans’ service organizations, veterans, the Inspector General, and us—VA began two initiatives to help identify and address waiting time problems.
First, VA contracted with IHI, a Boston-based contractor, to help develop strategies to reduce waiting times. As part of this project, 134 teams from VA medical centers across the nation worked on reducing waiting times for appointments in selected primary or specialty care clinics. Over half of these teams focused on primary care. Second, VA began collecting patient waiting times data from its outpatient scheduling system—the Veterans Health Information Systems and Technology Architecture (VISTA), one of VA’s main computer systems for clinical, management, and administrative functions. Over the past few years, VA made several modifications to its appointment scheduling software to develop more reliable data on waiting times. In March 2001, VA began using these waiting times data to identify clinics that failed to meet its 30-day standard. While most veterans using the primary care clinics we visited were able to get an appointment within 30 days, many seeking specialty care often had to wait longer than 30 days for a referral. Clinics with long waiting times often had poor scheduling procedures or did not use their staff efficiently. The chiefs of primary care at the 10 VA medical centers we visited reported that 15 of their 17 primary care clinics—or about 88 percent—met VA’s 30-day timeliness standard. The other two clinics reported waiting times of 56 and 61 days. However, chiefs of specialty care at the clinics we visited reported that patients with nonurgent needs often wait in excess of VA’s 30-day standard (see fig. 1). The longest reported waiting times were in gastroenterology and optometry. At one location, veterans had to wait 282 days—more than 9 months—for an optometry appointment. These long waiting times were often the result of high percentages of patients not showing up for appointments, poor scheduling procedures, and inefficient use of staff.
When veterans do not keep their appointments, some of the limited appointment slots are lost and are unavailable for other veterans. This could extend waiting times overall. Almost 60 percent of the 71 primary and specialty care clinics we visited had a no-show rate of 20 percent or greater. Gastroenterology had the highest average no-show rate at 29 percent. At one gastroenterology clinic, half of the scheduled patients did not show up for their appointments. Urology had the lowest average no-show rate at 18 percent. According to one clinic chief, patients failed to keep appointments because their health condition improved or they forgot about the appointment because it was scheduled so far into the future. Some clinics’ scheduling procedures may actually encourage no-shows. For example, some clinics schedule appointments several months in advance. Although most clinics remind patients of their appointments—by mail or telephone—we found that some reminder systems were not sufficient to ensure that patients kept their appointments. For example, over 30 percent of the clinics we visited automatically rescheduled no-shows, and some did not follow up with the veterans to determine why they had missed the original appointments. In addition, in one clinic, staff told us that the patient often was not informed of this new appointment, making it likely that the patient would miss the new appointment as well. We found that inefficient use of staff could also limit the number of available appointment slots, contributing to long waiting times. For example, some specialists told us that they were treating patients who could be seen in primary care. Specifically, one chief of dermatology told us that she receives new patient referrals for conditions that could easily be treated in primary care, such as dry skin.
In addition, several chiefs of orthopedics told us that they continue to see patients with conditions such as rheumatoid arthritis and back pain because the patients request appointments, even after their conditions have stabilized. Furthermore, shortages of nonprovider staff at some clinics also resulted in the inefficient use of physician time. For example, one orthopedic clinic did not have a cast technician, so an orthopedic surgeon had to apply and remove patient casts. At another clinic, a shortage of clerks resulted in nurses’ assuming clerical duties—such as scheduling, admitting, and discharging patients—and physicians’ assuming tasks that nurses would otherwise have handled, such as escorting patients to the examination room. As physicians assumed duties that could more appropriately have been fulfilled by nonphysician personnel, the number of appointments that could have been scheduled each day might have been reduced. When one appointment is linked to or dependent on another, scheduling and staffing problems can further compound delays. For example, two chiefs of orthopedics told us that patients who are scheduled for an x-ray prior to their orthopedic appointment sometimes arrive late to the orthopedic clinic or without an x-ray as a result of delays in the x-ray clinic. Because orthopedic surgeons typically must have x-rays to properly assess the severity of a patient’s condition, patients who do not have x-rays often must reschedule their orthopedic appointments, wasting the original appointment and filling another future appointment slot on the schedule. While most of the clinics we visited continue to experience waiting times problems, several have reported success in reducing their waiting times—primarily by improving their scheduling processes or making better use of staff. One VA medical center combined these and other strategies, and as a result, all but one of its clinics that we reviewed had reduced their waiting times to less than 30 days.
According to the chiefs of several clinics we visited, their improved waiting times were, in part, the result of their increasing the number of available patient appointments. To make more appointments available, these clinics reduced the number of no-shows and reduced physician involvement in certain services. Some clinics also added more providers. To increase the likelihood that patients would show up for their appointments, the clinics we visited used various strategies, such as the following. One ophthalmology and optometry clinic reduced its no-show rate from 45 percent to 22 percent by having scheduling clerks call patients a few days in advance to remind them of their appointments. When making these calls, clerks found that some patients had forgotten their appointments and would likely have missed them had they not received the reminder call. Some patients, however, said that they did not plan to keep the appointments. In these cases, clerks were typically able to schedule another patient into the time slot and thus increase the number of patients that the provider could see each day and thereby reduce the number of days it took veterans to get appointments. A primary care clinic at another medical center reduced its no-show rate from 22 percent to about 12 percent through two actions. First, it changed its medical resident rotation rate to once every 3 years, allowing patients to develop relationships with the residents assigned to their care. The chief of this clinic told us that she believes that the patients are more comfortable knowing that they will see the same provider on each visit and so are more likely to keep their appointments. Second, this primary care clinic also used open access scheduling—an IHI technique—to reduce its no-show rate. The basic premise of open access scheduling is to schedule nonurgent appointments within 30 days to reduce the likelihood that patients would miss their appointments. 
For those patients needing appointments past the 30-day time frame, the center sends reminder notices near the time the patient needs to call in to schedule the appointment. According to this center’s director of ambulatory care, lower no-show rates have helped to reduce patient waiting times for primary care. Further, to accommodate urgent patients who need same-day appointments, the medical center holds open the last two appointment slots for each provider in each clinic day. Clinics also freed up appointments by reducing provider involvement in services that do not require one-on-one physician-patient interaction. Providers in one medical center’s primary care clinic now use an automated telephone system to convey the results of blood and other lab tests to patients when the test results are normal. The system automatically calls patients and instructs them to call the system back and enter a preassigned password to retrieve messages from their providers about the results of their tests—which patients can access at any time. The gastroenterology clinic at another medical center initiated group education classes for patients diagnosed with hepatitis C. In these classes, patients can receive information and ask questions about the virus. A primary care clinic at another medical center developed an innovative approach to educating patients newly diagnosed with chronic diseases, such as diabetes. Once diagnosed, each patient is given a “prescription” to take to the clinic’s medical library, where the patient receives medical literature and other media on the disease. Providers at this clinic told us that patients who fill their library prescriptions know more about managing their own conditions and thus need less time with a physician. Chiefs of 12 clinics told us that they hired more providers—both physician and nonphysician—to increase the number of available appointments, thereby reducing waiting times. 
A urology clinic at one medical center hired a full-time urologist, and, according to the clinic’s chief, this action—along with others such as providing education seminars for primary care physicians—helped reduce the clinic’s waiting time from over 1 year to 30 days, over a period of several years. Another urology clinic hired a full-time physician’s assistant to help in its general and procedure clinic. According to the chief of the clinic, clinic efficiency and the number of patients seen each day have increased because the physician’s assistant can independently see patients. One eye care clinic hired a part-time optometrist, which helped to reduce the waiting times for patients requiring nonurgent appointments. Some clinic chiefs told us that, through the use of referral guidelines, they were able to increase the number of available specialty appointments by reducing the number of scheduled patients whose medical needs could more appropriately be met by a primary care provider. Some clinics have computerized their referral guidelines, which provides easy access to the guidelines, expedites referrals, and helps ensure that needed tests and exams are completed in advance. Efficiencies such as these enable clinics to increase the number of daily appointments available and help reduce waiting times. Half of the 54 specialty care clinics we visited had referral guidelines for primary care providers to use when determining whether to refer a patient to a specialist. For example, an orthopedics clinic at one medical center we visited implemented referral guidelines in September 1999 to encourage orthopedists to refer patients back to primary care after their orthopedic needs have been met. Seventeen months after the guidelines were implemented, the clinic’s waiting times dropped from 200 days to 54 days. Referral guidelines also often indicate which laboratory tests need to be ordered by the primary care provider before a patient is referred to the specialist.
According to the director of ambulatory care at another medical center—which established facilitywide referral guidelines—before the guidelines were implemented, primary care providers would notify specialists that patients were being referred. However, these referrals often did not include the primary care provider’s assessment of the patient’s condition. As a result, the specialists were required to spend time performing routine tests to assess patients’ conditions. This same medical center requires its primary care providers to use a computerized checklist program, which prompts them to complete specific steps for each referral to a specialist. The referral is then reviewed for completeness and accuracy by a medical center team and, if it meets the criteria, is sent forward to the specialist within 24 hours. According to medical center officials, this process has greatly reduced the number of unnecessary patient referrals and has helped to make the time that specialists spend with patients more productive. While the use of patient referral guidelines at the sites we visited varied from clinic to clinic and from one medical center to another, many officials told us that clearly defined and strictly adhered-to guidelines would help reduce the number of specialty referrals for conditions that could more appropriately be handled by a primary care provider and would maximize the time that specialists spend with patients. Yet half of the 54 specialty clinics we visited did not have any form of referral guidelines, and waiting times at these clinics were 25 percent longer than at clinics that had referral guidelines. Some chiefs of specialty and primary care told us that while they believe that referral guidelines could help them better manage their workload and increase the number of available appointment slots, they did not have time to establish such guidelines.
One medical center significantly reduced its waiting times by using multiple strategies, phased in over a 4-year period, that completely restructured its health care delivery system. According to the medical center’s director of ambulatory care, because of these changes, along with the hiring of a modest number of primary care providers, waiting times for primary care appointments have been reduced from an average of 35 days to an average of 20 days. In addition, waiting times for specialty care met the 30-day standard in all but one of the specialties we reviewed. For example, waiting times in urology were reduced from 3 months to 7 days, and waiting times in ophthalmology were reduced from more than a year to about 7 days. Before these strategies were implemented, the medical center operated under a “traditional” health care delivery model within VA—screening new patients in the emergency room and compensating for high no-show rates by overbooking appointments and allowing patients to walk in for care, regardless of the level of urgency. Based on information received during the VA-sponsored national collaborative with IHI, the medical center adopted several strategies to more effectively manage its patient workload. In addition to increasing available appointments and implementing referral guidelines, the medical center adopted three key features: the primary care model, walk-in triage, and centralized appointment scheduling. The primary care model. Almost all of the medical center’s nonurgent patient care workload was shifted into primary care. Until 4 years ago, none of the veterans seeking care at this medical center were assigned to a primary care provider; now, about 97 percent are. Primary care providers are now expected to provide comprehensive, ongoing medical care and preventive health measures. They are also expected to coordinate patients’ other health care needs, doing more diagnosis and treatment themselves before referring patients to specialists. 
For example, if a patient makes a request to see an orthopedist for a knee problem or a urologist for suspected prostate cancer, the primary care provider is expected to review the patient’s records, order and review the results of needed tests, refer the patient to a specialist only when needed, and oversee and coordinate the patient’s care. Walk-in triage. According to officials at the medical center, delivering nonurgent care on a walk-in basis (without a scheduled appointment)—a practice common at many VA medical centers—limits the number of appointments that can be scheduled because providers spend time on unscheduled walk-in patients instead of scheduled patients. They also said that treating walk-in patients is not in the best interest of the patients or providers because the treatment is episodic and lacks continuity of care; consequently, providers do not get to know the patients and are less involved in their overall health. The medical center now triages walk-in patients, with a nurse assessing them to determine whether they need emergency, urgent, or nonurgent care. If their conditions require care in the emergency room or urgent care clinic, they are seen immediately. However, if their conditions require nonurgent care, they are referred to a scheduling clerk, who schedules an appointment for them within 30 days. According to the medical center’s director of ambulatory care, this approach helps to better ensure that patients are seen in the appropriate setting, maximizing the delivery of primary care and ensuring that patients with urgent symptoms, such as blurred vision, loss of breath, or acute pain, still receive the most timely care possible. Centralized appointment scheduling. Prior to centralized scheduling, clerks in each of the medical center’s clinics scheduled patient appointments, resulting in a wide variety of scheduling practices. 
With centralized scheduling, patient appointments for all clinics are now scheduled from one administrative office. According to the center’s director of ambulatory care, implementing a centralized scheduling system also allowed the individual clinic clerks more time to focus on other functions, including patient intake at appointment time and patient discharge activities such as recording patient visit information into encounter forms. According to the center’s director of ambulatory care, implementing any of these strategies could result in reduced waiting times, but she believed that combining all of the strategies had the most significant effect. Although VA has set a performance goal for network directors and has contracted with IHI, it has generally relied on its medical centers nationwide to develop and implement strategies to reduce their own waiting times. However, clinic officials we talked to noted that more guidance and direction from VA on implementing and using referral guidelines could help them in their efforts to reduce waiting times. In addition, some chiefs of specialty and primary care clinics were unaware of the successes that other medical centers have had in reducing waiting times and told us that they would find such information useful in developing their own strategies. However, VA has not provided clinics with referral guidelines, nor has it assessed or disseminated ways to improve patient waiting times that have worked at some clinics. VA also lacks a systematic process for determining the causes of long waiting times, for monitoring clinics’ progress in reducing waiting times, and for helping those centers and clinics that continue to have long waiting times. Clinic officials told us that while IHI’s strategies for reducing waiting times have been useful, they could benefit from more guidance and direction from VA—including referral guidelines and information on best practices—to help them implement these strategies. 
In April 1998, VA established a requirement that all medical centers and community-based outpatient clinics adopt a primary care model—a system in which patients use primary care providers to manage their care. In implementing a primary care model, VA strongly suggested that its health care facilities establish guidelines for primary care providers to follow in deciding when to refer patients to specialty care. According to the chief of primary care at one medical center we visited, the center’s guidelines for referrals to urology and gastroenterology have resulted in improved communication between these specialists and primary care providers, fewer inappropriate referrals, more complete information on patients who have been referred, and ultimately shorter waiting times for patients in these two specialty clinics. However, the chief of primary care also told us that the medical center had not developed referral guidelines for the three other specialty care areas that we reviewed. Overall, we found that half of the 54 specialty care clinics we visited have implemented referral guidelines. Further, the existence and use of referral guidelines varied within a medical center and even within a specialty. For example, in one medical center, only the urology clinic had developed referral guidelines. In another medical center, referral guidelines were not available for two of the five specialty care areas that we reviewed. Several of the chiefs of primary and specialty care we spoke to indicated that implementing referral guidelines would help reduce the number of inappropriate referrals and the time specialists spend with patients, but they did not have the time to develop such guidelines and would like headquarters to do so. 
Although headquarters officials told us that they believe that providing minimum guidelines could serve as a framework for medical centers and clinics to build on and could help standardize the referral process, VA has not yet developed a national set of referral guidelines for its medical centers and clinics to use. Clinic officials also told us that they could benefit from learning about other clinics’ successes—especially those achieved through VA’s initial project with IHI. In July 1999, IHI began working with 134 teams from various medical centers across the nation, representing 160 different clinics. Nine of the 10 medical centers we visited had teams that participated in the IHI project—including the medical center that had reduced waiting times by implementing a primary care model, referral guidelines, centralized appointment scheduling, and a system for triaging walk-ins. However, as of July 2001, none of the 134 teams’ findings had been summarized and publicized, leaving the medical centers and clinics nationwide to independently determine how to implement IHI’s strategies for reducing waiting times. In March 2001, VA entered into a second contract with IHI to identify and disseminate information on clinics’ best practices for reducing waiting times. According to an official from VA headquarters, this second contract should help VA communicate and share, nationwide, the results of medical centers and clinics that have had success in reducing waiting times. When VA established its 30-day waiting times standard for primary and specialty care over 5 years ago, it also established the objective that clinics meet this standard by 1998. However, until several months ago, VA had problems collecting accurate and reliable patient waiting times data. The deficiencies in the data limited its ability to identify clinics that were not meeting its 30-day timeliness standard.
After several modifications to its national data collection software package, VA can now identify those clinics that exceed the 30-day standard systemwide. In September 1999, VA began holding its network directors responsible for meeting the 30-day waiting times standard for six clinic types. As of March 2001, VA data showed that about half of VA’s nearly 17,500 clinics for these six clinic types were meeting VA’s 30-day standard (see table 1). According to a headquarters official, VA is planning to notify, in several phases, clinics whose waiting times have not met the 30-day standard. VA has begun by notifying clinics whose waiting times exceed 120 days and, in the next phase, plans to notify clinics whose waiting times exceed 90 days. In March 2001, VA reported that 948 of its clinics had waiting times of 120 days or more in the six medical care areas that VA is using to measure VISN director performance. VA has also developed new waiting time performance objectives to be met by 2003: 90 percent of nonurgent primary care patients and 90 percent of patients with nonurgent specialty care referrals are to be seen within 30 days. However, VA has not developed an analytic framework for identifying root causes and tracking progress for solving these clinics’ waiting times problems. Consequently, over 8,700 clinics for the six areas in which waiting times are longer than 30 days are left to independently develop a process for identifying these root causes. Moreover, while VA distributed a report showing waiting times data to each of its networks, it did not require networks to develop corrective actions for medical centers and clinics that failed to meet the 30-day waiting times standard. As a result, VA cannot be sure that medical center management is making progress to meet this standard.
Some of the 71 clinics in the 10 medical centers we visited have successfully begun to address their waiting times problems for patients—often by implementing IHI’s strategies—and several are meeting VA’s 30-day goal to provide nonurgent, outpatient primary and specialty care. However, many veterans continue to experience long waits for appointments, especially for certain types of care—despite VA’s initial objective to have its medical centers and clinics meet the 30-day standard by 1998. While VA’s two contracts with IHI are important first steps needed to expedite solving its waiting times problems systemwide, the Department could provide more guidance and direction to medical centers and clinics to reduce patient waiting times. In particular, VA has not established national referral guidelines—with local discretion, as appropriate—even though many centers and clinics told us that they need such guidelines but do not have the time to develop them. In addition, VA has not provided medical centers and clinics with an analytic framework for identifying the root causes of their long waiting times. Such a framework could greatly help those centers and clinics that need assistance. Until VA develops a systematic approach for identifying, analyzing, and monitoring waiting times problems, veterans will continue to be at risk of experiencing long waits in their access to nonurgent primary and specialty care. To help ensure that clinics meet VA’s 30-day waiting times standard, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following actions: Create a national set of referral guidelines for medical centers to use when referring patients from primary care to specialty care as well as guidelines for specialty clinics to follow in returning patients to primary care when they no longer need specialty care.
Strengthen oversight by developing an agencywide process for determining the causes of waiting times problems; implementing corrective actions, where needed; and requiring periodic progress reports from clinics with long waiting times until they meet VA’s national standards. We provided VA a draft of our report for its review. In its comments, VA agreed with our findings and concurred with both of our recommendations (see appendix II). In response to our first recommendation, VA acknowledged the need to develop national referral guidelines for specialty care and has charged its newly formed National Waiting Time Steering Committee to address this issue. In response to our second recommendation, VA stated that its ongoing collaboration with IHI should provide an analytic roadmap for facilities to use in analyzing their waiting time problems. In addition, VA is working with IHI to develop a reporting instrument for clinics to use in monitoring waiting time progress. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies to the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. We will make copies available to others upon request. Please contact me at (202) 512-7101 if you or your staffs have any questions. Another contact and key contributors to this report are listed in appendix III. To determine the extent to which clinics are meeting VA’s 30-day appointment standard for outpatient primary and specialty care and to learn about approaches some clinics have used to improve waiting times, we visited 10 medical centers selected to include a variety of different-size medical centers, with relatively high, medium, and low numbers of patient visits, located across the United States. The results of the site selection are reflected in figure 2. 
At these locations, we visited a total of 71 clinics—17 primary care clinics and 54 clinics in five specialty areas: dermatology, gastroenterology, eye care (ophthalmology and optometry), orthopedics, and urology—within 10 VA medical centers. We selected these specialties, using data from VA’s national VISTA database for April, May, and June 2000, because the data showed that these areas had some of the highest waiting times for scheduled outpatient clinic appointments compared to other VA specialty areas. During these site visits, we interviewed the medical center directors and chiefs of staff, when available, and clinic management and staff, including scheduling clerks and information resource managers. We also reviewed documents that these medical centers and clinics provided, such as examples of referral guidelines that primary care providers use before referring patients to specialists. We also spoke with VA headquarters officials. To identify VA’s efforts to help medical centers and clinics deliver timely care, we interviewed VA headquarters, medical center, and clinic officials and reviewed documents relating to VA’s past and current projects with IHI. We also reviewed VA’s Annual Performance Report Fiscal Year 2000 and other documents detailing VA’s goals to reduce its waiting times for appointments. To assess VA’s progress in improving the accuracy of waiting times data, we reviewed VA’s VISTA waiting time data for July 2000 through March 2001 and reviewed documentation of VA’s changes to the VISTA scheduling software, but we did not verify these data. We also interviewed chiefs of primary and specialty care clinics at the 10 medical centers we visited and obtained clinic waiting times data from these officials. Apart from data verification, we conducted our work from August 2000 through July 2001 in accordance with generally accepted government auditing standards.
In addition to the contact named above, James Espinoza, Lisa Gardner, Sigrid McGinty, Karen Sloan, Bradley Terry, and Alan Wernz made key contributions to this report. Major Management Challenges and Program Risks: Department of Veterans Affairs (GAO-01-255, Jan. 1, 2001). Veterans’ Health Care: VA Needs Better Data on Extent and Causes of Waiting Times (GAO/HEHS-00-90, May 31, 2000). VA Health Care: Progress and Challenges in Providing Care to Veterans (GAO/T-HEHS-99-158, July 15, 1999). Veterans’ Affairs: Progress and Challenges in Transforming Health Care (GAO/T-HEHS-99-109, Apr. 15, 1999). VA Health Care: More Veterans Are Being Served, but Better Oversight Is Needed (GAO/HEHS-98-226, Aug. 28, 1998). VA Health Care: Status of Efforts to Improve Efficiency and Access (GAO/HEHS-98-48, Feb. 6, 1998). Veterans’ Health Care: Veterans’ Perceptions of VA Services and VA’s Role in Health Care Reform (GAO/HEHS-95-14, Dec. 23, 1994). VA Health Care: Restructuring Ambulatory Care System Would Improve Services to Veterans (GAO/HRD-94-4, Oct. 15, 1993). | The Department of Veterans Affairs (VA) runs one of the nation's largest health care systems. In fiscal year 2000, roughly four million patients made 39 million outpatient visits to more than 700 VA health care facilities nationwide. However, excessive waiting times for outpatient care have been a long-standing problem. To ensure timely access to care, VA established a goal that all nonurgent primary and specialty care appointments be scheduled within 30 days of request and that clinics meet this goal by 1998. Yet, three years later, reports of long waiting times persist. Waiting times at the clinics in the 10 medical centers GAO visited indicate that meeting VA's 30-day standard is a continuing challenge for many clinics. 
Although most of the primary care clinics GAO visited (15 of 17) reported meeting VA's standard for nonurgent, outpatient appointments, only one-third of the specialty care clinics visited (18 of 54) met VA's 30-day standard. For the remaining two-thirds, waiting times ranged from 33 days at one urology clinic to 282 days at an optometry clinic. Although two-thirds of the specialty clinics GAO visited continued to have long waiting times, some were making progress in reducing waiting times, primarily by improving their scheduling processes and making better use of their staff. These successes were often the result of medical centers' and clinics' working with the Institute for Healthcare Improvement (IHI)--a private contractor VA retained in July 1999--to develop strategies to reduce patient waiting times. Medical centers and clinics participating in VA's IHI project have received valuable information and strategies for successfully reducing waiting times. However, VA has not provided guidance to its medical centers on how to implement IHI strategies, and VA has only recently contracted with IHI to disseminate best practices agency-wide. VA has not developed other national guidance to help clinics reduce waiting times. Although clinics that did not have guidelines could have benefited from headquarters' assistance, VA has not established a national set of referral guidelines. Moreover, VA lacks an analytic framework for its medical centers and clinics to use in determining the root causes of lengthy waits. |
The Aviation and Transportation Security Act (ATSA) established TSA as the primary federal agency with responsibility for securing the nation’s civil aviation system. This responsibility includes the screening of all passengers and property transported from and within the United States by commercial passenger aircraft. In accordance with ATSA, all passengers, their accessible property, and their checked baggage are screened pursuant to TSA-established procedures at the more than 450 airports at which TSA performs, or oversees the performance of, security screening operations. These procedures generally provide, among other things, that passengers pass through security checkpoints where their person, identification documents, and accessible property are checked by screening personnel. Since its implementation in 2009, Secure Flight has changed from a program that identifies passengers as high risk solely by matching them against federal government watch lists—primarily the No Fly List, composed of individuals who should be precluded from boarding an aircraft, and the Selectee List, composed of individuals who should receive enhanced screening at the passenger security checkpoint—to one that uses additional lists and risk-based criteria to assign passengers to a risk category: high risk, low risk, or unknown risk. In 2010, following the December 2009 attempted attack on a U.S.-bound flight, which exposed gaps in how agencies used watch lists to screen individuals, TSA began using risk-based criteria to create additional lists for Secure Flight screening. These lists are composed of high-risk passengers who may not be in the Terrorist Screening Database (TSDB), but who TSA has determined should be subject to enhanced screening procedures. Further, in 2011, TSA began screening passengers against additional identities in the TSDB that are not included on the No Fly or Selectee Lists.
In addition, as part of TSA Pre✓™, a 2011 program through which TSA designates passengers as low risk for expedited screening, TSA began screening against several new lists of preapproved low-risk travelers. TSA also began conducting TSA Pre✓™ risk assessments, an activity distinct from matching against lists that uses the Secure Flight system to assign passengers scores based upon their travel-related data, for the purpose of identifying them as low risk for a specific flight. According to TSA officials, AIT systems, also referred to as full-body scanners, provide enhanced security benefits compared with those of walk-through metal detectors by identifying nonmetallic objects and liquids. Following the deployment of AIT, the public and others raised privacy concerns because AIT systems produced images of passengers’ bodies that image operators analyzed to identify objects or anomalies that could pose a threat to an aircraft or to the traveling public. To mitigate those concerns, TSA began installing automated target recognition (ATR) software on deployed AIT systems in July 2011. AIT systems equipped with ATR (AIT-ATR) automatically interpret the image and display anomalies on a generic outline of a passenger instead of displaying images of actual passenger bodies. Screening officers use the generic image of a passenger to identify and resolve anomalies on-site in the presence of the passenger. TSA Pre✓™ is intended to allow TSA to devote more time and resources at the airport to screening the passengers TSA determined to be higher or unknown risk, while providing expedited screening to those passengers determined to pose a lower risk to the aviation system.
To assess whether a passenger is eligible for expedited screening, TSA considers, in general, (1) inclusion on an approved TSA Pre✓™ list of known travelers; (2) results from the automated TSA Pre✓™ risk assessments of all passengers; and (3) real-time threat assessments of passengers, known as Managed Inclusion, conducted at airport checkpoints. Managed Inclusion uses several layers of security, including procedures that randomly select passengers for expedited screening and a combination of behavior detection officers (BDO), who observe passengers to identify high-risk behaviors at TSA-regulated airports; passenger-screening canine teams; and explosives trace detection (ETD) devices to help ensure that passengers selected for expedited screening have not handled explosive material. TSA also shares responsibility with airports to vet airport workers to ensure they do not pose a security threat. Pursuant to TSA’s Aviation Workers program, TSA, in collaboration with airport operators and FBI, is to complete applicant background checks—known as security threat assessments—for airport facility workers, retail employees, and airline employees who apply for or are issued a credential for unescorted access to secure areas in U.S. airports. In September 2014, we reported on three issues affecting the effectiveness of TSA’s Secure Flight program—(1) the need for additional performance measures to capture progress toward Secure Flight program goals, (2) Secure Flight system matching errors, and (3) mistakes screening personnel have made in implementing Secure Flight at the screening checkpoint. TSA has taken steps to address these issues, but additional action would improve the agency’s oversight of the Secure Flight program.
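The risk-categorization logic described in this background—matching passengers against high-risk and low-risk lists and defaulting to an unknown-risk category—can be illustrated with a minimal sketch. This is purely an illustrative simplification for the reader: the list labels and the precedence rule below are assumptions, not a description of TSA's actual Secure Flight implementation.

```python
# Illustrative sketch only: a simplified model of assigning a passenger
# to a risk category based on watch-list matching, loosely following the
# high/low/unknown categories described in the testimony. The list
# labels and precedence rule are hypothetical, not TSA's actual logic.

HIGH_RISK_LISTS = {"no_fly", "selectee", "expanded_selectee"}  # assumed labels
LOW_RISK_LISTS = {"precheck_known_traveler"}                   # assumed label

def categorize(matched_lists):
    """Return 'high', 'low', or 'unknown' for a set of list matches."""
    if matched_lists & HIGH_RISK_LISTS:
        return "high"       # any high-risk match takes precedence
    if matched_lists & LOW_RISK_LISTS:
        return "low"
    return "unknown"        # no match on any list

print(categorize({"selectee"}))                  # high
print(categorize({"precheck_known_traveler"}))   # low
print(categorize(set()))                         # unknown
```

In this sketch, a high-risk match deliberately outranks a low-risk match, reflecting the idea that passengers matched to watch lists receive enhanced rather than expedited screening; the real precedence rules are not specified in this testimony.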
Need for additional performance measures: In September 2014, we found that Secure Flight had established program goals that reflect new program functions since 2009 to identify additional types of high-risk and also low-risk passengers; however, the program performance measures in place at that time did not allow TSA to fully assess its progress toward achieving all of its goals. For example, one program goal was to accurately identify passengers on various watch lists. To assess performance toward this goal, Secure Flight collected various types of data, including the number of passengers TSA identifies as matches to high- and low-risk lists, but did not have measures to assess the extent of system matching errors—for example, the extent to which Secure Flight is missing passengers who are actual matches to these lists. We concluded that additional measures that address key performance aspects related to program goals, and that clearly identify the activities necessary to achieve goals, in accordance with the Government Performance and Results Act, would allow TSA to more fully assess progress toward its goals. Therefore, we recommended that TSA develop such measures, and ensure these measures clearly identify the activities necessary to achieve progress toward the goal. DHS concurred with our recommendation and, according to TSA officials, as of April 2015, TSA’s Office of Intelligence and Analysis was evaluating its current Secure Flight performance goals and measures and determining what new performance measures should be established to fully measure progress against program goals. Secure Flight system matching errors: In September 2014, we found that TSA lacked timely and reliable information on all known cases of Secure Flight system matching errors, meaning instances where Secure Flight did not identify passengers who were actual matches to these lists. 
TSA officials told us at the time of our review that when TSA receives information related to matching errors of the Secure Flight system, the Secure Flight Match Review Board reviews this information to determine if any actions could be taken to prevent similar errors from happening again. We identified instances in which the Match Review Board discussed system matching errors, investigated possible actions to address these errors, and implemented changes to strengthen system performance. However, we also found that TSA did not have readily available or complete information on the extent and causes of system matching errors. We recommended that TSA develop a mechanism to systematically document the number and causes of the Secure Flight system’s matching errors, in accordance with federal internal control standards. DHS concurred with our recommendation, and as of April 2015, TSA had developed such a mechanism. However, TSA has not yet demonstrated how it will use the information to improve the performance of the Secure Flight system. Mistakes at screening checkpoint: We also found in September 2014 that TSA had processes in place to implement Secure Flight screening determinations at airport checkpoints, but could take steps to enhance these processes. Screening personnel at passenger screening checkpoints are primarily responsible for ensuring that passengers receive a level of screening that corresponds to the level of risk determined by Secure Flight by verifying passengers’ identities and identifying passengers’ screening designations. To carry out this responsibility, among other steps, screening personnel are to confirm that the data included on the passenger’s boarding pass and in his or her identity document (such as a driver’s license) match one another, and review the passenger’s boarding pass to identify his or her Secure Flight passenger screening determination. 
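A mechanism of the kind GAO recommended above—systematically documenting the number and causes of matching errors—could in principle take the form of a structured tally that keeps both the total count and a per-cause breakdown available for review. The sketch below is purely illustrative: the error-cause labels are invented examples, not categories drawn from TSA's records or the mechanism TSA actually built.

```python
# Illustrative sketch: recording matching-error cases and tallying them
# by root cause, so that counts and causes can be reviewed together.
# Cause labels here are hypothetical, not TSA's actual categories.
from collections import Counter
from dataclasses import dataclass

@dataclass
class MatchingError:
    record_id: str
    cause: str  # e.g., "name_transliteration", "stale_list_data" (invented)

def summarize(errors):
    """Return the total error count and a per-cause breakdown."""
    by_cause = Counter(e.cause for e in errors)
    return len(errors), dict(by_cause)

errors = [
    MatchingError("r1", "name_transliteration"),
    MatchingError("r2", "stale_list_data"),
    MatchingError("r3", "name_transliteration"),
]
total, breakdown = summarize(errors)
print(total, breakdown)  # 3 {'name_transliteration': 2, 'stale_list_data': 1}
```

The point of the breakdown, as the report's recommendation suggests, is that aggregate counts alone do not show where to intervene; pairing each error with a cause is what lets a review board spot recurring failure modes.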
TSA information from May 2012 through February 2014 that we assessed indicates that screening personnel made errors at the checkpoint in screening passengers consistent with their Secure Flight determinations. TSA officials at five of the nine airports where we conducted interviews stated they conducted after-action reviews of such screening errors and used these reviews to take action to address the root causes of those errors. However, we found that TSA did not have a systematic process for evaluating the root causes of these screening errors across airports, which could allow TSA to identify trends across airports and target nationwide efforts to address these issues. Officials with TSA’s Office of Security Operations told us in the course of our September 2014 review that evaluating the root causes of screening errors would be helpful and stated they were in the early stages of forming a group to discuss these errors. However, TSA was not able to provide documentation of the group’s membership, purpose, goals, time frames, or methodology. Therefore, we recommended in September 2014 that TSA develop a process for evaluating the root causes of screening errors at the checkpoint and then implement corrective measures to address those causes. DHS concurred with our recommendations and has developed a process for collecting and evaluating data on the root causes of screening errors. However, as of April 2015, TSA had not yet shown that the agency has implemented corrective measures to address the root causes. In March 2014, we reported that, according to TSA officials, checkpoint security is a function of technology, people, and the processes that govern them; however, we found that TSA did not include each of those factors in determining overall AIT-ATR system performance. Specifically, we found that TSA evaluated the technology’s performance in the laboratory to determine system effectiveness.
However, laboratory test results provide important insights but do not accurately reflect how well the technology will perform in the field with actual human operators. Additionally, we found that TSA did not assess how alarms are resolved by considering how the technology, people, and processes function collectively as an entire system when determining AIT-ATR system performance. AIT-ATR system effectiveness relies on both the technology’s capability to identify threat items and its operators to resolve those threat items. At the time of our review, TSA officials agreed that it is important to analyze performance by including an evaluation of the technology, operators, and processes, and stated that TSA was planning to assess the performance of all layers of security. According to TSA, the agency conducted operational tests on the AIT-ATR system, as well as follow-on operational tests as requested by DHS’s Director of Operational Test and Evaluation, but those tests were not ultimately used to assess effectiveness of the operators’ ability to resolve alarms, as stated in DHS’s Director of Operational Test and Evaluation’s letter of assessment on the technology. Transportation Security Laboratory officials also agreed that qualification testing conducted in a laboratory setting is not always predictive of actual performance at detecting threat items. Further, laboratory testing does not evaluate the performance of screening officers in resolving anomalies identified by the AIT-ATR system or TSA’s current processes or deployment strategies. Given that TSA was seeking to procure the second generation of AIT systems, known as AIT-2, we reported that DHS and TSA would be hampered in their ability to ensure that future AIT systems meet mission needs and perform as intended at airports unless TSA evaluated system effectiveness based on both the performance of the AIT-2 technology and screening officers who operate the technology. 
We recommended that TSA measure system effectiveness based on the performance of the AIT-2 technology and screening officers who operate the technology while taking into account current processes and deployment strategies. TSA concurred and reported taking steps to address this recommendation. Specifically, in January 2015, DHS stated that TSA’s Office of Security Capabilities evaluated the AIT-2 technology and screening officer as a system during an operational evaluation. However, TSA has not yet provided sufficient documentation showing that this recommendation has been fully addressed. In December 2014, we reported that, according to TSA officials, TSA tested the security effectiveness of the individual components of the Managed Inclusion process—such as BDOs and ETD devices—before implementing Managed Inclusion, and TSA determined that each layer alone provides an effective level of security. However, in our prior body of work, we identified challenges in several of the layers used in the Managed Inclusion process, raising questions regarding their effectiveness. For example, in our November 2013 report on TSA’s behavior detection and analysis program, we found that although TSA had taken several positive steps to validate the scientific basis and strengthen program management of its behavior detection and analysis program, TSA had not demonstrated that behavioral indicators can be used to reliably and effectively identify passengers who may pose a threat to aviation security. Further, TSA officials stated that they had not yet tested the security effectiveness of the Managed Inclusion process as it functions as a whole, as TSA had been planning for such testing over the course of the last year. TSA documentation showed that the Office of Security Capabilities recommended in January 2013 that TSA test the security effectiveness of Managed Inclusion as a system.
We reported in December 2014 that, according to officials, TSA anticipated that testing would begin in October 2014 and estimated that testing could take 12 to 18 months to complete. We have also previously reported on challenges TSA has faced in designing studies and protocols to test the effectiveness of security systems and programs in accordance with established methodological practices, such as in the case of the AIT systems discussed previously and in our evaluation of BDO effectiveness. In our December 2014 report, we concluded that ensuring the planned effectiveness testing of the Managed Inclusion process adheres to established evaluation design practices would help TSA provide reasonable assurance that the effectiveness testing will yield reliable results. In general, evaluations are most likely to be successful when key steps are addressed during design, including defining research questions appropriate to the scope of the evaluation, and selecting appropriate measures and study approaches that will permit valid conclusions. As a result, we recommended that to ensure TSA’s planned testing yields reliable results, the TSA Administrator take steps to ensure that TSA’s planned effectiveness testing of the Managed Inclusion process adheres to established evaluation design practices. DHS concurred with our recommendation and began taking steps toward this goal. Specifically, DHS stated that TSA plans to use a test and evaluation process—which calls for the preparation of test and evaluation framework documents including plans, analyses, and a final report describing the test results—for its planned effectiveness testing of Managed Inclusion. In December 2011, we found that, according to TSA, limitations in its criminal history checks increased the risk that the agency was not detecting potentially disqualifying criminal offenses as part of its Aviation Workers security threat assessments for airport workers.
Specifically, we reported that TSA’s level of access to criminal history record information in the FBI’s Interstate Identification Index did not include many state records, such as information regarding sentencing, release dates, and probation or parole violations, among others. As a result, TSA reported that its ability to look into applicant criminal history records was often incomplete. We recommended that TSA and the FBI jointly assess the extent to which this limitation may pose a security risk, identify alternatives to address any risks, and assess the costs and benefits of pursuing each alternative. TSA and the FBI have since taken steps to address this recommendation. For example, in 2014, the agencies evaluated the extent of any risk and, according to TSA and FBI officials, concluded that the risk of incomplete information did exist and could be mitigated through expanded access to state-supplied records. TSA officials reported that the FBI has since taken steps to expand the criminal history record information available to TSA when conducting its security threat assessments for airport workers and others. Chairman Johnson, Ranking Member Carper, and members of the committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For questions about this statement, please contact Jennifer Grover at (202) 512-7141 or groverj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Maria Strudwick (Assistant Director), Claudia Becker, Michele Fejfar, and Tom Lombardi. Key contributors for the previous work that this testimony is based on are listed in each product. This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Since the attacks of September 11, 2001, exposed vulnerabilities in the nation's aviation system, billions of dollars have been spent on a wide range of programs designed to enhance aviation security. Securing commercial aviation remains a daunting task, and continuing fiscal pressure highlights the need for TSA to determine how to allocate its finite resources for the greatest impact. GAO previously reported on TSA's oversight of its aviation security programs, including the extent to which TSA has the information needed to assess the programs. This testimony focuses on TSA's oversight of aviation security measures including, among other things, (1) Secure Flight, (2) Advanced Imaging Technology, and (3) Managed Inclusion. This statement is based on reports and testimonies issued from December 2011 through May 2015. For prior work, GAO analyzed TSA documents and interviewed TSA officials, among other things. The Transportation Security Administration (TSA) has taken steps to improve oversight of Secure Flight—a passenger prescreening program that matches passenger information against watch lists and assigns each passenger a risk category—but could take further action to address screening errors. In September 2014, GAO reported that TSA lacked timely and reliable information on system matching errors—instances where Secure Flight did not identify passengers who were actual matches to watch lists. GAO recommended that TSA systematically document such errors to help TSA determine if actions can be taken to prevent similar errors from occurring.
The Department of Homeland Security (DHS) concurred and has developed a mechanism to do so, but has not yet shown how it will use this information to improve system performance. In September 2014, GAO also found that screening personnel made errors in screening passengers at the checkpoint at a level consistent with their Secure Flight risk determinations and that TSA did not have a systematic process for evaluating the root causes of these errors across airports. GAO recommended that TSA develop a process for evaluating the root causes and implement corrective measures to address them. DHS concurred and has developed such a process but has not yet demonstrated implementation of corrective measures. In March 2014, GAO found that TSA performance assessments of certain full-body scanners used to screen passengers at airports did not account for all factors affecting the systems. GAO reported that the effectiveness of Advanced Imaging Technology (AIT) systems equipped with automated target recognition software (AIT-ATR)—which displays anomalies on a generic passenger outline instead of actual passenger bodies—relied on both the technology's capability to identify potential threat items and its operators' ability to resolve them. However, GAO found that TSA did not include these factors in determining overall AIT-ATR system performance. GAO also found that TSA evaluated the technology's performance in the laboratory—a practice that does not reflect how well the technology will perform with actual human operators. In considering procurement of the next generation of AIT systems (AIT-2), GAO recommended that TSA measure system effectiveness based on the performance of both the technology and the screening personnel. DHS concurred and in January 2015 reported that it has evaluated the AIT-2 technology and screening personnel as a system but has not yet provided sufficient documentation of this effort. 
In December 2014, GAO found that TSA had not tested the effectiveness of its overall Managed Inclusion process—a process to assess passenger risk in real time at the airport and provide expedited screening to certain passengers—but had plans to do so. Specifically, GAO found that TSA had tested the effectiveness of individual components of the Managed Inclusion process, such as canine teams, but had not yet tested the effectiveness of the overall process. TSA officials stated that they had plans to conduct such testing. Given that GAO has previously reported on TSA challenges testing the effectiveness of its security programs, GAO recommended that TSA ensure its planned testing of the Managed Inclusion process adhere to established evaluation design practices. DHS concurred and has plans to use a test and evaluation process for its planned testing of Managed Inclusion. GAO has previously made recommendations to DHS to strengthen TSA's oversight of aviation security programs. DHS generally agreed and has actions underway to address them. Consequently, GAO is not making any new recommendations in this testimony. |
Based on state responses to our survey, we estimated that nearly 617,000, or about 89 percent of the approximately 693,000 regulated tanks states manage, had been upgraded with the federally required equipment by the end of fiscal year 2000. In comparison, EPA data at that time showed that about 70 percent of the total number of tanks its regions regulate on tribal lands had been upgraded, but the accuracy of this data varied among the regions. For example, one region reported that it had no information on the actual location of some of the 300 tanks it was supposed to regulate and therefore could not verify whether these tanks had been upgraded. Even though most tanks have been upgraded, we estimated from our survey data that more than 200,000 of them, or about 29 percent, were not being properly operated and maintained, increasing the risk of leaks. EPA’s most current program data from the end of fiscal year 2002 show that these conditions have not changed significantly; tank noncompliance rates range from an estimated 19 to 26 percent. However, program managers believe these rates are understated because some states have not inspected all tanks or reported their data in a consistent manner. The extent of operational and maintenance problems we identified at the time of our survey varied across the states, as figure 1 illustrates. Some upgraded tanks also continue to leak, in part because of operational and maintenance problems. For example, in fiscal year 2000, EPA and the states confirmed a total of more than 14,500 leaks or releases from regulated tanks, with some portion coming from upgraded tanks. EPA’s most recent data show that the agency and states have been able to reduce the rate of new leaks by more than 50 percent over the past 3 years. The states reported a variety of operational and maintenance problems, such as operators turning off leak detection equipment.
The states also reported that the majority of problems occurred at tanks owned by small, independent businesses; non-retail and commercial companies, such as cab companies; and local governments. The states attributed these problems to a lack of training for tank owners, installers, operators, removers, and inspectors. These smaller businesses and local government operations may find it more difficult to afford adequate training, especially given the high turnover rates among tank staff, or may give training a lower priority. Almost all of the states reported a need for additional resources to keep their own inspectors and program staff trained, and 41 states requested additional technical assistance from the federal government to provide such training. EPA has provided states with a number of training sessions and helpful tools, such as operation and maintenance checklists and guidelines. According to program managers, the agency recognizes that many states, because of their tight budgets, are looking for cost-effective ways of providing training, such as Internet-based training. To expand on these efforts, we recommended that EPA regions work with their states to identify training gaps and develop strategies to fill these gaps. In addition, we suggested that the Congress consider increasing the amount of funds it provides from the trust fund and authorizing states to spend a limited portion on training. According to EPA’s program managers, only physical inspections can confirm whether tanks have been upgraded and are being properly operated and maintained. However, at the time of our survey, only 19 states physically inspected all of their tanks at least once every 3 years— the minimum that EPA considers necessary for effective tank monitoring. Another 10 states inspected all tanks, but less frequently. 
The remaining 22 states did not inspect all tanks, but instead generally targeted inspections to potentially problematic tanks, such as those close to drinking water sources. In addition, one of the three EPA regions that we visited did not inspect tanks located on tribal land at this rate. According to EPA program managers, limited resources have prevented states from increasing their inspection activities. Officials in 40 states said that they would support a federal mandate requiring states to periodically inspect all tanks, in part because they expect that such a mandate would provide them needed leverage to obtain the requisite inspection staff and funding from their legislatures. Figure 2 illustrates the inspection practices states reported to us in our survey. While EPA has not established any required rate of inspections, it has been encouraging states to consider other ways to increase their rate of inspections, for example by using third-party inspectors, and a few have been able to do so. However, to obtain more consistent coverage nationwide, we suggested that the Congress establish a federal requirement for the physical inspections of all tanks on a periodic basis, and provide states authority to spend trust fund appropriations on inspection activities as a means to help states address any staff or resource limitations. In addition to more frequent inspections, a number of states said that they needed additional enforcement tools to correct problem tanks. As figure 3 illustrates, at the time of our survey, 27 states reported that they did not have the authority to prohibit suppliers from delivering fuel to stations with problem tanks, one of the most effective tools to ensure compliance. According to EPA program managers, this number has not changed. EPA believes, and we agree, that the law governing the tank program does not give the agency clear authority to regulate fuel suppliers and therefore prohibit their deliveries. 
As a result, we suggested that the Congress consider (1) authorizing EPA to prohibit delivery of fuel to tanks that do not comply with federal requirements, (2) establishing a federal requirement that states have similar authority, and (3) authorizing states to spend limited portions of their trust fund appropriations on enforcement activities. At the end of fiscal year 2002, EPA and states had completed cleanups of about 67 percent (284,602) of the 427,307 known releases at tank sites. Because states typically set priorities for their cleanups by first addressing those releases that pose the most risks, states may have already begun to clean up some of the worst releases to date. However, states still have to ensure that ongoing cleanups are completed for another 23 percent (99,427) and that cleanups are initiated at a backlog of 43,278 sites. EPA has also established a national goal of completing 18,000 to 23,000 cleanups each year through 2007. However, in addition to their known workload, states may likely face a potentially large but unknown future cleanup workload for several reasons: (1) as many as 200,000 tanks may be unregistered or abandoned and not assessed for leaks, according to an EPA estimate; (2) tens of thousands of empty and inactive tanks have not been permanently closed or had leaks identified; and (3) some states are reopening completed cleanups in locations where MTBE was subsequently detected. This increasing workload poses financial challenges for some states. In the June 2002 Vermont survey of state funding programs, nine states said they did not have adequate funding to cover their current program costs, let alone unanticipated future costs. For example, while tank owners and operators have the financial responsibility for cleaning up contamination from their tanks, there are no financially viable parties responsible for the abandoned tanks that states have not yet addressed. 
In addition, MTBE is being detected nationwide and its cleanup is costly. States reported that it could cost more to test for MTBE because additional steps are needed to ensure the contamination is not migrating farther than other contaminants, and MTBE can cause longer plumes of contamination, adding time and costs to cleanups. If there are no financially viable parties responsible for these cleanups, states may have to assume more of these costs. | Nationwide, underground storage tanks (UST) containing petroleum and other hazardous substances are leaking, thereby contaminating the soil and water, and posing health risks. The Environmental Protection Agency (EPA), which implements the UST program with the states, required tank owners to install leak detection and prevention equipment by the end of 1993 and 1998, respectively. The Congress asked GAO to determine to what extent (1) tanks comply with the requirements, (2) EPA and the states are inspecting tanks and enforcing requirements, (3) upgraded tanks still leak, and (4) EPA and states are cleaning up these leaks. In response, GAO conducted a survey of all states in 2000 and issued a report on its findings in May 2001. This testimony is based on that report, as well as updated information on program performance since that time. GAO estimated in its May 2001 report that 89 percent of the 693,107 tanks subject to UST rules had the leak prevention and detection equipment installed, but that more than 200,000 tanks were not being operated and maintained properly, increasing the chance of leaks. States responding to GAO's survey also reported that because of such problems, even tanks with the new equipment continued to leak. EPA and the states attributed these problems primarily to poorly trained staff. While EPA is working with states to identify additional training options, in December 2002, EPA reported that at least 19 to 26 percent of tanks still have problems.
EPA and states do not know how many upgraded tanks still leak because they do not physically inspect all tanks. EPA recommends that tanks be inspected once every 3 years, but more than half of the states do not do this. In addition, more than half of the states lack the authority to prohibit fuel deliveries to problem tanks--one of the most effective ways to enforce compliance. States said they did not have the funds, staff, or authority to inspect more tanks or more strongly enforce compliance. As of September 2002, EPA and states still had to ensure completion of cleanups for about 99,427 leaks, and initiation of cleanups at about another 43,278. States also face potentially large, but unknown, future workloads in addressing leaks from abandoned and unidentified tanks. Some states said that their current program costs exceed available funds, so states may seek additional federal support to help address this future workload. |
BLM’s mission is to manage public lands and resources to best serve the needs of the American people. The Bureau, which is part of the Department of the Interior (DOI), has 210 state, district, and resource area offices that manage about 270 million acres of public lands located in 28 states, primarily in the West and Alaska (see figure 1). BLM’s offices also manage another 300 million acres of subsurface mineral resources that underlie lands administered by other government agencies or are owned by private interests. BLM’s fiscal year 1995 appropriation totaled $1.24 billion. In fulfilling its mission, BLM develops land-use plans to balance multiple uses and competing demands, including ecosystem management, timber harvesting, mining, oil and gas production, watershed management, wildlife management, and recreation. It also designates and maintains land of critical environmental concern and is responsible for a major section of the National Spatial Data Infrastructure. In performing these functions, BLM maintains over 1 billion documents, including land surveys and surveyor notes, tract books, land patents, mining claims, oil and gas leases, and land and mineral case files. According to BLM, many of these paper documents are deteriorating, and some are illegible. Most of the documents are manually maintained and stored in a number of locations, although some have been entered into various databases since the 1970s. During the early 1980s, BLM found it could not handle the case processing workload associated with a peak in the number of applications for oil and gas leases. BLM recognized that to keep up with the increased demand it needed to automate its manual records and case processing activities. Thus, the Bureau began planning to acquire an automated land and mineral case processing system (ALMRS). At that time, BLM estimated the life-cycle cost of such a case processing system would be about $240 million. 
In 1988, BLM expanded the scope of ALMRS to include a land information system (LIS). This system was to provide automated information systems and geographic information system (GIS) technology support for other land management functions, such as land use and resource planning. BLM then combined the LIS with a project to modernize the Bureau’s computer and telecommunications equipment. BLM estimated the total life-cycle cost of this combined project to be $880 million. According to DOI and ALMRS project officials, the Office of Management and Budget (OMB) directed BLM to scale down the combined project in 1989 because of the projected high cost. The project, which was renamed ALMRS/Modernization, was reduced to three major components—the ALMRS Initial Operating Capability (ALMRS IOC), Geographic Coordinate Data Base (GCDB), and modernization of BLM’s computer and telecommunications infrastructure and rehost of selected management and administrative systems. Estimated life-cycle costs were cut to $575 million. In 1993, BLM reduced the ALMRS/Modernization 10-year life-cycle cost estimate from $575 million to $403 million, after the system development and deployment contract was awarded at a lower cost than had been anticipated. BLM has designated the ALMRS/Modernization project as a mission-critical system to (1) automate land and mineral records and case processing activities and (2) provide information to support land and resource management activities. The project is a large-scale effort that is expected to provide an efficient means to record, maintain, and retrieve land description, ownership, and use information to support BLM, other federal programs, and interested parties.
It is to accomplish this by (1) establishing a common information technology platform, (2) increasing public access to BLM records through the Internet, (3) integrating multiple databases into a single geographically referenced database, (4) shortening the time to complete case processing activities, and (5) replacing costly manual records with automated records. Appendix II provides an overview of the planned ALMRS/Modernization architecture. As noted above, the ALMRS/Modernization consists of three components—ALMRS IOC, GCDB, and technology modernization and rehost of selected systems. The ALMRS IOC component is to provide (1) support for case processing activities, including recording valid mining claims, processing mineral patents, and granting rights-of-way for roads and power corridors, and (2) information for land and resource management activities, including timber sales and grazing leases. The GCDB component is the database that will contain geographic coordinates and survey information for land parcels. Other databases, such as those containing land and mineral records, will be integrated with GCDB. The information technology modernization and rehost component consists of installing computer and telecommunications equipment and converting selected management and administrative systems to a relational database system that will be used throughout the Bureau. Between fiscal years 1983 and 1995, about $296.2 million had been appropriated for ALMRS/Modernization. According to project officials, obligations for ALMRS/Modernization totaled $262.8 million from 1983 through April 30, 1995. They expect obligations to equal appropriations by September 30, 1995. In 1993, OMB and BLM agreed to annual funding limits for ALMRS/Modernization through fiscal year 2002. As agreed, total spending was not to exceed $403 million for fiscal years 1983 through 2002.
However, to stay within the limit for fiscal year 1995, BLM delayed the initial hardware installation for the Alaska and Wyoming state, district, and resource area offices. Also, BLM estimates that it will exceed the fiscal year 1996 limit of $69.5 million by $25.2 million. BLM expects to obtain the $25.2 million from other parts of its operations. According to ALMRS/Modernization project officials, the increase is attributable to several factors, primarily requirements that were added after contract award. These requirements include system engineering studies for system architecture and system security issues, a requirement to integrate BLM’s remaining older personal computers and local area networks with the new ALMRS/Modernization systems, changes to more easily accommodate land record automation requirements of other Interior bureaus and federal agencies, and more training for users and technical staff. In addition, the ALMRS/Modernization project office now believes that operations and maintenance costs in fiscal years 1997 through 2002 will be more than the OMB and BLM funding agreement for that category. BLM is currently working on a new operations and maintenance estimate. BLM has completed most of the initial installation of computer and telecommunications equipment and has met most of its ALMRS IOC, GCDB, and rehost milestones thus far. As the ALMRS IOC development nears completion over the next several months, tasks will become more complex as the system is integrated and tested. BLM has taken action to maintain its tight development schedule, but slippages could still occur because there is little schedule time available to correct unanticipated problems. Also, BLM has recently taken action to obtain an independent assessment of the ALMRS IOC to help ensure that its requirements are met.
BLM has been meeting most of its schedule milestones for the initial installation of ALMRS IOC and modernization computer and telecommunications hardware. Thus far, BLM has installed (1) a mix of ALMRS IOC, office automation, E-mail, GIS servers, and telecommunications equipment primarily in eight state offices and their subordinate district and resource area offices and (2) about 4,400 of the planned 6,073 workstations in these offices. The Bureau plans to install 730 more workstations and other equipment in fiscal year 1995 at the Idaho and Utah state offices, their subordinate offices, and a support office. However, initial hardware installation for Alaska and Wyoming state and subordinate offices has been delayed because of a shortage of hardware funds in fiscal year 1995, according to ALMRS/Modernization project officials. BLM recently rescheduled the installation of servers and 951 workstations for these locations to fiscal year 1996. The collection and validation of land and mineral data for ALMRS IOC are on schedule for all ten state offices. The land and mineral data files are to be converted to INFORMIX after the installation and testing of final hardware upgrades and ALMRS IOC software. The development of ALMRS IOC software, which BLM divided into three phases or “builds,” is currently on schedule. Build 1, which consists of about 46,000 lines of code, was developed and successfully tested on time. BLM and the prime contractor have been working on about 124,000 lines of code for build 2. They expect to complete the software integration test for build 2 on September 12, 1995. BLM and the prime contractor estimate that about 120,000 lines of code will be developed in build 3 to complete the ALMRS IOC software. The software produced in builds 1, 2, and 3 will be integrated to form ALMRS IOC. As to the GCDB component, nine state offices are meeting or are ahead of the data collection milestones set in 1993. One state office, Montana, is behind schedule. 
The final test of the software to convert existing data files to INFORMIX is scheduled to be completed by January 12, 1996. BLM plans to convert the GCDB data files when ALMRS IOC is deployed in each state office. Finally, the administrative systems rehost effort is on schedule with all 13 of the planned software applications and related databases converted from COBOL to INFORMIX. Three of these applications have been rehosted to the ALMRS/Modernization equipment and are operational, one is in the process of being rehosted, six have been tested and accepted and will be rehosted, and three have undergone testing and are expected to be accepted soon. According to the Deputy Project Manager, BLM plans to update the systems before deploying them to satisfy users’ change requests that were held in abeyance while the systems were being converted to INFORMIX. Figure 2 shows future milestones for the software integration tests of builds 2 and 3, qualification test for ALMRS IOC (functionality and integration), acceptance of ALMRS IOC, and final installation of ALMRS IOC hardware upgrades and software. As the ALMRS/Modernization nears the final testing and implementation stages, the project work will become more complex and the schedule more demanding. The final tests will include assessing the ALMRS IOC software to determine whether it meets design specifications, software units properly interface with other units, software responds correctly and consistently to users, and hardware and software operate as expected at pilot sites and under various levels of workload. As with all development efforts, the actual performance of the new software systems will not be known until they are completed, fully tested, and deployed. Developing realistic project schedules is critical to managing the successful development of large software systems.
The General Services Administration has found that setting realistic project schedules is one of the ten most important factors in successfully developing large, complex federal computer systems. ALMRS/Modernization project officials and an Interior Senior Technical Analyst stated that the milestones were not based on an assessment of the time and resources needed, but instead were based on the need to complete the project by the end of fiscal year 1996—the deadline established in the OMB and BLM agreement. Nevertheless, project officials said they have been committed to completing the development and deployment of ALMRS as scheduled. Our analysis of the project schedule showed that several critical milestones are very close together with little recovery time available to deal with unanticipated problems that may be encountered. Therefore, slippages in the ALMRS/Modernization development and testing schedule could occur and impact project cost and completion plans. Similarly, slippages in the deployment of ALMRS IOC and database conversions could also impact project costs and completion plans because of the short installation periods scheduled for each state. As shown in table 1, BLM was allowing only 15 to 20 working days to perform the final installation of ALMRS IOC and convert databases in each state. ALMRS/Modernization project officials and an Interior Senior Technical Analyst agreed that both the development and testing milestones and deployment and database conversion milestones are very tight with little tolerance for slippages. Interior and BLM have been taking a number of actions to closely monitor the project status and schedule to avoid slippages. Interior Information Resources Management (IRM) officials have been conducting periodic oversight reviews and have required project officials to address project schedule issues. 
BLM has also established a consolidated project schedule that includes BLM’s and the prime contractor’s tasks to estimate and monitor the entire project schedule. Finally, BLM advanced the date for the software integration test for build 2 to provide additional time to deal with any unexpected problems. BLM recently revised the installation schedule because of an anticipated reduction in funding for fiscal year 1996. Specifically, the Bureau rescheduled the final ALMRS IOC installation and database conversions from fiscal year 1996 to 1997. Verification and validation of software is a widely accepted practice advocated by Federal Information Processing Standards Publication 132. Verification and validation is a formal process to assess the products of each phase of a system’s life cycle, including concept, requirements, design, testing, implementation and installation, and operations and maintenance. Typically, the assessments are performed by someone not involved in developing the software to help ensure that the software meets the organization’s requirements, that software development and maintenance costs will not escalate unexpectedly, and that software quality is acceptable. Recently, project officials decided to obtain an independent verification and validation of ALMRS IOC software in response to direction from the House Committee on Appropriations. This action should help ensure that the software meets BLM’s stated requirements and provides the support expected from this mission-critical system. Stress testing automated systems before deploying them is a common industry practice. Such testing is done to ensure that the entire system will successfully process workloads expected during peak operating periods and to determine the point at which major system resources (e.g., servers, workstations, storage devices, and local and wide area networks) will be exhausted.
BLM plans to perform a 30-day acceptance test of the ALMRS IOC at pilot sites to assess functionality and performance in an operational setting. During this period, BLM also plans to stress test the ALMRS IOC (i.e., state and district office ALMRS IOC servers, terminals, and workstations) in a network environment. If ALMRS IOC performs successfully at the end of the test, BLM will accept and install it throughout all of its offices.

However, BLM's stress-test plans cover only the ALMRS IOC. The plans do not examine how the entire ALMRS/Modernization—including ALMRS IOC, office automation, E-mail, administrative systems, and various departmental, state, and district applications in a network environment—will perform under peak workload conditions. While ALMRS IOC is the largest and most significant component in the initial deployment of BLM's modernization effort, other systems and applications are expected to place considerable demand on the ALMRS/Modernization computer systems and communications networks. By limiting the stress testing to ALMRS IOC, BLM will deploy the ALMRS/Modernization nationwide without knowing whether it can perform as intended during peak workloads.

To date, the Bureau has been completing most of the project tasks according to the schedule milestones established in 1993. However, the project schedule could slip because there is little time available to deal with unexpected problems. Further, over the next several months, BLM and the prime contractor will be working on the more difficult tasks of completing, integrating, and testing ALMRS IOC. BLM's recent action to obtain independent verification and validation of ALMRS IOC software should help ensure that BLM's requirements are met. However, the Bureau's plan to stress test only the ALMRS IOC portion of the modernized system is not sufficient.
Stress testing only a portion of the modernized system will not provide assurance that all of the systems and technology to be deployed can successfully process the workloads expected during peak operating periods.

We recommend that the Director, BLM, ensure that the entire ALMRS/Modernization is thoroughly stress tested before it is deployed throughout the Bureau.

In commenting on a draft of this report, BLM stated that it agreed with our conclusions and recommendation. The Bureau said it now plans to stress test the entire ALMRS/Modernization to ensure that all systems and technology can process the workloads expected during peak operating conditions. As previously noted, the Bureau said it has contracted for an independent verification and validation of the ALMRS IOC software in response to direction by the House Committee on Appropriations to perform a verification and validation test. BLM also suggested some clarifications and provided additional information for our report. We have incorporated these suggestions and information as appropriate.

As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will provide copies to the Secretary of the Interior; the Director, Bureau of Land Management; the Director, Office of Management and Budget; and interested congressional committees. We will also make copies available to others upon request. Please call me at (202) 512-6253 if you or your staff have any questions concerning this report. Other major contributors are listed in appendix III.
To ascertain BLM's progress in developing and implementing the ALMRS/Modernization, we reviewed ALMRS/Modernization project documents, DOI reports, a Department of the Treasury report, BLM studies on ALMRS/Modernization project development, General Services Administration IRM publications, Federal Information Processing Standards Publication 132, OMB Circular A-130, and GAO reports on large-scale systems development projects. We also attended departmental project reviews at the ALMRS/Modernization project office in Lakewood, Colorado, and reviewed the minutes of four prior project reviews. We discussed the planned capabilities of the system, technical complexity, and development progress with prime contractor officials, a DOI Senior Technical Analyst, and ALMRS/Modernization project officials responsible for systems engineering, software development, and project management. We also discussed with ALMRS/Modernization project officials and BLM Headquarters officials the planning and development history of ALMRS/Modernization, testing plans, and efforts to follow industry practices. We analyzed project milestones against current progress and reviewed the remaining tasks for their complexity.

We reviewed and analyzed ALMRS/Modernization project estimates and fiscal year 1996 budget justifications and documentation. We also compared BLM's fiscal year 1996 budget request for the ALMRS/Modernization with its cost estimate for fiscal year 1996. We reviewed BLM's options paper for ALMRS/Modernization operations and maintenance funding through fiscal year 2001 and discussed it with the ALMRS/Modernization Deputy Project Manager and the project budget analyst. We interviewed ALMRS/Modernization project officials and a Department Senior Technical Analyst on the ALMRS/Modernization total project budget and milestones. Budget estimates were collected from the ALMRS/Modernization Deputy Project Manager, budget analysts, and other BLM Headquarters representatives.
These estimates were confirmed by the Department's IRM office; however, we did not independently verify the accuracy of the estimates. Our work was performed between March 1995 and August 1995, in accordance with generally accepted government auditing standards. We performed our work at the Department's IRM headquarters and BLM headquarters in Washington, D.C., and at the ALMRS/Modernization Project Office and prime contractor's office in Lakewood, Colorado. We requested comments on a draft of this report from the Director, Bureau of Land Management. In response, we received comments from the Chief, Office of Information Resources Management/Modernization, Bureau of Land Management. We have incorporated these comments as appropriate.

The ALMRS/Modernization system—slated for deployment at approximately 200 BLM sites around the country—is to be implemented on a common information technology platform. The platform will be composed of servers, terminals, workstations, switching hubs, multiplexers, modems, and firewalls interconnected via local, state, and national-level networks. As planned, the ALMRS environment will initially support existing automated systems, including legacy local area networks and microcomputers. BLM expects that a typical state office installation will consist of several servers supporting major application groups—ALMRS IOC and related databases, office automation applications, GIS applications and related GCDB databases, and E-mail. A typical state office is to provide land and mineral resource data through the state ALMRS IOC server to district and resource area offices. State offices are to be interconnected via a Department of the Interior network. Each district or resource area office is to have its own GIS and office automation servers. BLM users are to access applications via terminals and workstations interconnected through the local, state, and DOI networks.
The public is to have access to selected ALMRS information in public access rooms equipped with stand-alone ALMRS IOC servers and terminals. The public access systems are expected to be isolated from the state and district office ALMRS IOC systems for security purposes. BLM is also planning to provide connections to the Internet. The Bureau plans to protect each state office with a firewall system—a security device designed to protect the BLM systems from intrusion by hackers. Figure II.1 shows a high-level overview of the ALMRS/Modernization environment.

Major contributors: Accounting and Information Management Division, Washington, D.C.: David G. Gill, Assistant Director; Mirko J. Dolak, Technical Assistant Director; Marcia C. Washington, Senior Information Systems Analyst.

Pursuant to a congressional request, GAO reviewed the Bureau of Land Management's (BLM) modernization of its Automated Land and Mineral Record System (ALMRS), focusing on: (1) BLM progress in developing and implementing ALMRS modernization; and (2) potential modernization risks.
GAO found that: (1) although BLM initiated ALMRS planning in the early 1980s, it did not award the modernization contract until 1993 because of numerous changes in the project's concept and scope; (2) BLM has installed most of its initial computer and telecommunications equipment and has met most of its schedule milestones, but it is deferring some equipment deployment until fiscal year (FY) 1996 and FY 1997 because of a lack of funds; (3) project costs are expected to exceed the FY 1996 spending limit by $25.5 million due to added system requirements; (4) schedule slippages may occur because ALMRS modernization is becoming more complicated and BLM has allocated little time to deal with unanticipated problems; and (5) although BLM has recently obtained independent verification and validation of new ALMRS software to ensure that it meets BLM needs, BLM does not plan to stress test the entire ALMRS modernization project to assess its ability to meet anticipated peak workloads.
While DOD does not have a standard, department-wide definition of the IoT, the department has identified a number of existing definitions of it. As noted previously, a 2016 Defense Science Board study defined the IoT as the set of Internet Protocol-addressable devices that interact with the physical environment, noting that "IoT devices typically contain elements for sensing, communications, computational processing, and actuation." The study identified that IoT devices span a range of complexity and size, including thermostats, traffic lights, televisions, mini-drones, and full-size vehicles. A 2016 DOD Chief Information Officer policy paper on the IoT cited a definition from a non-DOD organization. According to this definition, the IoT consists of two foundational things: (1) the Internet itself, and (2) semi-autonomous devices (the "things") that leverage inexpensive computing, networking, sensing, and actuating capabilities in uniquely identified implementations to sense the physical world and act on it. Such devices have the capability to connect to the Internet, being Internet Protocol-based, but may also be deployed in stand-alone Internet Protocol networks. These DOD IoT definitions describe devices having the characteristics of sensing, communicating (or networking), computing (or processing), and actuating, all leveraging the Internet Protocol.

Figure 1 depicts typical data flows from a range of IoT devices—smartphones, smart watches, cars, buildings, and televisions—where data are collected, transmitted, and analyzed before leading to commands back to the devices or inputs to decision makers. Consumers and senior leaders in industry or public-sector organizations, such as DOD, can potentially act on IoT device data.

In a 2016 report, we provided a primer on the IoT that highlighted key benefits of IoT devices, categories of devices, a future outlook for IoT, and security challenges posed by the devices.
We reported that security vulnerabilities in many IoT devices can arise for several reasons, including (1) a lack of security standards addressing unique IoT needs; (2) a lack of better incentives for developing secure devices; and (3) the decreasing size of such devices—which limits the computational power that is currently available to implement security protections. The primer cites reports of wireless medical devices being taken over and controlled; of a widespread wireless standard for IoT devices used in smart energy being compromised; and of gas stations' tank-monitoring systems having no passwords, thereby potentially exposing the pumps to a risk of being shut down. These security challenges could potentially impact DOD hospitals and facility energy and fuel systems where managers may consider using or deploying IoT devices.

In May 2017, we issued a technology assessment on the IoT that defined the concept of the IoT, described its uses, highlighted its benefits, and discussed its potential implications, including security challenges. We reported that adoption of the IoT across the different sectors has amplified the challenge of designing and implementing effective security controls by bringing the potential effects of poor security into homes, factories, and communities. In addition, the technology assessment noted a security risk whereby unauthorized individuals or organizations might gain access to these devices and use them for potentially malicious purposes, including fraud or sabotage. The lack of attention to security in designing IoT devices and the predominant use of cloud computing to provide connectivity with these devices pose unique security challenges. These challenges have direct implications for DOD as the department considers how to develop and deploy these devices.
As cyber threats grow increasingly sophisticated, the need to manage and bolster the cybersecurity of IoT products and services is increasingly critical, according to our technology assessment. According to the assessment, while many industry-specific standards and best practices address information security, standards and best practices that are specific to IoT technologies are either still in development or not widely adopted. Any device that is connected to the Internet is at risk of being compromised if it does not have adequate access controls.

According to DOD officials, no one specific DOD office or entity is responsible for IoT security. Instead, various DOD organizations have roles and responsibilities related to IoT security risks.

For example, the Office of the DOD Chief Information Officer is charged with developing the department's cybersecurity policy and guidance, as well as policy regarding the continuous monitoring of DOD information technology. The DOD Chief Information Officer has issued instructions on cybersecurity, a risk management framework for DOD information technology, and the use of Internet-based capabilities to collect, store, and disseminate information.

Within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment oversees the cybersecurity of industrial control systems on DOD's facilities—systems that contain IoT devices—and establishes design criteria for these systems that include cybersecurity requirements.

The Office of the Under Secretary of Defense for Intelligence establishes and oversees the implementation of policies and procedures for the conduct of DOD operations security, physical security, and information security. The office has established policy calling for all DOD missions, programs, functions, and activities to be protected by an operations security program.
The Office of the Principal Cyber Advisor to the Secretary of Defense is responsible for overall supervision of cyber activities related to, among other things, defense of DOD networks, including oversight of policy and operational considerations.

The Joint Staff provides guidance on mission assurance assessments—installation-level assessments that integrate information on asset criticality, area-specific hazards and threats, and vulnerabilities that could be exploited—and consolidates reporting. The assessments should include benchmarks for the cybersecurity of wireless and portable electronic devices.

The military services and DOD agencies are to conduct assessments and surveys of their operations security. Additionally, the military services and DOD agencies are to delegate responsibilities for mission assurance assessments and to ensure that information technology under their authority complies with the department's risk management framework.

The Defense Information Systems Agency provides security guidance for DOD-owned smartphones and wireless systems. Command Cyber Readiness Inspection teams conduct oversight and assess implementation of this guidance, according to DOD officials.

DOD documents and officials identified numerous security risks with IoT devices—as highlighted in table 1—that can generally be divided into risks with the devices themselves and risks with the devices' operational implications.

IoT devices pose numerous risks by how they are designed, manufactured, and configured. According to DOD officials, there is little incentive for manufacturers to design security functions into the software or hardware of their products, resulting in little thought or effort given to security. A DOD Chief Information Officer policy paper also states that IoT devices may be subverted during their manufacture and distribution at various points in the supply chain—thereby rendering the cyber attacker's job easier.
With respect to IoT configuration, a 2016 DOD report notes that IoT devices are often sold with old and unpatched software, which can lead to the device being exploited as soon as it is taken out of the box. Poor password management is another cybersecurity risk. According to the DOD report, a majority of IoT cloud services allow the user to choose weak passwords—such as "1234"—and, in some cases, prevent the user from using strong passwords.

Given their functionality and capabilities, IoT devices also pose security risks with their operational implications. DOD officials told us that rogue wireless devices in secure areas could provide a pathway for adversaries to collect classified or sensitive information. For example, a cell phone could be concealed and "pocket dialed" such that ambient conversations are recorded or transmitted.

Similar to rogue wireless devices, rogue applications also pose risks. According to a DOD report, in 2016 a smartphone gaming application was released that makes use of the global positioning system and the camera of the device on which it is installed. The report cautions that installing the game may lead to the application gaining full access to a user's email account. Whether on personal or DOD-issued devices, the potential of such applications to collect location and photographic data on DOD personnel or units and communicate these data to third parties has raised DOD operations security risks.

DOD's 2016 DOD Policy Recommendations for the Internet of Things (IoT) also laid out operations security implications of IoT devices, particularly with the expanded aggregation of information. Specifically, it discussed how information collected through various IoT devices and then aggregated could inform adversaries about DOD capabilities or deployments. For example, an adversary could gather information related to which people were present or which organizations were working overtime.
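The weak-password risk noted earlier in this section, where services accept values such as "1234", can be screened for with a simple policy check. The following sketch is purely illustrative: the weak-password list, length threshold, and character-class rule are hypothetical and do not reflect any actual DOD password guidance.

```python
import re

# Small illustrative deny-list; real deployments use much larger lists.
COMMON_WEAK = {"1234", "12345", "123456", "password", "admin", "qwerty"}

def is_acceptable_password(candidate, min_length=12):
    """Reject passwords that are on a known-weak list, too short,
    or lacking a mix of character classes (thresholds illustrative)."""
    if candidate.lower() in COMMON_WEAK:
        return False
    if len(candidate) < min_length:
        return False
    classes = [
        re.search(r"[a-z]", candidate),        # lowercase letter
        re.search(r"[A-Z]", candidate),        # uppercase letter
        re.search(r"[0-9]", candidate),        # digit
        re.search(r"[^A-Za-z0-9]", candidate), # symbol
    ]
    return sum(1 for c in classes if c) >= 3

print(is_acceptable_password("1234"))              # prints False
print(is_acceptable_password("Correct-Horse-42"))  # prints True
```

A service enforcing such a check at account creation would address the report's concern that some IoT cloud services not only permit weak passwords but prevent strong ones.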
The department has also identified notional threat scenarios that exemplify how these security risks could adversely impact DOD operations, equipment, or personnel. DOD documents and officials from a number of organizations—including the Office of the Secretary of Defense, Joint Force Headquarters-DOD Information Networks, and the Navy—discussed with us a number of notional threat scenarios. Figure 2 highlights a few examples of these scenarios.

The first notional IoT scenario, the "sabotage of mission," illustrates a few security risks that could adversely impact DOD operations. The increase of IoT devices used to monitor and control DOD infrastructure could increase the number of attack points through which a network or system could be attacked. Many of these devices are insecure because of a limited ability to patch and upgrade devices, or due to poor security design. As a result, the successful penetration of a smart electrical meter could lead to cascading effects that negatively impact an industrial control system and degrade an ongoing mission.

In the second notional IoT scenario, "sabotage of equipment," the combination of poor password management and an insider threat could lead to unauthorized access to a utility system, such as a water system in a dry dock. The insider threat could then manipulate the water control system to flood the dry dock and damage the ship, according to Navy officials.

The third notional IoT scenario, "operations security and intelligence collection," illustrates the adverse impacts on operations security that can emerge from smart televisions. The scenario involves a television with limited cybersecurity controls being targeted by commercial providers or adversaries to collect information for malicious purposes.

The fourth notional IoT scenario, the "endangerment of leadership," depicts how an adversary could exploit a car equipped with IoT capabilities.
Here, an adversary—for example, by exploiting poor security in the car's devices—could hack a senior DOD official's car to monitor conversations, take control of car functions, or endanger the lives of senior DOD leaders in the car.

While DOD has conducted some assessments to examine security risks with IoT devices, threat-based comprehensive operations security surveys (hereinafter referred to as "operations security surveys") that could examine such risks are not being conducted. DOD requires different types of assessments to protect DOD information residing on and outside the department's networks. Some of these assessments can be used to identify and examine security risks related to IoT devices. Such assessments include mission assurance assessments, specific threat assessments from the intelligence community—such as the Defense Intelligence Agency's April 2016 Threats via the Internet of Things—and operations security surveys.

According to DOD Directive 3020.40, Mission Assurance, DOD component heads are responsible for implementing the mission assurance process and developing assessments. Mission assurance assessments are installation-level assessments that integrate information on asset criticality, area-specific threats, and vulnerabilities. According to the concept of operations, the mission assurance assessments should examine, among other things, security risks related to infrastructure devices. The 2015 DOD Mission Assurance Assessment Benchmarks lays out specific cybersecurity operations benchmarks, or best practices, that mission assurance assessment teams can use to examine and address security risks related to IoT devices.
Some of these benchmarks include: (1) implementing security policies and configurations to ensure secure wireless access into the networks, and taking measures to prevent unauthorized wireless access; (2) conducting vulnerability scans; (3) determining the extent to which remote access is allowed or necessary; and (4) checking the current configuration information for all industrial control system components.

To date, DOD has conducted a number of mission assurance assessments. Three of the four military services—the Army, Navy, and Marine Corps—conducted these assessments and identified cybersecurity risks related to IoT devices on critical infrastructure. While the Air Force did not conduct any assessments in 2016, the service plans to conduct mission assurance assessments in 2017, according to service officials. These officials noted that their assessments will have a limited focus on devices.

A 2015 assessment conducted on an Army facility detected cybersecurity vulnerabilities with its IoT devices. The assessment identified how an adversary could hack into industrial control systems' wireless devices, leading to cascading effects and mission degradation. Additionally, the cybersecurity vulnerabilities of IoT devices in this mission assurance assessment were linked to the benchmarks. Navy and Marine Corps mission assurance assessments also contained recommendations to address IoT cybersecurity vulnerabilities, such as unauthorized communication of information to third parties, rogue wireless devices, and poor security design in the devices. Regarding the unauthorized communication of information to third parties, Marine Corps officials expressed concern over the potential capture of electronic data from a base and transmission of the data to unknown individuals or entities.
Some mission assurance assessments recommended discontinuing remote access to systems where possible, implementing wireless intrusion detection systems to detect unauthorized devices, implementing a configuration management process, and conducting vulnerability scans.

Assessments from the intelligence community have also identified cybersecurity risks related to IoT devices. For example, officials from the Office of the Director of National Intelligence published an essay on challenges with IoT in which they noted that IoT devices present a rich target for attackers and pose a range of potential risks, including eavesdropping and unauthorized access.

According to DOD Directive 5205.02E, DOD Operations Security Program, DOD components must conduct operations security surveys, at a minimum, every 3 years. Also, DOD's Operations Security Program Manual 5205.02-M requires a threat analysis that includes identifying potential adversaries and their associated capabilities to collect, analyze, and exploit critical information as an essential step in the operations security process. This could potentially include information collected by IoT devices. The Under Secretary of Defense for Intelligence is also required to report annually to the Secretary of Defense on the status of the DOD operations security program.

According to DOD officials, IoT devices pose significant risks to operations security. Officials cited the geolocation capability of some IoT devices as a particular concern—specifically, how the location of troops or personnel could be revealed. Another concern is the ability to use IoT devices to clandestinely record conversations. Military service and agency officials cited smart televisions as an example of an IoT device that could secretly record conversations of DOD personnel.

DOD has a number of policies as well as guidance for IoT devices, including wearable devices, portable electronic devices, smartphones, and infrastructure devices.
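One of the mission assurance recommendations above, wireless intrusion detection to flag unauthorized devices, reduces at its simplest to comparing the devices observed on a network against an inventory of issued equipment. The following sketch illustrates that comparison; all addresses and the inventory itself are hypothetical, and a production system would use live wireless monitoring rather than a static list.

```python
# Hypothetical inventory of issued, authorized device addresses.
AUTHORIZED_DEVICES = {
    "aa:bb:cc:00:00:01",  # e.g., an issued smartphone
    "aa:bb:cc:00:00:02",  # e.g., a facility sensor
}

def detect_rogue_devices(observed_macs):
    """Return addresses seen on the wireless network that are not in
    the authorized inventory -- candidates for investigation."""
    return sorted(set(observed_macs) - AUTHORIZED_DEVICES)

# A device appearing twice is reported once; unknowns are flagged.
seen = ["aa:bb:cc:00:00:01", "de:ad:be:ef:00:99", "aa:bb:cc:00:00:01"]
print(detect_rogue_devices(seen))  # prints ['de:ad:be:ef:00:99']
```

Real wireless intrusion detection systems add continuous monitoring, alerting, and checks for spoofed addresses, but the core allow-list comparison is the same.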
Some gaps remain, however, with respect to how DOD addresses security risks associated with IoT in its policies and guidance.

DOD has issued a number of policies and guidance for IoT devices, including personal wearable fitness devices, portable electronic devices, smartphones, and infrastructure devices associated with industrial control systems. Generally, these policies and guidance apply across the department's components. Additionally, many of DOD's policies and guidance address IoT devices based on areas where classified information is processed, and where it is not. Some military services and agencies have issued additional policy and guidance, such as on personal wearable fitness devices and portable electronic devices. Figure 3 highlights examples of existing DOD policies and guidance for different types of IoT devices. The figure also lists the DOD sponsor of the policy or guidance, the owner of the device, and the type of device for which the policy or guidance applies. This list may not include all department-wide or component policies and guidance on IoT devices but is intended to show a range of policies and guidance on IoT devices.

The DOD Chief Information Officer issued a DOD-wide policy on personal wearable fitness devices (e.g., step counting, heart rate monitoring). Other DOD components—including at least two military services and the National Security Agency—have issued similar guidance on these personal devices. The DOD Chief Information Officer policy addresses the use of personally owned (or government-furnished) devices that meet certain requirements in areas where classified information is stored, processed, or transmitted—authorizing these devices in DOD facilities up to the "top secret" level. The policy prohibits devices with photographic, video recording, or microphone or audio recording capabilities, and requires that wireless or connectivity capabilities be disabled.
The DOD Chief Information Officer issued a DOD instruction on portable electronic devices able to connect to DOD unclassified and classified wireless local area networks. This instruction identifies a minimum set of security measures, such as antivirus software, encryption, and personal firewalls, that must be present in unclassified wireless local area network-enabled portable electronic devices. Several DOD components—including the Defense Information Systems Agency, the Defense Intelligence Agency, and the Department of the Navy—have also issued policies and guidance on these devices. For example, Defense Intelligence Agency employees and visitors must not use video, wireless, photographic, or other recording capabilities of any personally owned portable electronic devices within any agency spaces unless approved in advance for special events (e.g., promotion ceremonies conducted in common areas). Generally, personally owned portable electronic devices with photographic, video recording, audio recording, or wireless transmission capabilities are prohibited in areas where classified information is processed and in other restricted areas.

The Defense Information Systems Agency has issued a number of policies as well as guidance that apply to DOD-owned smartphones, including mobile Security Requirements Guides and Security Technical Implementation Guides for specific smartphones (e.g., Apple, Blackberry, and Samsung). For example, the Security Technical Implementation Guides state that department personnel should disable: (1) data transfers using the Bluetooth capability on DOD's Blackberry phones; (2) data storage in the iCloud on DOD's Apple phones; and (3) voice dialing on DOD's Apple phones.

DOD has department-wide policy and guidance that addresses infrastructure devices (e.g., smart electric meters) within industrial control systems.
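Guidance of the kind described above, which requires particular device capabilities to be disabled, is commonly enforced by comparing a device's reported configuration against a required baseline. The sketch below illustrates that comparison; the setting names and baseline values are hypothetical stand-ins, not taken from the actual implementation guides.

```python
# Hypothetical required baseline, loosely modeled on the kinds of
# settings the implementation guides above describe.
REQUIRED_BASELINE = {
    "bluetooth_data_transfer": False,
    "cloud_data_storage": False,
    "voice_dialing": False,
}

def compliance_findings(device_config):
    """Return findings where a device's reported settings deviate
    from the required baseline (missing settings are reported too)."""
    findings = []
    for setting, required in REQUIRED_BASELINE.items():
        actual = device_config.get(setting)
        if actual != required:
            findings.append(
                f"{setting}: required={required}, reported={actual}")
    return findings

device = {"bluetooth_data_transfer": True,
          "cloud_data_storage": False,
          "voice_dialing": False}
for finding in compliance_findings(device):
    print(finding)  # flags the enabled Bluetooth transfer
```

In practice, inspection teams run automated checkers of this form across fleets of managed devices and report deviations for remediation.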
The Unified Facilities Criteria: Cybersecurity of Facility-Related Control Systems lays out criteria for the inclusion of cybersecurity in the design of control systems down to the device level. For example, at the IoT device level, some of these cybersecurity controls include (a) the avoidance of wireless communications to the greatest extent possible; (b) the implementation of authentication between devices, if possible; and (c) the avoidance of mobile code—i.e., code that is downloaded and executed without explicit user action. Additionally, the Advanced Cyber Industrial Control System Tactics, Techniques, and Procedures (ACI TTP) for Department of Defense (DOD) Industrial Control Systems (ICS) offers guidance and identifies procedures that include infrastructure devices. This guidance identifies device anomalies that could indicate a cyber incident, specific detection procedures to assess the anomaly, and procedures to recover electronic devices, including removing and replacing the device.

DOD policies highlight the importance of protecting and securing DOD information from any potential adversaries. DOD Directive 8000.01, Management of the Department of Defense Information Enterprise, states that information is considered a strategic asset to DOD and must be safeguarded, appropriately secured and shared, and made available to authorized personnel to the maximum extent allowed by law, policy, and mission requirements. Similarly, DOD Directive 5205.02E, DOD Operations Security (OPSEC) Program, directs that DOD personnel maintain essential secrecy of information that would be useful to adversaries, and that countermeasures be employed to deny adversaries any potential indicators that reveal critical information about DOD missions.
Federal internal control standards also require that management evaluate security threats to information technology, which can come from both internal and external sources, and periodically review policies and procedures for continued relevance and effectiveness in addressing related risks. For example, the federal standards note that external threats are particularly important for entities dependent on telecommunications networks and the Internet, and that continual effort is required to address these risks. DOD officials told us that existing DOD policies and guidance do not clearly address security risks relating to smart televisions, particularly smart televisions in unsecure areas. Officials from military services and other DOD components described smart televisions as a risk to operations security due, in part, to the ability of commercial providers to access the devices remotely—potentially eavesdropping on conversations or sending recordings of these conversations to third parties. Although they acknowledged the need for such policies, Navy and Marine Corps officials stated that they do not have service-wide policies addressing cybersecurity controls for smart televisions. Officials from Joint Force Headquarters-DOD Information Networks highlighted the potential to “hop” (i.e., gain access) from smart televisions to personal smartphones in close proximity and thereby possibly gain access to non-DOD networks—potentially leading to the collection of data on DOD personnel. Additionally, DOD officials affirmed that existing DOD policies and guidance do not clearly address security risks of applications installed on DOD-issued mobile devices. These risks include rogue applications and the unauthorized communication of data to third parties. 
For example, these officials highlighted the need for policies that could lead to the automatic removal of unauthorized applications from DOD mobile devices or restrictions on the number of parties to whom data are transmitted from an application. DOD officials confirmed that one gaming application—an example of a rogue application—was downloaded on some unclassified DOD-issued phones. Similarly, a DOD report further identifies the dangers of downloading certain applications and unwittingly granting third parties access to a host of personal information on one’s own phone. According to a Defense Information Systems Agency official, other mobile applications will likely be downloaded with similar security implications unless the policy recommendations noted above are implemented. Core DOD security policies and guidance on cybersecurity, operations security, information security, and physical security do not address IoT devices. First, DOD Instruction 8500.01, Cybersecurity, and DOD Instruction 8510.01, Risk Management Framework (RMF) for DOD Information Technology (IT)—core DOD policies on cybersecurity—do not provide policy and guidance for IoT devices. Although these instructions may apply to IoT devices that are part of a larger system, they neither focus on these devices nor clearly address security risks specific to these devices. DOD officials acknowledged that these instructions do not focus on IoT devices. Moreover, the DOD Chief Information Officer’s DOD Policy Recommendations for the Internet of Things (IoT) recommends a number of policy tenets to inform changes to DOD’s cybersecurity policies, including encryption of IoT data, monitoring of IoT networks for anomalous traffic, and active management of supply chains for IoT devices. Second, core DOD policies and guidance on operations security do not address IoT devices. As noted earlier, an adverse impact on operations security is a key security risk that DOD identified with IoT devices. 
Although these core operations security policy documents refer to Internet-based capabilities and the data collection capabilities of potential adversaries, they do not offer guidance to mitigate the risks to operations security associated with these devices. Additionally, a key DOD official with department-wide oversight over operations security agreed that DOD policy on operations security could be enhanced by providing guidance and focusing on IoT devices, including a taxonomy for such devices. Third, core DOD policies and guidance we reviewed on information security relating to unclassified DOD information do not address IoT devices. In a 2017 report, we noted that the rapid adoption of IoT devices, the lack of attention to security in the design phase, and the predominant use of cloud computing to provide connectivity with these devices pose unique information security challenges—challenges that could be mitigated in part with DOD guidance on information security. Lastly, core DOD policies and guidance on physical security do not address IoT devices. For example, in one DOD threat scenario, a malicious actor compromises an Internet-connected car of a DOD senior leader and unlocks the doors to abduct the passengers. Table 2 below summarizes core DOD security policies and guidance we reviewed that do not address security risks related to IoT devices. DOD has developed guidance and detailed procedures for defending industrial control systems against cyber attacks. As noted previously, DOD’s Advanced Cyber Industrial Control System Tactics, Techniques, and Procedures (ACI TTP) for Department of Defense (DOD) Industrial Control Systems (ICS) offers guidance to DOD components and identifies procedures for infrastructure devices, including procedures to assess device anomalies and to recover devices that may have been targeted in cyber attacks. According to U.S. Cyber Command officials, the procedures were tested and validated over the course of 2 years, and U.S. 
Cyber Command also trained and tested the procedures with Navy personnel over a 2-week period to assess their effectiveness. Although the procedures were found to be effective, DOD does not have a policy that directs the implementation of these procedures throughout the department, according to DOD officials. For example, a DOD installations official cited the need to modify existing and future contracts with vendors of utility services to ensure that these cybersecurity procedures would be put in place. Further, Navy and Air Force officials stated that their services do not have a defined plan in place to implement the advanced cyber industrial control system tactics, techniques, and procedures. Navy officials expressed their intent to fully adopt these procedures; however, they cited a current lack of resources and the strain on system operators—who are more focused on non-security issues—as reasons for not yet having implemented the procedures. In addition to the assessments, policies, and guidance discussed above, DOD has taken other actions to address IoT-related security risks. These ongoing efforts include an inventory of systems that incorporate IoT devices, the establishment of forums to discuss DOD IoT policies, and the research of IoT security issues. Inventory of industrial control systems effort: In March 2016, the Office of the Assistant Secretary of Defense (Energy, Installations, and Environment) directed the military departments and certain other DOD components to develop plans to implement cybersecurity controls on their facility industrial control systems, including devices and sensors. All of the military departments drafted and submitted implementation plans or a strategy to the Office of the Assistant Secretary of Defense (Energy, Installations, and Environment) by February 2017. 
After the initial inventory phase, DOD components are to make their control systems resilient to cyber threats and to implement a continuous monitoring process to respond to emerging threats. The department’s goal is to implement cybersecurity controls on the most critical control systems by the end of fiscal year 2019. These actions would be consistent with the National Defense Authorization Act for Fiscal Year 2017, which requires DOD to take actions on the cybersecurity of its industrial control systems, and with our recommendation in a prior report. IoT Forum: According to officials in the Office of the DOD Chief Information Officer, the office has established an informal IoT working group for DOD officials working on IoT issues. The group has attended IoT workshops and developed a paper on the IoT. The group authored and published the policy paper DOD Policy Recommendations for the Internet of Things (IoT) in December 2016 to raise awareness of IoT issues. As noted previously, the report discusses the definition of the IoT, the benefits and cybersecurity risks of IoT devices, potential IoT threat scenarios, and DOD policy tenets for addressing the IoT. According to an official in the Office of the DOD Chief Information Officer, the office’s next steps are to establish an IoT community of interest and to produce another IoT report that focuses on DOD component responsibilities and more detailed policy analysis. Research and testing efforts: The Defense Advanced Research Projects Agency has a few ongoing research programs that relate to IoT security issues. The Leveraging the Analog Domain for Security program seeks to develop new cyber techniques in digital devices by monitoring their analog emissions (e.g., radio waves, sound waves, micro-power changes) and is projected to continue through December 2019. 
By studying analog signals radiating from IoT devices, researchers intend to better monitor IoT devices and detect deviations from normal device behavior to provide protection for DOD networks. Additionally, the Vetting Commodity Information Technology Software and Firmware program aims to develop checks for broad classes of malicious features and dangerous flaws in software and firmware. The program includes the IoT and other devices and is projected to continue through September 2017. The program seeks to address the department’s need to ensure that the devices and equipment it procures—much of it produced overseas—do not contain hidden code or malware; this could help address the supply chain risk noted previously. The IoT and IoT devices represent the wave of the future for the global economy, from infrastructure to public services to consumer use. DOD will likely be involved in using these devices for the foreseeable future. However, IoT devices pose numerous security challenges that need to be addressed, both in specific instances and as part of a holistic approach to risk management in the information age. DOD has made some progress in addressing the security challenges we identify in this report, including: (1) identifying a number of IoT security risks and notional threat scenarios; (2) examining security risks of IoT devices by conducting assessments on critical infrastructure; (3) developing policies and guidance for IoT devices; and (4) establishing ongoing efforts, such as research programs, to mitigate the security risks with these devices. 
DOD could capitalize on this progress by further addressing challenges we found in the following areas: the lack of operations security surveys that could identify and mitigate security risks of IoT; insufficient DOD policies and guidance for specific IoT devices and applications of concern (e.g., smart televisions and smartphone applications); and the need for DOD core security policies (e.g., cybersecurity, operations security, physical security, information security) that provide clear guidance on the IoT or IoT devices. By addressing these challenges, DOD could better ensure that it is identifying security issues with IoT devices and more effectively safeguarding and maintaining the security of DOD information. The Under Secretary of Defense for Intelligence, in coordination with the DOD Chief Information Officer; the Under Secretaries of Defense for Policy; Acquisition, Technology, and Logistics; and Personnel and Readiness; and with military service and agency stakeholders, should conduct operations security surveys that identify IoT security risks and protect DOD information and operations, in accordance with DOD guidance, or address operations security risks posed by IoT devices through other DOD risk assessments. The Principal Cyber Advisor, in coordination with the DOD Chief Information Officer; the Under Secretaries of Defense for Policy; Intelligence; Acquisition, Technology, and Logistics; and Personnel and Readiness; and with military service and agency stakeholders, should (1) review and assess existing departmental security policies and guidance—on cybersecurity, operations security, physical security, and information security—that may affect IoT devices; and (2) identify areas where new DOD policies and guidance may be needed—including for specific IoT devices, applications, or procedures—and where existing security policies and guidance can be updated to address IoT security concerns. 
We provided a draft of this report to DOD and the Office of the Director of National Intelligence. DOD provided written comments, in which it concurred with our two recommendations. DOD’s written comments are reprinted in their entirety in appendix II. DOD also provided technical comments, which we incorporated into the report where appropriate. The Office of the Director of National Intelligence did not provide technical comments. DOD concurred with our recommendation to conduct operations security surveys that identify IoT security risks and protect DOD information and operations, in accordance with DOD guidance, or address operations security risks posed by IoT devices through other DOD risk assessments. The department stated that it will take action in accordance with its existing policies for operations security. DOD concurred with our recommendation to review and assess existing departmental security policies and guidance—on cybersecurity, operations security, physical security, and information security—that may affect IoT devices; and to identify areas where new DOD policies and guidance may be needed—including for specific IoT devices, applications, or procedures—and where existing security policies and guidance can be updated to address IoT security concerns. The department stated that it has already begun work in this area and should complete a review of its policies and guidance affected by IoT by the end of the fourth quarter of fiscal year 2017. DOD also stated that updates to address IoT will be done as part of the department’s policy update process. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Intelligence, DOD’s Principal Cyber Advisor, the Under Secretaries of Defense for Policy; Acquisition, Technology, and Logistics; and Personnel and Readiness; DOD’s Chief Information Officer, and the Director of National Intelligence. 
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9971 or kirschbaumj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The objectives of this report were to (1) address the extent to which the Department of Defense (DOD) has identified and assessed security risks related to Internet of Things (IoT) devices; (2) assess the extent to which DOD has developed policies and guidance related to IoT devices; and (3) describe other actions DOD has taken to address security risks related to IoT devices. The scope of this review includes a range of IoT devices, to include wearable fitness devices, portable electronic devices, smartphones, and infrastructure devices, but it excludes weapon systems—such as airplanes and tanks—and intelligence, surveillance, and reconnaissance networks, which could be described as an example of the IoT. In addition, we assessed IoT devices and their related security challenges, and we excluded from our review the back-end computing and analytic infrastructure, such as computer servers, that can store and process IoT device data. To address the extent to which DOD has identified and assessed security risks related to IoT devices, we reviewed DOD reports on IoT, including reports from the Defense Science Board, the Office of the DOD Chief Information Officer, the Defense Intelligence Agency, and the Joint Staff, that identified broad security risks with IoT devices. 
We also interviewed officials from a number of organizations—including the Office of the Secretary of Defense, Joint Force Headquarters-DOD Information Networks, the military services, the Defense Information Systems Agency, the National Security Agency, the Defense Intelligence Agency, and the Defense Advanced Research Projects Agency—to identify key security risks associated with IoT devices. After these interviews and reviews, we grouped the identified risks into common categories. We examined DOD notional threat scenarios that depict consequences ensuing from compromised IoT devices. Officials in the Office of the Secretary of Defense, the Navy, the Defense Information Systems Agency, and Joint Force Headquarters-DOD Information Networks developed these scenarios. Through our interviews with organization officials, we identified various types of risk assessments that may address security risks related to IoT devices. We reviewed the focus areas of these assessments, identified whether they examined IoT devices, and compared the assessments against DOD criteria. We collected and analyzed a non-generalizable sample of these assessments. For the mission assurance assessments, we requested a sample of documents from the services and received and reviewed a total of 11 mission assurance assessments—2 from the Army, 2 from the Navy, and 7 from the Marine Corps. With respect to intelligence assessments, we requested and received 1 assessment from the Defense Intelligence Agency and 1 from the Office of the Director of National Intelligence—documenting the challenges related to the IoT. To assess the extent to which DOD has developed policies and guidance related to IoT devices, we interviewed officials from the Office of the Secretary of Defense, the Joint Staff, the military services, the Defense Information Systems Agency, U.S. 
Cyber Command, the Defense Intelligence Agency, the National Security Agency, and the Defense Logistics Agency to identify current policies and guidance applying to a range of IoT devices, including wearable fitness devices, portable electronic devices, smartphones, and infrastructure devices. We reviewed these policies and guidance—including the DOD Chief Information Officer’s DOD Policy Recommendations for the Internet of Things (IoT)—and identified their general characteristics, applicability, and focus areas. When we interviewed officials from the organizations noted above, we also asked them whether there are any gaps in policies and guidance for IoT devices, applications, or procedures. We compiled their responses to identify a few commonly cited policy and guidance gaps where security risks may not be addressed. Additionally, we reviewed core DOD security policy documents on cybersecurity, operations security, physical security, and information security (see table 2 in the report) to assess whether these documents addressed IoT devices or security risks associated with IoT devices. We used relevant search terms such as “device,” “capabilities,” and “threat” to make these assessments. Federal internal control standards require that management evaluate security threats to information technology and periodically review policies and procedures for continued effectiveness in addressing related risks, so we asked officials whether the department was addressing risks related to IoT devices. 
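The keyword review described above can be reproduced with a simple text scan. The following is an illustrative sketch only; the function name, sample text, and use of plain substring counting are our assumptions, not the actual review tooling.

```python
# Sketch of a keyword review: count how often IoT-related search terms
# appear in a policy document's text. Substring matching is used so that
# plurals such as "devices" and "threats" are also counted.
SEARCH_TERMS = ["device", "capabilities", "threat"]  # terms from the review

def term_hits(text, terms=SEARCH_TERMS):
    """Return a case-insensitive substring count for each search term."""
    lowered = text.lower()
    return {term: lowered.count(term) for term in terms}

# Hypothetical excerpt standing in for a policy document.
sample = ("This instruction governs portable electronic devices and the "
          "wireless capabilities they provide; it does not discuss threats "
          "specific to Internet of Things devices.")
print(term_hits(sample))  # prints {'device': 2, 'capabilities': 1, 'threat': 1}
```

A document whose counts are all zero for such terms would be a candidate for the "does not address IoT devices" finding, pending a manual read of the text.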
To describe other actions DOD has taken to address security risks related to IoT devices, we interviewed officials from a number of organizations—including the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Office of the DOD Chief Information Officer; the National Security Agency; the Defense Advanced Research Projects Agency; the Defense Intelligence Agency; and the military services—and collected documents to identify and describe ongoing efforts and actions to address and mitigate security risks relating to IoT devices. We grouped the ongoing efforts they identified into categories, such as research, inventory tasks, forums, and the development of use cases. Due to the limited number of ongoing efforts directly tied to the IoT that we could identify, we developed a small number of categories—which captured all of these efforts—by distinguishing among the primary focuses of these efforts. These focus areas included long-term knowledge building, information collection on assets, intra-departmental collaboration, and the development of threat scenarios or environments. To address our reporting objectives, we reviewed relevant documents and interviewed knowledgeable officials from the DOD organizations and offices identified in table 3. We also interviewed officials from three non-DOD organizations: the Office of the Director of National Intelligence, the Internet Society, and the National Institute of Standards and Technology. We interviewed the Office of the Director of National Intelligence to gain a non-DOD intelligence community perspective on cyber issues related to IoT devices. We also interviewed the Internet Society to collect insights on IoT issues from a non-governmental organization. Lastly, we interviewed the National Institute of Standards and Technology because it has issued a number of cybersecurity documents, including those that apply to IoT devices. 
We conducted this performance audit from June 2016 to July 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, key contributors to this report were Tommy Baril (Assistant Director), Ivelisse Aviles, Tracy Barnes, John Beauchamp, Jennifer Beddor, Robert Breitbeil, Jennifer Cheung, Amie Lesser, and Cheryl Weissman. GAO, Technology Assessment: Internet of Things: Status and Implications of an Increasingly Connected World, GAO-17-75 (Washington, D.C.: May 15, 2017). GAO, Data and Analytics Innovation: Emerging Opportunities and Challenges, GAO-16-659SP (Washington, D.C.: Sep. 20, 2016). GAO, Defense Infrastructure: Improvements in DOD Reporting and Cybersecurity Implementation Needed to Enhance Utility Resilience Planning, GAO-15-749 (Washington, D.C.: July 23, 2015). GAO, Vehicle Cybersecurity: DOT and Industry Have Efforts Under Way, but DOT Needs to Define Its Role in Responding to a Real-world Attack, GAO-16-350 (Washington, D.C.: Mar. 24, 2016).

Congress included provisions in reports associated with two separate statutes for GAO to assess the IoT-associated security challenges faced by DOD. This report (1) addresses the extent to which DOD has identified and assessed security risks related to IoT devices, (2) assesses the extent to which DOD has developed policies and guidance related to IoT devices, and (3) describes other actions DOD has taken to address security risks related to IoT devices. GAO reviewed reports and interviewed DOD officials to identify risks and threats of IoT devices faced by DOD. 
GAO also interviewed DOD officials to identify risk assessments that may address IoT devices and examined their focus areas. GAO further reviewed current policies and guidance DOD uses for IoT devices and interviewed officials to identify any gaps in policies and guidance where security risks may not be addressed. The Internet of Things (IoT) is the set of Internet-capable devices, such as wearable fitness devices and smartphones, that interact with the physical environment and typically contain elements for sensing, communicating, processing, and actuating. Even as the IoT creates many benefits, it is important to acknowledge its emerging security implications. The Department of Defense (DOD) has identified numerous security risks with IoT devices and conducted some assessments that examined such security risks, such as infrastructure-related and intelligence assessments. Risks with IoT devices can generally be divided into risks with the devices themselves and risks with how they are used. For example, risks with the devices include limited encryption and a limited ability to patch or upgrade devices. Risks with how they are used—operational risks—include insider threats and unauthorized communication of information to third parties. DOD has developed IoT threat scenarios involving intelligence collection and the endangerment of senior DOD leadership—scenarios that incorporate IoT security risks (see figure). Although DOD has begun to examine security risks of IoT devices through its infrastructure-related and intelligence assessments, the department has not conducted required assessments related to the security of its operations. DOD has issued policies and guidance for IoT devices, including personal wearable fitness devices, portable electronic devices, smartphones, and infrastructure devices associated with industrial control systems. However, GAO found that these policies and guidance do not clearly address some security risks relating to IoT devices. 
First, current DOD policies and guidance are insufficient for certain DOD-acquired IoT devices, such as smart televisions in unsecure areas, and IoT device applications. Second, DOD policies and guidance on cybersecurity, operations security, information security, and physical security do not address IoT devices. Lastly, DOD does not have a policy directing its components to implement existing security procedures on industrial control systems—including IoT devices. Updates to DOD policies and guidance would likely enhance the safeguarding and securing of DOD information from IoT devices. This is an unclassified version of a sensitive report GAO issued in June 2017. GAO recommends that DOD (1) conduct operations security surveys that could address IoT security risks or address operations security risks posed by IoT devices through other DOD risk assessments; and (2) review and assess its security policies and guidance affecting IoT devices and identify areas, if any, where new DOD policies may be needed or where guidance should be updated. DOD reviewed a draft of this report and concurs with GAO's recommendations. |
In 1999, the Congress enacted the D.C. College Access Act for the purpose of expanding higher education opportunities for college-bound D.C. residents in an effort to stabilize D.C.’s population and tax base. The act created the D.C. TAG Program, a residency-based tuition subsidy program, which allows D.C. residents to attend participating public universities and colleges nationwide at in-state tuition rates. The University of the District of Columbia (UDC) is not eligible to participate in the TAG Program because in-state tuition rates are already available there for D.C. residents. The TAG Program also provides smaller grants for students to attend private institutions in the D.C. metropolitan area and private HBCUs in Maryland and Virginia. An eligible institution may participate in the grant program only if the institution has formally signed a Program Participation Agreement with the mayor of the District of Columbia. Students attending a participating public institution can receive a tuition subsidy of up to $10,000 per year (calculated as the difference between in-state and out-of-state tuition rates), with a total cap of $50,000 per student. D.C. residents attending private institutions in the D.C. metropolitan area and private HBCUs in Maryland and Virginia may receive an annual grant award of up to $2,500 per year, with a total cap of $12,500 per student. The grant funding can be applied only to a student’s tuition and fee costs and must not supplant other grant funding that the student is eligible to receive. As a result, the tuition assistance grant must be considered the final or “last dollar” added to a student’s financial aid package. Since the grant can be applied only to tuition and fees, other costs associated with college attendance, such as room and board fees and transportation costs, must be paid by other means. The D.C. government received $17 million in each of fiscal years 2000 and 2001 to implement the grant program and to provide grants to qualified applicants. 
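The award rules above (annual cap, lifetime cap, and the "last dollar" restriction to tuition and fees) amount to a short calculation. The sketch below is purely illustrative; the function, parameter names, and example dollar figures are our assumptions, not software or figures used by the TAG Program office.

```python
def tag_award(in_state, out_of_state, tuition_and_fees, other_grants,
              awarded_to_date, public=True):
    """Sketch of the TAG award rules described in the act."""
    annual_cap = 10_000 if public else 2_500      # per-year maximum
    lifetime_cap = 50_000 if public else 12_500   # per-student maximum
    # Public awards cover the gap between out-of-state and in-state rates.
    base = (out_of_state - in_state) if public else annual_cap
    award = min(base, annual_cap)
    # "Last dollar": the grant may cover only tuition and fees remaining
    # after other grant aid, and must not supplant that aid.
    award = min(award, max(tuition_and_fees - other_grants, 0))
    # Apply the lifetime cap per student.
    award = min(award, max(lifetime_cap - awarded_to_date, 0))
    return max(award, 0)

# Hypothetical student at a public university: $4,000 in-state vs. $16,000
# out-of-state tuition, $3,000 in other grant aid toward $16,000 in
# tuition and fees, with no prior TAG awards.
print(tag_award(4_000, 16_000, 16_000, 3_000, 0))  # prints 10000
```

In this example the $12,000 rate difference is capped at the $10,000 annual maximum; room, board, and transportation are untouched because the grant applies only to tuition and fees.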
As of August 2001, the TAG Program had disbursed approximately $11 million for grants and administration. Consequently, the D.C. government maintained a grant balance of approximately $23 million. The act (P.L. 106-98) states that the funding shall remain available until expended. The TAG Program office engaged in a variety of publicity and outreach efforts to both D.C. residents and eligible institutions to promote the TAG Program in its first year of operation. Efforts to inform potential applicants about the TAG Program included staff visits to public and private high schools in D.C., information about the program mailed to every D.C. public high school senior, radio advertisements, and marketing posters at subway and bus stations around the city. TAG Program staff also worked with staff at the D.C. College Access Program (D.C. CAP) to provide information to D.C. public schools about the grant. The D.C. CAP is a nonprofit organization, funded by a consortium of 17 private sector companies and foundations, whose intent is to complement the TAG Program by encouraging D.C. public high school students to enter and graduate from college. D.C. CAP provides D.C. public school students with support services both before and during college, including placing college advisors in each public high school beginning in academic year 2000-01, assisting students with college and financial aid applications, and providing both information resources at D.C. public high schools and educational planning workshops for students and parents. TAG Program staff provided training and information about the grant to D.C. CAP college advisors. In order to inform eligible institutions about the grant program, staff mailed information to the president and financial aid officer of each public institution and eligible private institution. 
In addition, the Secretary of Education sent a letter to each chief executive officer of public higher education undergraduate institutions nationwide in July 2000, providing information about the grant program and urging institutions to sign a Program Participation Agreement with the mayor of the District of Columbia. Currently, if a grant-eligible applicant decides to attend an eligible but nonparticipating institution, the TAG Program staff contact the institution and provide information on the program as well as on the participation agreement. However, according to the TAG Program director, the applicant and his or her family often play a vital role in persuading the institution to sign an agreement with the program. In order to be eligible for the grant, an applicant must meet certain criteria, including graduation from any high school or attainment of a secondary school equivalency diploma after January 1998 and enrollment or acceptance for enrollment in an undergraduate program at an eligible institution. Applicants must also be domiciled in D.C. for 12 consecutive months prior to the start of their freshman year of college and must continue to maintain their primary residence in D.C. throughout the grant period. In academic year 2000-01, approximately 3,500 individuals applied for the grant and 70 percent, or approximately 2,500 individuals, met the eligibility criteria. Twenty-two percent of the applicants, on the other hand, were found ineligible for the grant, and about 8 percent of the applications were pending or inactive at the time of our review. The reasons for which applicants were found ineligible include not meeting the statutory requirements pertaining to graduation and domicile. All of the wards in D.C. were represented in the applicant pool. Although D.C. comprises 8 wards, most of the applicants resided in wards 4, 5, and 7, which are located primarily in the northeast and southeast quadrants of D.C. 
The greatest percentage of college-age residents applying for the grant came from these three wards. Figure 1 shows the percentage of college-age residents in each D.C. ward that applied for the grant. About 1,900 eligible applicants used the grant to attend 152 participating public and private institutions in academic year 2000-01. Almost half of the applicants came directly from high school, with nearly 70 percent of the applicants who recently graduated from high school coming from a D.C. public high school. The remaining applicants were already enrolled in college. Approximately 97 percent of the grant recipients for whom data were available enrolled in college full-time. Eighty-six percent of TAG recipients attended a 4-year institution, and 14 percent attended a 2-year college. Seventy-six percent of the eligible applicants who used the grant attended a public institution, with an average grant per fall and spring semester of nearly $2,900, whereas the remaining 24 percent attended a private institution, with an average grant per fall and spring semester of approximately $1,200. Overall, 18 percent of the applicants attended an open-admission institution, and almost 40 percent enrolled at a public or private historically black college or university (HBCU). Figure 2 provides more detailed information on the number of TAG recipients who attended college in each state in academic year 2000-01. Initially, the act included only public institutions and private HBCUs in Maryland and Virginia, as well as private institutions in the D.C. metropolitan area, as eligible to participate in the TAG Program. In May 2000, the program was expanded to include all public colleges and universities nationwide. Although all of these colleges and universities are eligible to participate in the program, not all of them do. Currently, 514 public and private institutions have formally agreed to participate. Participating institutions are located in every state, D.C., and Puerto Rico.
Sixty-two participating institutions are located in D.C., Maryland, and Virginia. Appendix II provides a list of the institutions that had signed a participation agreement with the D.C. government as of December 10, 2001. Before the program’s nationwide expansion, the TAG Program office promulgated the initial regulations for administration of the program. In the fall of 2000, four large public institutions—the University of California, the University of Florida, the University of Michigan, and the State University of New York—refused to sign the Program Participation Agreement, claiming that the regulations were overly burdensome. Subsequently, in December 2000, the TAG Program office revised the regulations, and all four institutions signed the agreement. Pending legislation, H.R. 1499, would make changes to the TAG Program, including modifying some of the student eligibility requirements. The bill would expand eligibility for the grant to include D.C. residents who began their college education more than 3 years after graduating from high school and who graduated from high school prior to January 1, 1998, provided that they are currently enrolled in an eligible institution. Eligible applicants would be required to meet the citizenship and immigration requirements currently specified in the Higher Education Act of 1965. The bill would expand the list of eligible institutions to include private HBCUs nationwide. In addition, the bill would require the D.C. government to establish a dedicated account for TAG Program funding and would clarify the use of administrative funding by the program office. The bill passed the House of Representatives in July 2001 and was amended and passed by the Senate in December 2001; the amended bill is currently pending before the House. The Department of Education’s Inspector General (IG) completed an audit of the TAG Program finances in August 2001.
The IG’s audit provided findings in the areas of administrative funding and interest income and made recommendations to address each of these issues. Of the nearly 2,500 applicants who were eligible for the tuition assistance grant, 21 percent—or 516 applicants—did not use the grant in academic year 2000-01, and some of these applicants may have faced barriers due to college entrance requirements and the absence of minority outreach programs. Whether college enrollment caps had any impact on college access for these applicants is unclear. According to the parents who responded to our parent survey, eligible applicants did not use the grant for a variety of reasons, including decisions to postpone college attendance or enroll in an ineligible school and rejection for admission at schools participating in the TAG Program. College entrance requirements may have been a barrier to college access for some eligible applicants who did not use the grant in academic year 2000-01. Entrance requirements vary at postsecondary institutions—from only requiring a high school diploma or equivalent to reviewing a combination of high school GPA, SAT or other college entrance examination scores, and essays. Since data on college entrance requirements were not readily available, we used average freshman high school GPA and SAT scores as a proxy for college entrance requirements. We requested GPA and SAT scores for 290 of the 516 eligible applicants who did not use the grant—those who had recently graduated from a D.C. public high school—from D.C. public school officials and compared these data with high school GPA and SAT scores for entering freshmen at the 62 institutions that the applicants were interested in attending. Although the average high school GPA for entering freshmen at a majority of the 62 institutions was 3.0 or higher, the average GPA for the 183 applicants for whom data were available was 2.36.
Furthermore, whereas the median combined SAT score for the 150 applicants for whom data were available was 735, entering freshmen at a majority of these institutions had median combined SAT scores higher than 735; these institutions reported median combined SAT scores ranging from 800 to 1400. The absence of minority outreach programs at these institutions may have also been a barrier to college access for some of the D.C. public school students who were eligible for, but did not use, the grant. Approximately 97 percent of D.C. public school students are considered members of a racial minority, but outreach programs specifically geared toward minority students existed at only 24 of the non-HBCU institutions that these applicants expressed interest in attending and for which data were available. For example, the University of Arizona’s minority outreach efforts include favorable consideration of minority status in financial aid decisions. At Catholic University of America, outreach efforts include allowing a limited number of talented minority high school seniors to take college courses free of charge. Our survey of all participating institutions, not just those that D.C. public school students were interested in attending, showed that other minority outreach efforts include recruiting visits to high schools with large minority student populations and waiving of out-of-state enrollment cap restrictions for minority applicants. Whether caps on the number of out-of-state residents who can enroll at an institution served as a barrier to college access for these eligible TAG applicants is unclear. Some public postsecondary institutions have policies that limit the percentage of undergraduates who may enroll from outside the state or who may be admitted as freshmen to the institution.
For example, the University of Virginia allows 35 percent of undergraduate students to enroll from outside Virginia, while the University of North Carolina at Chapel Hill caps out-of-state enrollment for undergraduates at 18 percent. Such policies exist at about 21 percent of the 62 institutions for which data were available. The parents of some eligible applicants provided a variety of reasons why the applicants did not use the TAG funding during academic year 2000-01. Of the 213 parents who provided information on eligible applicants, 31 percent indicated that their son or daughter applied to but did not enroll in a college or university, 15 percent indicated that their child decided not to apply to college, and 54 percent indicated that their son or daughter attended a college or university in academic year 2000-01. Most of the grant-eligible applicants who did not use the grant attended institutions that were not eligible to participate in the TAG Program, and their parents indicated that the institution chosen best met their child’s educational or financial needs. Examples of ineligible colleges these applicants attended included the University of the District of Columbia (UDC) and private HBCUs outside D.C., Maryland, or Virginia. Most parents of grant-eligible applicants who applied to but did not enroll in a college indicated that their child either wanted to postpone college or did not enroll due to personal reasons. For example, one parent told us that her daughter delayed college because of the birth of a child, while another parent told us that her son wanted to wait to improve his SAT scores. Fifty-one students were not accepted to an eligible TAG college or university, and 10 of these students were not accepted by any college or university. Because of the low response rate of 42 percent, however, our results cannot be generalized to all of the parents in our survey.
The change in enrollment at UDC during the first year of the TAG Program was minimal, and UDC appears to be serving a different freshman population than the population served by the TAG Program. Fall semester enrollment has remained stable since 1998, and in academic year 2000-01, 18 students left UDC and used the grant funding to attend a TAG-participating college or university. The UDC officials we spoke with believed that the TAG Program would likely have little impact on UDC’s enrollment level, in part because of the diverse student population that UDC serves. UDC enrollment has changed little since the TAG Program began offering grants to D.C. residents. Between the 1999-00 and 2000-01 academic years, total undergraduate enrollment at UDC increased by about 1 percent. As shown in figure 3, UDC enrollment for fall 2000, the first semester that tuition assistance grants were awarded, was 5,008, close to the enrollment for the previous two fall semesters. In addition, entering freshmen enrollment has remained fairly stable over the past 3 years. Freshmen enrollment increased 0.4 percent—from 1,859 to 1,867—between the 1999-00 and 2000-01 academic years. UDC officials we interviewed believed that because the TAG Program was in only its first year, it had not affected enrollment at UDC. They expressed concern, however, that students cannot use the grant to attend UDC and noted that a grant could prove beneficial, because many UDC students rely on financial aid to pay for tuition costs, even though tuition rates are low. In the first year of the TAG Program, fewer than 20 students left UDC to use the tuition assistance grant. Overall, 136 TAG applicants were enrolled at UDC when they applied for the grant. Of that number, only 18 students determined to be eligible for the grant used the funding to attend a school other than UDC in academic year 2000-01.
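The percentage figures cited above can be verified with simple arithmetic. The sketch below is only a verification aid using the numbers reported in this section, not part of the report’s methodology:

```python
# Recompute the percentage figures cited in this section.
freshmen_1999 = 1859   # UDC entering freshmen, academic year 1999-00
freshmen_2000 = 1867   # UDC entering freshmen, academic year 2000-01

pct_change = (freshmen_2000 - freshmen_1999) / freshmen_1999 * 100
print(f"Freshman enrollment change: {pct_change:.1f}%")  # prints 0.4%

# Share of eligible applicants who did not use the grant:
eligible = 2500        # approximate number of eligible applicants
nonusers = 516         # eligible applicants who did not use the grant
print(f"Share not using the grant: {nonusers / eligible:.0%}")  # prints 21%
```

Both results match the rounded figures reported in the text (0.4 percent and 21 percent).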
During academic year 2000-01, the average freshman entering UDC differed markedly from the average TAG recipient entering college as a freshman. For example, the average age of freshmen entering UDC was 29 years, compared with an average age of almost 20 years for TAG recipients entering college as freshmen. In addition, whereas most UDC freshmen were enrolled as part-time students, almost all freshmen who received the tuition assistance grant were enrolled as full-time students. Finally, a higher percentage of TAG freshman recipients graduated from a high school in D.C., Maryland, or Virginia, compared with UDC freshmen. These differences suggest that UDC and the TAG Program draw on different student populations. In fact, the UDC officials we spoke with felt that the impact of the TAG Program would not be large because of the differing groups of college students that UDC and the TAG Program serve. Table 1 shows the profiles of UDC and TAG college freshmen for academic year 2000-01. Although the concerns about administration of the TAG Program initially raised by four large institutions were largely resolved by the revision of the regulations in December 2000, some administrative issues exist that may hinder program operations. Our review of the TAG Program identified issues with the procedure that TAG staff use to determine eligibility for the grant when applicants list on their grant applications only ineligible institutions as schools they are interested in attending. We also found that unclear and potentially misleading information about participating institutions is being disseminated by the TAG Program office both in an informational pamphlet provided to TAG applicants and in letters sent to eligible applicants. Some concerns about the initial TAG Program voiced by four participating institutions have been resolved.
Some officials at these four institutions initially expressed apprehension regarding the institutional requirements contained in the original program regulations. For example, the officials we spoke with at the four institutions felt that program requirements—including the requirements that institutions conduct an annual compliance audit, maintain records that duplicate those held by the TAG Program office, and confirm student eligibility—would be burdensome for their institutions. University officials we spoke with at these institutions indicated that most of their initial concerns were resolved when program regulations were revised in December 2000. In fact, all four institutions have now signed a Program Participation Agreement with the mayor of the District of Columbia, formally agreeing to participate in the grant program. In general, the few remaining administrative concerns mentioned by the university officials we spoke with did not appear to be problematic at the majority of the institutions that enrolled tuition assistance grant recipients in academic year 2000-01. For example, although officials from two of the four universities stated that administering the grant required the time-consuming task of creating a separate financial aid process, officials from 74 percent of the participating institutions that we surveyed indicated that they did not have to create a new process for TAG students. Furthermore, officials from more than half of the participating institutions reported that the administration of the grant did not require additional university staff time. Among those who said that it took longer to administer the grants than to determine financial aid for students not receiving the grants, the majority indicated that the administration process took less than 10 minutes longer.
Some of the university officials we interviewed indicated that the program regulation requiring that their institutions wait to bill the TAG Program office until the end of the drop/add period—sometimes as long as 30 days after the start of classes—resulted in late payment for schools. According to the officials, waiting for grant payments contravenes the practice at many institutions—some of which are bound by state law—of collecting tuition and fees before the first day of class. At the University of California, for example, officials told us that this regulation required that the institution provide a loan to the student to cover tuition costs for the period between the first day of classes and the university’s receipt of the grant funding from the TAG Program office. However, whereas approximately 57 percent of the participating institutions have such a statutory or institutional requirement, nearly 70 percent of the institutions we surveyed stated that similar delays in tuition payments affect students in other grant programs. TAG Program officials said that they will review the possibility of changing the drop/add requirement for academic year 2002-03. In addition, while three of the schools we interviewed initially felt that the record-keeping requirements for the TAG Program were more burdensome than was necessary for a relatively small program, more than two-thirds of the participating institutions indicated that the record keeping was not significantly different from that for other financial aid programs they administer. In the first year of the grant program, some applicants who were found ineligible for the grant did not receive a full and consistent review of their eligibility factors by TAG staff. Nearly half of all applicants who were deemed ineligible were found so because they listed on their grant applications only ineligible institutions as schools they were likely to attend.
TAG staff told us that because of the volume of grant applications received in the first year, the staff did not verify all eligibility factors for applicants listing only ineligible institutions on their applications. TAG staff stated that these applicants were sent a letter of ineligibility solely on the basis of the applicants’ listing of ineligible schools on their applications. According to TAG staff, they informed the applicants by telephone that because the institutions they listed were ineligible for the grant program, the applicants would receive a letter of ineligibility for the grant. From the applicants who were deemed ineligible because they listed ineligible institutions, we randomly selected 75 files to review in depth. Our review indicated that the TAG staff might not have checked the domicile criterion for 55 percent of applicants or the graduation criterion for 11 percent of applicants. Furthermore, our review showed that for nearly 40 percent of applicants, no record existed of their being contacted by telephone. For the current year of the grant program—academic year 2001-02—TAG staff members have indicated that they will discontinue their attempts to contact by telephone those applicants who list only ineligible institutions. Instead, these applicants will automatically receive ineligibility letters. In addition, the TAG Program office is disseminating unclear and misleading information to potential applicants regarding which postsecondary institutions have agreed to participate in the grant program. The TAG Program office provides potential applicants with a pamphlet that is meant to inform the applicant as to which colleges and universities he or she can attend with the grant. 
However, this pamphlet lists approximately 2,000 postsecondary institutions as “participating,” even though just 514 of these institutions have formally agreed to participate in the grant program by signing a Program Participation Agreement with the mayor of the District of Columbia. According to the TAG Program director, this pamphlet lists all of the institutions that are eligible to participate in the TAG Program—rather than just those that have agreed to participate—to provide applicants with information on the full range of institutions they could theoretically attend with the grant. The director felt that listing only the participating institutions might discourage individuals from applying for the grant. Misleading information is also provided to grant-eligible TAG applicants in the award letter. This letter is to be either sent or taken as proof of grant eligibility to the college or university the eligible applicant decides to attend. However, the letter states that the TAG Program office will pay tuition “at any U.S. public college or university that you attend,” without informing the applicant that not all of these institutions have agreed to participate in the TAG Program. Therefore, an applicant choosing to attend an institution that is eligible but not currently participating may experience difficulty or delay in receiving the grant because of the time it could take to convince the institution to participate in the program, which may not happen until after the applicant has enrolled at the institution. In addition, eligible applicants who, for example, list one eligible institution and one ineligible institution on their grant application receive a standard letter of eligibility, which does not inform them that one of the institutions is not eligible for the grant. Such applicants may therefore not be aware that they will not receive the grant if they choose to attend the ineligible institution listed on their grant application.
The TAG Program director believes that the letter sent to applicants is clear in that it states that the grant can only be used at eligible institutions. TAG Program officials said that they are currently reviewing TAG Program operations and procedures. Since the establishment of the TAG Program, D.C. residents have had more resources available to attend college if they choose an eligible institution that agrees to participate in the grant program. However, although the TAG Program’s purpose is to expand higher education opportunities for D.C. residents, a few of the program’s procedures may inadvertently discourage and hinder some D.C. residents from receiving grant money. The practice of determining that applicants are ineligible when they list only ineligible institutions on their grant applications could deny applicants who meet the student eligibility requirements the resources that they need for college solely because of the institutions they expressed an interest in attending. This practice is also troublesome given that at the time applicants submit their grant applications to the TAG office, they are not required to have enrolled at or even submitted a college application to the postsecondary institutions they list on their applications. In addition, because the award letter and pamphlet do not clearly notify applicants that an institution in which they are interested is ineligible or not participating in the TAG Program, applicants may be confused when they choose to attend such institutions. These factors could lead to frustration among applicants and may cause some D.C. residents to discontinue their efforts to obtain grant assistance to attend a postsecondary institution.
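A full eligibility review of the kind discussed above amounts to checking every statutory criterion independently, so that an applicant’s school choices never short-circuit the graduation and domicile checks. The sketch below is purely illustrative; the field names and simplified rules are our assumptions, not the TAG office’s actual procedure or code:

```python
from datetime import date

# Graduation or GED must have occurred after January 1998 (simplified cutoff).
CUTOFF = date(1998, 1, 1)

def full_review(applicant, participating_schools):
    """Check every criterion, even when no listed school participates.

    Hypothetical fields: grad_date, months_dc_domicile, schools.
    """
    return {
        "graduation": applicant["grad_date"] > CUTOFF,
        "domicile": applicant["months_dc_domicile"] >= 12,
        "school": any(s in participating_schools for s in applicant["schools"]),
    }

applicant = {
    "grad_date": date(2000, 6, 15),
    "months_dc_domicile": 14,
    "schools": ["Ineligible College"],  # hypothetical name
}
review = full_review(applicant, {"State University"})
# The applicant meets the personal criteria even though no listed school
# participates, so a letter could say exactly which factor fell short.
print(review)  # {'graduation': True, 'domicile': True, 'school': False}
```

Reporting each factor separately, rather than stopping at the first listed ineligible school, is what distinguishes a full review from the shortcut described in the findings.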
We recommend that the mayor of the District of Columbia direct the TAG Program office to take two actions. First, the office should change the current applicant eligibility determination process to ensure that (1) all applicants receive a full review to determine their eligibility to receive the grant, (2) eligible applicants who indicate interest only in ineligible institutions are made aware in their award letters that the institutions listed on their applications are ineligible and that an eligible school must be selected for the applicants to receive the tuition assistance grant, and (3) all letters sent to eligible applicants indicate which institutions have already formally agreed to participate in the grant program. Second, the office should indicate clearly in the pamphlet promoting the TAG Program which eligible postsecondary institutions have already formally agreed to participate in the grant program. We obtained comments on a draft of this report from the U.S. Department of Education, the mayor of the District of Columbia, and UDC. The comments from the mayor and UDC are reproduced in appendixes III and IV, respectively. Education provided only technical clarifications, which we incorporated when appropriate. UDC also provided technical clarifications that we incorporated when appropriate. The mayor of the District of Columbia generally agreed with the findings of our report and concurred with our recommendation that the TAG Program office conduct a full review of all applicants to determine their eligibility to receive the grant. However, as to our recommendation that the TAG Program office clearly indicate to applicants which eligible postsecondary institutions have signed a Program Participation Agreement, the mayor disagreed, stating that advertising only those institutions that have formally agreed to participate would decrease the accessibility of the program.
The mayor stated that students would become discouraged if they saw that the institutions they were interested in attending were not listed in TAG Program literature. Our recommendation, however, does not preclude the TAG Program office from providing applicants a list of all institutions that are potentially eligible to participate in the program, but rather recommends that the TAG Program office separately identify those institutions that have formally agreed to participate. By providing this additional information, we believe that potential applicants will be better informed about the status of the postsecondary institutions they are interested in attending. We do not believe that this additional information would discourage D.C. residents from applying for the grant program, and it may avoid confusion for those eligible applicants who choose to apply to currently nonparticipating institutions. Finally, the mayor disagreed with the title of the report, commenting that the title is not borne out by the contents of the report. We changed the title to address his concerns. Many of the comments made by UDC were related to the potential impact of the TAG Program on UDC and the funding levels of the TAG Program. UDC stated that although enrollment levels have not significantly changed as a result of the implementation of the TAG Program, UDC officials believe the TAG Program may have affected the quality of the entering freshmen at UDC and that the institution is losing some of the better-prepared college-bound students in D.C. to institutions that are participating in the TAG Program. While we recognize the importance of analyzing student quality, such an analysis was outside the scope of the mandate and the request. UDC further believes that the reporting of the average age and enrollment status of UDC freshmen does not tell the complete story of the type of student that is served by the institution.
UDC officials stated that UDC students range in age from 17 to 55 years and that most students must work full-time to meet personal and family responsibilities. We focused our comparison of UDC and TAG Program freshmen on average student age, enrollment status, and location of the high school the student graduated from because these were among the only data available from both UDC and the TAG Program that allowed a direct comparison of the types of students that each was serving. UDC officials also provided updated data on the location of high schools attended by UDC entering freshmen, which we incorporated. Regarding the funding of the TAG Program, UDC believed that an examination of the funding levels for the TAG Program was needed and suggested that any unused funding for the TAG Program could be reallocated to UDC to enhance education programs and scholarships for UDC students. In addition, UDC commented that further examination of various aspects of the TAG Program was necessary, including an analysis of graduation outcomes for TAG Program participants, the impact of the TAG Program on the quality of UDC students and UDC’s programs and services, as well as the financial impact of the TAG Program on D.C. residents. While we recognize that these issues are important, they were not within the scope of the mandate or the request. We are sending copies of this report to the House Committee on Government Reform, the Senate Committee on Governmental Affairs, and other interested committees; the Secretary of Education; and other interested parties. Copies will also be made available to others upon request. Please contact me at (202) 512-8403 or Diana Pietrowiak, Assistant Director, at (202) 512-6239 if you or your staff have any questions concerning this report. Other GAO contacts and staff acknowledgments are listed in appendix V. A variety of data sources allowed us to examine different aspects of the D.C. Tuition Assistance Grant (TAG) Program.
We wanted to explore several issues, such as the extent to which TAG-eligible applicants who did not use the tuition assistance grant faced barriers to college access, how student enrollment at the University of the District of Columbia (UDC) has changed since the TAG Program began, whether UDC and TAG serve similar freshmen populations, and whether there are program administration issues that could potentially hinder the TAG Program operations. We selected data sources that would allow us to examine these issues. To review and summarize general information on TAG applicants, we obtained a database from the TAG Program office listing applicant data, such as name of high school attended, year of college enrollment, and date of birth. These data, which we did not verify, represent the only information available on TAG applicants. To determine whether eligible applicants who did not use the tuition assistance grant may have faced barriers to college access, we obtained data from the TAG Program office on applicants who applied and were found eligible for the grant, but did not use the grant in academic year 2000-01. We then analyzed the academic qualifications of some of these eligible applicants and compared these data with similar data on average freshmen at the postsecondary institutions they listed on their TAG applications as colleges they would most likely attend. To do this, we requested the grade point average (GPA) and Scholastic Aptitude Test (SAT) scores for 290 of the eligible applicants—those who had recently graduated from a D.C. public high school—from D.C. public school officials and obtained data for some of these graduates. We compared the available data on the D.C. public school students to GPA and SAT data we obtained for average freshmen at the 62 institutions these applicants were interested in attending from Barron’s Profiles of American Colleges, 2001; Peterson’s 4 Year Colleges, 2001; and Peterson’s 2 Year Colleges, 2001. 
To determine whether access barriers may have existed at the 62 institutions, we obtained data on the presence of minority outreach programs and the use of out-of-state enrollment caps from a college survey that we developed as part of our review. To further identify barriers to college access, we sought to determine why the eligible applicants did not use the grant. To do this, we developed and administered a survey to the parents of all 516 eligible applicants who did not participate in the TAG Program. We chose to survey parents rather than the eligible applicants because current contact information for the parents was readily available. We received responses from 42 percent of the parents surveyed, and from these responses we obtained general information on the reasons these applicants did not use the tuition assistance grant. To obtain information on how student enrollment at UDC changed during the initial year of the TAG Program and what types of students UDC and TAG serve, we obtained student data from UDC, including enrollment numbers, age, enrollment status, and information on high schools from which UDC students graduated. To compare the average UDC student with the average TAG recipient, we analyzed data for TAG recipients who entered their freshman year of college in academic year 2000-01, including their age, enrollment status, and high schools attended. To determine whether program administration issues exist that could potentially hinder program operations, we interviewed the four financial aid directors from the institutions that initially voiced concerns regarding the administration of the TAG Program—the University of California, the University of Florida, the University of Michigan, and the State University of New York. We also conducted a survey of 140 institutions that administered the grant in academic year 2000-01. We received responses from 84 percent of the institutions in our survey.
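The academic-qualifications comparison described in this appendix amounts to matching each applicant’s GPA and SAT score against the entering-freshman profile of each institution the applicant listed. A minimal sketch of that comparison follows; all names and numbers here are made up for illustration (the report’s actual figures came from D.C. public school records and the Barron’s and Peterson’s college guides):

```python
# Hypothetical applicant and institution data, for illustration only.
applicants = [
    {"name": "A", "gpa": 2.4, "sat": 730, "schools": ["U1", "U2"]},
    {"name": "B", "gpa": 3.2, "sat": 1100, "schools": ["U2"]},
]
freshman_profiles = {  # average entering-freshman GPA and median SAT
    "U1": {"gpa": 3.0, "sat": 1050},
    "U2": {"gpa": 2.5, "sat": 800},
}

def below_profile(applicant):
    """Listed schools where the applicant trails both freshman figures."""
    return [s for s in applicant["schools"]
            if applicant["gpa"] < freshman_profiles[s]["gpa"]
            and applicant["sat"] < freshman_profiles[s]["sat"]]

for a in applicants:
    print(a["name"], below_profile(a))
```

Applicant A trails the freshman profile at both listed schools, while applicant B trails at neither, which is the kind of gap the report uses as a proxy for an entrance-requirement barrier.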
In addition, to develop an understanding of the program operations and procedures, we interviewed managers and staff of the TAG Program office as well as officials in the office of the D.C. Chief Financial Officer. We also interviewed U.S. Department of Education officials to obtain their views on the TAG Program. Furthermore, we reviewed 75 randomly selected files of ineligible applicants to determine whether TAG officials had conducted a full eligibility review of applicants who had listed ineligible colleges or universities on their applications. In addition to those named above, the following individuals made important contributions to this report: Cathy Hurley, Ben Jordan, James Rebbe, Jay Smale, and James P. Wright.

Twenty-one percent of grant-eligible applicants who did not use the District of Columbia's tuition assistance grant (TAG) funding to attend a participating college or university may have encountered such barriers as college entrance requirements and the absence of minority outreach programs. Whether enrollment caps at colleges posed a barrier for applicants is unclear. In the program's first year, 516 of the nearly 2,500 eligible applicants did not use the grants. About 21 percent of the institutions in which applicants expressed interest restrict the number of out-of-state students that they will accept, although the extent to which this played a role in limiting access to these institutions is unclear. Enrollment at the University of the District of Columbia (UDC) changed little during the TAG program's first year. The TAG program and UDC appeared to serve different freshmen populations, which may account for the TAG program's minimal impact on UDC enrollment.
Although concerns about TAG program administration were largely resolved with the revision of program regulations in December 2000, other administrative issues may hinder program operations, including the determination of applicant eligibility and the distribution of information on institutions participating in the program.
The Postal Reorganization Act of 1970 granted the Postal Service its independent status and allowed the Service to develop its own purchasing rules and regulations. Although it had the authority for greater flexibility, the Service followed prevailing federal practices until 1988, when it adopted a new procurement manual designed to take advantage of the best public and private purchasing practices. Compared with federal purchasing requirements, the Service’s rules were designed to give contracting officers more discretion in meeting the needs of operating customers. For example, the federal policy of “full and open competition,” as required for most federal contracts by the Competition in Contracting Act of 1984, was replaced with a policy of “adequate competition,” and postal contracting officers may limit competition to selected or prequalified offerors. In 1991, we reported that there had been no problems from the adoption of the new purchasing rules but that they also had not been used enough to be declared a success. After the 1986 fraud conviction of a member of the Postal Board of Governors in connection with a major purchase of automation equipment, the Senate and House postal oversight committees requested that we examine Postal Service purchasing practices. We reported that while the Postal Service routinely applied accepted internal controls to deter fraud, this did not guarantee that purchases could not be compromised by collusion or errors in judgment. Total purchases by the Postal Service in 1994 amounted to $4.6 billion, including $2.4 billion for facilities and equipment, and $2.2 billion for transportation. The proposed purchase of a capital or expense (i.e., noncapital) item costing $7.5 million or more is to be reviewed by a Capital Investment Committee, made up of Service executives and the Postmaster General. The Postal Board of Governors must also review proposed capital purchases costing $10 million or more.
Board of Governors’ approval is not required for purchases of expense items, including supplies or services such as transportation. However, if the expense item purchase exceeds $10 million, the Board of Governors is to receive an “information letter” on the purchase. Service officials said that the Board of Governors may review certain purchases because of their significance or unusual nature, regardless of the amount. In 1994, the Board of Governors reviewed 16 projects. Three of the seven purchases we reviewed involved ethics problems. The Postal Service is covered by the Ethics in Government Act of 1978. Under this act, the Service is required to provide an ethics program to implement the act and related governmentwide regulations. The Office of Government Ethics periodically reviews the adequacy of executive agency ethics programs, including the Service. Concerned about the adequacy of the Postal Service’s procurement program, the Chairman of the former House Post Office and Civil Service Committee asked us to determine (1) if previously reported problems with several Postal Service purchases were due to any underlying causes that should be addressed through a legislative solution, and if not, (2) whether additional procedural safeguards could be employed by the Service to minimize future occurrences of such problems. Following the 1995 congressional reorganization and the elimination of the House Post Office and Civil Service Committee, we agreed to report to the newly formed Subcommittee on the Postal Service of the House Government Reform and Oversight Committee. To address the objectives, we reviewed our and other published reports on these purchases, as well as contract files and associated records. We discussed the purchases with the Service’s purchasing personnel and executives and collected general purchasing and performance data. 
To understand the oversight process for major purchases, we obtained information from and interviewed responsible officials at the Board of Governors, the Capital Investment Committee, and the Postal Inspection Service. With the former Assistant Administrator for Acquisition Policy of the General Services Administration, we also discussed recent developments in federal purchasing reform, contract oversight, and compliance. Because three of the seven purchases involved ethics problems, we reviewed recent Office of Government Ethics reports on the Service’s ethics program and discussed the program with Office of Government Ethics officials. The Postmaster General provided written comments on a draft of this report. His comments are discussed on page 11 and reprinted in appendix II. Our work was done between August 1994 and May 1995 in accordance with generally accepted government auditing standards. The problems encountered in the seven purchases we reviewed had various causes, but certain practices recurred. These included officials agreeing to forgo required checks and reviews in the purchase process and failing to resolve conflict-of-interest situations, both real and apparent. Problems with real estate transactions were apparently due to shortcutting important integrity safeguards through a mistaken sense of urgency. Contributing factors included the belief, not always correct, that other parties were interested in the properties or that offers to sell in areas where suitable sites were scarce were good opportunities. For example, in the St. Louis case, space was needed to house a data processing center that was being displaced from the main post office to make room for automation equipment. Because of an internal breakdown in communication, the facilities officials responsible for finding a new site were not aware of the moving date until a few months beforehand.
Outright purchase would normally have been used; however, because capital funds were not available at the time, field real estate specialists arranged to acquire the building through a lease/purchase agreement. The Capital Investment Committee approved the project, which was then canceled by the Chairman of that Committee because of the General Counsel’s concerns about the financing arrangements between the Service and the building’s seller. The next day, the real estate specialists were directed to renegotiate the purchase immediately, and without time to prepare, from a lease/purchase to an outright purchase. Congressional and Postal Inspection Service reports on this purchase further disclosed the following: The Capital Investment Committee was not given an opportunity to approve the purchase. The purchase was seriously misrepresented before the Board of Governors, including erroneous information that the Service needed to close the deal quickly because another party was anxious to buy the building. The Postal Service paid $12.5 million to the seller, who had acquired the building for $4 million earlier the same day. In another case, the Postal Service accepted an unsolicited offer for purchase of a building in the Bronx, NY, on the basis that suitable sites in the area were hard to find, and the building presumably could be used as a general mail facility to solve severe mail processing capacity limitations in the area. However, the building was acquired before complete suitability assessments were made. The building was later determined to be unusable for its intended purpose because it did not have sufficient room for automated mail processing equipment. The building is used for Priority Mail and other mail processing from the main post office.
In December 1995, when commenting on our proposed report, Service officials said that the Capital Investment Committee had approved $5 million for design work on the building and that the Postmaster General’s and Board of Governors’ approval would also be requested. When most of the seven purchases occurred, the purchasing function was not organized in a way that fostered contracting officers’ independence, according to a 1993 study by the Logistics Management Institute entitled “Consolidating Postal Contracting,” which was commissioned by the Postal Service. At the time, the contracting function was fragmented into independent groups for purchasing, transportation, and facilities. This structure, according to the study, led to inconsistent accountability over the performance and integrity of the contracting process. The study also found that contracting officers were not sufficiently independent because many of them reported directly to those officials who required the contracted products or services. Not only did this make it extremely difficult for contracting officers to exercise independent judgment and follow Postal Service policies, but the soundness of contracting decisions could be subordinated to their timeliness. No compliance or contract file reviews of major pending purchases were being made in any of the three groups, and contracting personnel in facilities and transportation were inadequately trained to handle their responsibilities. In some cases, contracting was a secondary duty assigned to individuals with other program responsibilities. The study recommended that the Postal Service establish a single purchasing executive, reporting to the Postmaster General, with management authority over the three separate purchasing groups. The study also stated that the new purchasing executive could resolve other weaknesses, such as training and the independence of the contracting workforce. Three of the seven purchases involved ethics violations.
The most severe, discussed below, were two similar instances in which the contracting officer failed to correct situations where individuals had financial relationships with both the Postal Service and certain offerors. In the 1992 award of a 10-year contract for air transportation, a consultant who was helping the Postal Service review the proposals informed the Service that he had a job offer that he might accept from one of the offerors to the solicitation. The Service’s General Counsel advised the contracting officer that the consultant should either decline the job offer or be removed from the evaluation team. The contracting officer instead approved an arrangement whereby the consultant would merely remain out of contact with the offeror until after the contract was awarded. The offeror won the contract, which was then challenged by a losing offeror. The court set aside the contract because of the conflict of interest that existed when the proposals were evaluated. The Service incurred extra costs of $10 million, paid to the original winning offeror for start-up costs incurred, and $8 million annually for a more costly replacement contract. In another case, during the development and purchase of automated barcode sorting systems, the Postal Service first retained a consultant in 1990 for software development. Shortly thereafter, the consultant sought permission from the Service to offer related support to the barcode system supplier. The Service responded by inserting conflict-of-interest clauses into its contract with the consultant that prohibited him from entering into contracts with the system supplier. However, the Service did not enforce the clauses, and the consultant was retained under contract by the supplier. Despite advice from the Service’s General Counsel that the contract with the consultant should not be renewed, the arrangement continued while the Service tested and solicited proposals for upgraded barcode sorters.
A contract was awarded to the same supplier in March 1993. The losing offeror claimed it had been put at a competitive disadvantage and damaged by the dual relationship of the consultant with the Service and the supplier. An arbitration panel agreed and ordered the Service to pay $22.2 million to the losing firm. According to the Office of Government Ethics, the Postal Service’s control of its ethics environment has been of concern. Since 1991, the Office has made three reviews of the Service’s program because of the Service’s persistent problems; typically, executive branch agencies are reviewed once every 5 years. In 1991, the Office reported that its recommendations from a 1987 report had not been implemented although the Service reported that actions had been taken to resolve those deficiencies. Improvements needed were timely collection and review of public financial disclosure statements, revisions to the confidential reporting system, development of a formal ethics education and training program, establishment of a program monitoring system, and additional staff resources. The long-standing problems in the Service’s ethics program were primarily attributed to a lack of strong support by top management and inadequate staff resources. The Office requested that the Service report its progress in correcting the deficiencies by March 1991 and every 60 days until the recommendations were implemented. In 1993, the Office reported that many of its earlier recommendations remained to be acted upon and that, while some progress was being made by ethics officials, overall the Postal Service did not have an effective ethics program. The General Counsel advised the Office in April 1993 that the Service had been unable to devote sufficient resources to the ethics program. As part of an overall downsizing of the Service, headquarters staffing dropped by about 30 percent from August 1992 through April 1993. 
On August 9, 1995, the Office reported that some improvements had been made but that more work was needed to develop an effective program. The Postal Service still had difficulty in administering a program that complied with applicable laws and regulations. All areas of the program were found to require improvement. The Office recommended that the Service ensure that (1) written procedures for administering the public and private financial disclosure systems are prepared as required by the Ethics in Government Act of 1978, (2) disclosure reports are filed in a timely manner, (3) late filing fees are collected or late filers request waivers from the Office, (4) ethics orientation for new employees is improved to comply with the Office’s governmentwide ethics regulations, (5) ethics officials improve their coordination with the Postal Inspection Service about the resolution of conflict-of-interest situations, and (6) the Office is notified about conflict-of-interest violations that are referred to the Department of Justice. In an October 3, 1995, letter to the Office of Government Ethics, the Postal Service’s General Counsel expressed overall agreement with the recommendations and outlined actions taken or planned to address each of the Office’s recommendations to improve the Service’s ethics programs. According to the General Counsel, the preparation of written procedures for financial disclosure was a top priority and would be finished in early 1996. The General Counsel said that a backlog of unreviewed public financial disclosure reports had been eliminated, late-filed reports had been investigated, and procedures for ensuring timely filing of future reports and handling of late filing fees and related waivers were being considered.
Other actions taken included an increase in ethics program staff resources by (1) adding two ethics positions under the General Counsel, (2) designating an ethics coordinator for each headquarters department whose duties include administering training and financial disclosure requirements, and (3) designating 170 ethics resource individuals in field units to handle routine questions. The General Counsel said actions were also taken to improve ethics awareness. These actions included (1) development of an introductory ethics orientation video, which was shown to about 700,000 employees nationwide in 1993 and 1994; (2) distribution of a letter from the Postmaster General to all postal employees in 1993, providing the names and telephone numbers of ethics advisors; and (3) training of up to 7,000 employees who filed financial disclosure reports each year in 1993, 1994, and 1995 to meet Office of Government Ethics regulations. The actions included steps to improve ethics awareness of contracting officers and other employees with significant procurement responsibilities, such as mandatory all-day ethics training for 1,100 such employees in 1993, and 2-1/2 hours of ethics training for the same number in 1994. Regarding the resolution of apparent conflict-of-interest cases, the General Counsel’s office and the Postal Inspection Service agreed to quarterly coordination meetings, and the Postal Service set up an ethics advisory council to help resolve possible conflict-of-interest situations. The General Counsel was not aware of any referrals of conflict-of-interest cases to Justice in the past 3 years. The seven purchases totaled about $1.33 billion. We estimate that the Postal Service expended about $89 million for penalties or unusable and marginally used property, portions of which could be recovered if the properties were leased or sold. 
The expended amount consists of (1) $32 million in penalties to injured parties to compensate them for damages caused by the conflicts of interest during the awards for air transportation and automation equipment; (2) $12.5 million for the St. Louis building, which, as of August 1995, the Postal Service was in the process of trying to lease or sell; (3) $14.7 million for a site in Queens, which is unusable due to contamination; and (4) $29.5 million for the Bronx building, which is essentially unusable for its intended purpose. In November 1993, in response to the previously mentioned 1993 study of purchasing practices, the Postal Service placed the three independent procurement groups under one purchasing executive to ensure more consistent control over purchasing operations. This official has established goals to better train, qualify, and educate contracting professionals to handle more abstract decisionmaking under the greater discretion they are allowed. Recognizing the need for additional review and other processes to reduce errors, the purchasing office plans to adopt additional higher levels of review, including requirements for contracting officers to document the policy and business rationales for particular purchasing decisions. In keeping with presidential initiatives emphasizing performance reviews that focus more on results than on conformance to regulations, the Postal Service’s purchasing office hopes to build better quality into its purchasing cycle. The purchasing office also recognizes the need for additional self-assessments. Details of this approach are still under study, as is how the independence of contracting officers from those with program responsibility will ultimately be defined.
Problems occurred in the purchasing function for the purchases we reviewed mainly because Postal Service officials circumvented internal controls to speed up the purchasing process and failed to adequately deal with known or potential ethics violations. We believe that the changes the Service has made to improve major acquisition integrity are steps in the right direction. The consolidation of the three independent purchasing units under a single responsible purchasing executive should help ensure more consistent management of major purchases, as should the other plans to improve the purchasing process and the training and ethics awareness of purchasing personnel. The Office of Government Ethics’ recommendations, concurred in by the Service, are designed to ensure that improvement in the program continues through more consistent oversight and strong management support. If implemented, the Service’s actions should complement its other initiatives. Even the most well-designed purchasing program can be compromised if officials choose to avoid controls to satisfy perceived operational exigencies, as occurred with many of the purchases that we reviewed. However, we believe that top management’s continued support of these reform initiatives could help improve procurement integrity and help prevent the recurrence of such problems. Responding to our report, the Postmaster General said that the consolidation of purchasing activities in 1993 was a significant step forward. He said that the Service is continuing with a number of improvements, including contracting officer qualification standards, enhanced training programs, improved methods of monitoring major purchases, and renewed emphasis on ethics awareness. He believes that the separation of contracting officers from operational organizations will result in an enhanced awareness of contractual and legal issues, as well as better overall decisionmaking.
The Postmaster General recognized that the purchasing process had been compromised, not because of fundamental defects in the Postal Service’s purchasing policies, but because officials chose to deviate from those policies. He emphasized the need for the Service to have the purchasing flexibility envisioned in the Postal Reorganization Act of 1970 and said that if errors in judgment or flaws in the purchasing methods are discovered, the Service will move rapidly to correct them and prevent any recurrence. A copy of the Postmaster General’s letter of December 18, 1995, is included as appendix II. We are sending copies of this report to the Postmaster General, the Postal Service Board of Governors, and other congressional committees that have responsibilities for Postal Service issues. Copies will also be made available to others upon request. The major contributors to this report are listed in appendix III. If you have any questions about this report, please call me on (202) 512-8387. The desire to secure a site for a general mail facility to resolve long-standing mail processing problems overrode environmental concerns and prudent financial management. Two sites were purchased when only one was needed. The Phelps-Dodge site proved unusable because of hazardous waste contamination, and a provision requiring the seller to clean up the site before transfer of title was removed from the final purchase agreement. This left the Postal Service with a site that it cannot use or sell without additional costs or concessions. Cleanup of the site was suspended in 1987 when contamination was found to be more widespread than expected. The Postal Service was in litigation to get the seller to clean up the site so that it could be sold. The Postal Service accepted an unsolicited offer for this building before fully assessing its suitability and performing a cash flow analysis.
A building was needed to alleviate severe mail processing problems in the area, and reportedly no other such sites or buildings were on the market. The building was subsequently deemed unusable as a general mail facility because it did not have enough room for automated sorting equipment. The building housed Priority Mail processing and other operations from the main post office. As a courtesy, Postal Service officials accepted meals and travel from a German firm affiliated with the successful offeror. These actions violated the law and governmentwide and postal standards of conduct. While the actions created the appearance of a conflict of interest, they were not sufficient to invalidate the award. Equipment delivery under the contract was scheduled to be completed by the end of fiscal year 1997. In selecting the location for this hub, the Postal Service did not give the same weight to the selection criteria that it stated in the solicitation. While the winning location (Indianapolis) was a top competitor for the award, because of this and other deficiencies in the evaluation process, we were unable to determine which competitor would have won if the evaluation had been consistent with the request for proposal. The air hub was in service. A breakdown in the review and approval process for this real estate purchase caused procurement safeguards to be circumvented and many failures to occur. The most notable failure was that the Postal Service paid a real estate development firm $12.5 million for a building that the firm had acquired earlier the same day for $4 million. The building temporarily housed the data processing center. The Service planned to rent or sell the building. Contrary to the advice of the Postal Service’s legal department, the contracting officer failed to resolve a conflict of interest on the part of an individual who helped evaluate the contract proposals and at the same time had a job offer pending from the successful offeror.
As a result of the conflict of interest, the award was set aside by the courts and a replacement contract was awarded to one of the unsuccessful offerors. The Service paid $10 million to the original winning offeror to settle its claim under the contract, which was then set aside. Also, the new contract cost $8 million more annually than the old contract (both were for 10 years). Air service was in operation. The contracting officer failed to correct an apparent conflict-of-interest situation involving an individual who was a technical consultant on this equipment to both the Service and the winning offeror. The dispute was submitted to an arbitration panel, which awarded $22.2 million in damages to the unsuccessful offeror. Final delivery under the contract was scheduled for 1996. POSTAL SERVICE: Decisions to Purchase Two Properties in Queens, New York (GAO/GGD-92-107BR, July 17, 1992). V. Bruce Goddard, Senior Attorney
Pursuant to a congressional request, GAO reviewed whether changes are needed in the Postal Service's purchasing program, focusing on whether: (1) certain problem purchases were due to some underlying causes that should be addressed through legislation; and (2) the Service should implement additional procedural safeguards to minimize future occurrences of such problems. GAO found that: (1) the problems encountered during the seven purchases reviewed were due to Postal officials' poor judgment, circumventions of existing internal controls, and failure to resolve conflicts of interest; (2) many contracting officers could not exercise independent judgment, since they reported directly to those officials who required the products or services; (3) the Service has taken action to increase oversight and accountability over its purchasing process and to safeguard against such future occurrences; (4) in response to recommendations by the Office of Government Ethics, the Service has outlined actions it is taking to improve its ethics program, which should help prevent the recurrence of such purchasing problems; (5) a formal ethics education and training program for contracting officers and personnel is underway; (6) the Service has established one purchasing executive with management authority over the three separate Postal purchasing groups; and (7) the Service plans to adopt a requirement for more explicit documentation of and rationale for contracting officers' business and policy actions.
In 17 of the 31 new areas where agencies may be able to achieve greater efficiency or effectiveness, we found evidence of fragmentation, overlap, or duplication among federal programs or activities. As described in table 1, these programs or activities cover a wide range of federal functions and missions. We consider programs or activities to be fragmented when more than one federal agency (or more than one organization within an agency) is involved in the same broad area of national need and opportunities may exist to improve how the government delivers services. We identified fragmentation in multiple programs we reviewed, including the following: Combat Uniforms: We found that the Department of Defense’s (DOD) fragmented approach to developing and acquiring combat uniforms could be more efficient. Further, DOD has not taken steps to ensure equivalent levels of uniform performance and protection for service members conducting joint military operations in different uniforms, potentially exposing them to increased risk on the battlefield. Since 2002, the military services have shifted from using two camouflage patterns to seven service-specific camouflage uniforms with varying patterns and colors. Although DOD established a board to help ensure collaboration and DOD-wide integration of clothing and textile activities, we continue to identify inefficiencies in DOD’s uniform acquisition approach. For example, we found that none of the services had taken advantage of opportunities to reduce costs through partnering on inventory management or by collaborating to achieve greater standardization among their various camouflage uniforms. We have identified several actions DOD should take to realize potential efficiencies. In addition, DOD reported that it could achieve up to $82 million in development and acquisition cost savings through increased collaboration among the military services.
The actions we identified include directing the Secretaries of the military departments to actively pursue partnerships for the joint development and use of uniforms. Renewable Energy Initiatives: Federal support for wind and solar energy, biofuels, and other renewable energy sources has increased significantly in recent years. Specifically, third-party estimates indicate that federal spending over the 7-year period from 2002 through 2008 averaged about $4 billion per year and increased to almost $15 billion in fiscal year 2010, in part because of additional spending through the American Recovery and Reinvestment Act of 2009. We found that federal support for renewable energy is fragmented, as 23 agencies and their 130 subagencies implemented hundreds of initiatives in fiscal year 2010. We could not comprehensively assess the potential for overlap or duplication among these nearly 700 renewable energy initiatives, because existing agency information was not sufficiently complete to allow for such an assessment. However, fragmentation can be a harbinger of potential overlap or duplication. For example, we assessed federal wind energy initiatives and found that most of the 82 wind-related initiatives that we examined had overlapping characteristics, and several of them have provided duplicative financial support to deploy wind energy projects. Such duplicative federal financial support may not have been needed in all cases for the projects to be built. To help ensure effective use of financial support, we suggested that the Department of Energy and the U.S. Department of Agriculture, to the extent possible within their statutory authority, assess and document whether the financial support of their initiatives is needed when considering applications. In some of the programs and activities where there was fragmentation, we also found instances of overlap.
Overlap occurs when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. We found overlap among federal programs or initiatives in a variety of areas, such as joint veterans and defense health care services, export promotion activities, drug abuse prevention and treatment programs, and veterans’ employment and training programs, as well as the following: Department of Homeland Security Research and Development: Within the Department of Homeland Security (DHS), we found at least six department components involved in research and development activities. We examined 47 research and development contracts awarded by these components and found 35 instances among 29 contracts in which the contracts overlapped with activities conducted elsewhere in the department. Taken together, these 29 contracts were worth about $66 million. In one example of the overlap, we found that two DHS components awarded five separate contracts that each addressed detection of the same chemical. While we did not identify instances of unnecessary duplication among these contracts, DHS has not developed a policy defining who is responsible for coordinating research and development and what processes should be used to coordinate it, and does not have mechanisms to track research and development activities at DHS that could help prevent overlap, fragmentation, or unnecessary duplication. We suggested that developing a policy defining the roles and responsibilities for coordinating research and development, and establishing coordination processes and a mechanism to track all research and development projects, could help DHS mitigate existing fragmentation and overlap and reduce the risk of unnecessary duplication.
Overlap and fragmentation among government programs or activities can lead to duplication, which occurs when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries. Our 2013 report includes several areas where we identified potentially duplicative federal efforts, including the following: Medicaid Program Integrity: We identified duplication in the Medicaid Integrity Program, which provides federal support and oversight of state programs. In particular, the use of two sets of federal contractors in the National Medicaid Audit Program—one contractor to review states’ paid claims in order to identify potential aberrant claims or billing anomalies and another contractor to audit such aberrant claims—increased inefficiencies in data analysis and led to duplication of effort. To address this duplication, we suggested that the Centers for Medicare & Medicaid Services (CMS) merge certain functions of the federal review contractors and federal audit contractors and discontinue the annual state program integrity assessment to eliminate or avoid duplicative activities. Partly in response to our suggestion, CMS is not renewing its federal review contractors when their contracts expire this year, which has the potential for saving $15 million or more. In addition to these 17 areas of fragmentation, overlap, and duplication in federal efforts, we present 14 areas in which we identified opportunities to reduce the cost of government operations or enhance revenue collections for the Treasury. These opportunities for executive branch or congressional action exist in a wide range of federal government missions (see table 2). 
Among the 14 areas of opportunity to reduce costs or enhance revenue identified in our 2013 annual report are the following examples of opportunities for executive branch agencies or Congress to take action to address the issues we reported: Medicare Advantage Quality Bonus Payment Demonstration: We report concerns about CMS’s Medicare Advantage Quality Bonus Payment Demonstration, which is expected to cost $8.35 billion over 10 years, most of which will be paid to plans with average performance. Medicare Advantage provides health care coverage through private health plans offered by organizations under contract with CMS. The agency’s stated research goal for the demonstration is to test whether an alternative bonus structure leads to larger and faster annual quality improvement for Medicare Advantage plans. We found that the demonstration’s design precludes a credible evaluation of its effectiveness because it lacks an appropriate comparison group needed to isolate the demonstration’s effects, and because the demonstration’s bonus payments are based largely on plan performance that predates the demonstration. Based on these concerns, we suggest that the Department of Health and Human Services (HHS) cancel the Medicare Advantage Quality Bonus Payment Demonstration. In addition, the demonstration’s design raises legal concerns about whether it falls within HHS’s demonstration authority. Although the demonstration is now in its second year, HHS still has an opportunity to achieve significant cost savings—about $2 billion, based on GAO’s analysis of CMS actuaries’ estimates—if it cancels the demonstration for 2014. Tax Return Processing: We have also suggested that Congress consider granting the Internal Revenue Service broader authority, with appropriate safeguards against misuse of that authority, to correct errors during tax return processing. These and other actions we have identified could help the federal government increase revenue collections by billions of dollars.
We have previously reported that the government would generate an additional $3.8 billion per year if service and enforcement improvements reduced the tax gap by 1 percent. Department of Energy’s Isotope Program: Opportunities may also exist for the Department of Energy (Energy) to generate additional revenue by increasing the price for isotopes that it sells to commercial customers. Energy’s Isotope Development and Production for Research and Applications program sells isotopes to commercial customers for a variety of uses, such as medical procedures and radiation detection equipment. To achieve its mission, the program relies on annual appropriations and revenues from isotope sales. Although revenues from sales of isotopes alone totaled over $25 million in fiscal year 2012, we found that Energy may be forgoing revenue because it is not using thorough assessments to set prices for commercial isotopes. Thus, we suggested that Energy examine the prices it sets for commercial isotopes to determine if prices can be increased. With the issuance of our 2013 report, we have completed a systematic examination to identify major instances of fragmentation, overlap, or duplication across the federal government. Through our three annual reports, we have identified 162 areas in which there are opportunities to reduce fragmentation, overlap, or duplication or to achieve cost savings or enhance revenue. Within these 162 areas, we have identified approximately 380 actions that the executive branch or Congress could take to address the issues involved. These areas span a wide range of government missions, covering activities within all 15 cabinet-level executive departments and 17 other federal entities (see fig. 2). Collectively, if the actions we suggest are implemented, the government could potentially save tens of billions of dollars annually.
Our 2013 annual report completes our 3-year systematic examination across the federal government to identify major instances of fragmentation, overlap, or duplication. Our systematic examination required a multiphased approach. First, we reviewed the budget functions of the federal government, which together represent nearly all of the overall federal funds obligated in fiscal year 2010. Budget functions classify budget resources by national need (such as National Defense, Energy, or Agriculture), and instances in which multiple federal agencies obligate funds within a particular budget function may indicate potential duplication or cost savings opportunities (see fig. 3). Although this type of analysis cannot answer the question of whether overlap or duplication exists, it can help in the selection of areas for further investigation. Second, we reviewed key agency documents, such as strategic plans, performance and accountability reports, and budget justifications, as we have found that when multiple executive branch agencies have similar missions, goals, or programs, the potential for fragmentation, overlap, or duplication exists. Third, we reviewed key external published sources of information, such as reports published by the Congressional Budget Office, Inspectors General, and the Congressional Research Service, as well as the President’s budgets, to identify potential overlap and duplication among agency missions, goals, and programs. In addition to the new actions identified for our 2013 annual report, we have continued to monitor the progress that the executive branch agencies and Congress have made in addressing the issues we identified in our 2011 and 2012 annual reports. In these reports, we identified approximately 300 actions that the executive branch and Congress could take to achieve greater efficiency and effectiveness. We evaluated progress by determining an “overall assessment” rating for each area and an individual rating for each action within an area (see figures 4 and 5).
We found that the executive branch agencies and Congress have made progress in addressing the 131 areas we identified in 2011 and 2012. As of March 6, 2013, the date we completed our audit work, about 12 percent of the 131 overall areas were addressed; 66 percent were partially addressed; and 21 percent were not addressed. Within these areas, about 21 percent of the approximately 300 individual actions were addressed, 48 percent were partially addressed, and 28 percent were not addressed. According to our analysis, as of March 6, 2013, of the 249 actions identified in 2011 and 2012 that were directed to executive branch agencies, 22 percent were addressed and 57 percent were partially addressed. Examples of the progress that executive branch agencies have made include the following: Overseas Defense Posture: In our 2012 annual report, we suggested the Secretary of Defense direct appropriate organizations within DOD to complete a business case analysis, including an evaluation of alternative courses of action, for the strategic objectives that have to this point driven the decision to implement tour normalization in South Korea—that is, the initiative to extend the tour length of military service members and move their dependents to South Korea. Based on the resulting business case analysis, DOD officials stated that United States Forces Korea determined that the tour normalization initiative was not affordable. This decision not to move forward with the tour normalization initiative resulted in cost avoidance of $3.1 billion from fiscal years 2012 through 2016. Air Force Food Service: In our 2012 annual report, we suggested that the Air Force review and renegotiate food service contracts to better align with the needs of installations. According to Air Force officials, after reviewing the food service contracts at eight installations, the Air Force renegotiated their contracts for a total savings of over $2.5 million per year. 
In addition, according to Air Force officials, all food service contracts were validated again during fiscal year 2012 for additional savings of over $2.2 million per year. Air Force officials told us that the Air Force will review contracts annually for areas where costs can be reduced. Information Technology Investment Management: In our 2012 annual report, we suggested that the Director of the Office of Management and Budget require federal agencies to report the steps they take to ensure that their information technology investments are not duplicative in their annual budget and information technology investment submissions. The Office of Management and Budget’s (OMB) fiscal year 2014 budget guidance requires agencies to identify duplicative or low value investments in information technology and make plans to consolidate or eliminate these investments. Reducing duplicative and low value investments could save millions of dollars. Congress has also taken steps to address some of our suggested actions. As of March 6, 2013, 20 percent of the 50 actions directed to Congress in our 2011 and 2012 annual reports were addressed and 12 percent were partially addressed. Examples of progress that Congress has made include the following: Domestic Ethanol Production: In our 2011 annual report, we suggested that Congress address duplicative federal efforts directed at increasing domestic ethanol production, which could reduce revenue losses by more than $5.7 billion annually. To reduce these revenue losses, we suggested that Congress consider whether revisions to the ethanol tax credit were needed and we suggested options to consider, including allowing the volumetric ethanol excise tax credit to expire at the end of 2011. Congress allowed the tax credit to expire at the end of 2011, which ended the ethanol tax credit for fuel blenders that purchase and blend ethanol with gasoline. 
Surface Transportation: In our 2011 annual report, we suggested that Congress address the need for a more goal-oriented approach to surface transportation that is less fragmented and more accountable for results. Specifically, we found that over the years, in response to changing transportation, environmental, and societal goals, federal surface transportation programs grew in number and complexity to encompass broader goals, more programs, and a variety of program approaches and grant structures. This increasing complexity resulted in a fragmented approach as five Department of Transportation agencies administer over 100 separate programs with separate funding streams for highways, transit, rail, and safety functions. The Moving Ahead for Progress in the 21st Century Act, signed into law in July 2012, reauthorized the nation’s surface transportation programs through the end of fiscal year 2014. The act addressed fragmentation by eliminating or consolidating programs, and made progress in clarifying federal goals and roles and linking federal programs to performance to better ensure accountability for results. While the executive branch and Congress have made some progress in addressing the issues that we have previously identified, additional steps are needed to address the remaining areas to achieve associated benefits. A number of the issues are difficult to address, and implementing many of the actions identified will take time and sustained leadership. To help maintain attention on these issues, we recently launched GAO’s Action Tracker, a publicly accessible website containing the status of actions suggested in our first three reports. The website allows executive branch agencies, Congress, and the public to track the progress the government is making in addressing the issues we have identified. 
We will add areas and suggested actions identified in our 2013 and future annual reports to GAO’s Action Tracker and periodically update the status of all identified areas and activities. The President’s Fiscal Year 2014 Budget submission makes several proposals that appear consistent with our suggested actions. Many of these proposals require some legislative action, and therefore Congress may wish to examine the following areas in its oversight: Science, Technology, Engineering, and Mathematics (STEM): In our 2012 annual report, we found that federal agencies obligated $3.1 billion in fiscal year 2010 to 209 STEM education programs administered by 13 federal agencies, and that 173 (83 percent) of these programs overlapped to some degree with at least 1 other program in that they offered similar services to similar target groups in similar STEM fields to achieve similar objectives. To minimize this overlap, we suggested that executive branch agencies undertake strategic planning to better manage overlapping STEM programs across multiple agencies. In an effort to minimize both fragmentation and overlap in STEM programs, the President’s Fiscal Year 2014 Budget submission proposes consolidating or eliminating 114 programs and redirecting nearly $180 million from consolidated programs to three agencies: Education, the National Science Foundation, and the Smithsonian Institution. These agencies would coordinate efforts with the activities and assets of other federal science agencies. Catfish Inspection: In our 2013 annual report, we found that when the U.S. Department of Agriculture’s (USDA) Food Safety and Inspection Service begins the catfish inspection program as mandated in the Food, Conservation, and Energy Act of 2008, the program will duplicate work already conducted by the Food and Drug Administration and by the National Marine Fisheries Service.
For example, as many as three agencies—the Food and Drug Administration, the Food Safety and Inspection Service, and the National Marine Fisheries Service—could inspect facilities that process both catfish and other types of seafood. To avoid this duplication, we suggest that Congress repeal this provision of the act, which could save millions of dollars each year. The President’s Fiscal Year 2014 Budget submission proposes the elimination of the U.S. Department of Agriculture’s catfish inspection program. Similarly, S. 632 and H.R. 1313, introduced on March 21, 2013, would eliminate USDA’s catfish inspection (and catfish grading) program. As of May 8, 2013, the bills were pending in committees of jurisdiction. Farm Direct Payments: In our 2011 annual report, we found that reducing or eliminating fixed annual payments to farmers—which are known as direct payments and which farmers receive even in years of record farm income—could achieve cost savings of as much as $5 billion annually. We suggested that Congress consider reducing or eliminating direct payments by (1) lowering payment or income eligibility limits; (2) reducing the portion of a farm’s acres eligible for the payments; or (3) terminating or phasing out direct payments. The President’s Fiscal Year 2014 Budget submission proposes eliminating direct payments to farmers. Economic Development: In our 2011 annual report, we found that there was fragmentation and overlap among 80 economic development programs at four agencies—the Department of Commerce, the Department of Housing and Urban Development, the Small Business Administration, and the U.S. Department of Agriculture—in terms of the economic development activities that they are authorized to fund.
We suggested, among other things, that the agencies further utilize promising practices for enhanced collaboration, such as seeking more opportunities for resource sharing across economic development programs with shared outcomes and identifying ways to leverage each program’s strengths to improve their existing collaborative efforts. The agencies have taken steps to address this action, which we consider partially addressed, including entering into a number of formal agreements that are intended to help enhance and sustain collaboration. In addition, the administration has initiated steps that provide the agencies with a mechanism to work together to identify additional opportunities to enhance collaboration among programs. The President’s Fiscal Year 2014 Budget submission also states that the President will again seek reorganization authority and use such authority to consolidate the economic and business development activities in the Departments of Commerce, Agriculture, Health and Human Services, and the Treasury, as well as the Small Business Administration, into a new department with a focused mission to foster economic growth and spur job creation. Crop Insurance: In our 2013 annual report, we found that applying limits on premium subsidies to individual farmers participating in the federal crop insurance program, similar to the payment limits for other farm programs, could save billions of federal dollars over 5 years. We suggested Congress consider either limiting the amount of premium subsidies that an individual farmer can receive each year—as it limits the amount of payments to individual farmers in many farm programs—or reducing premium subsidy rates for all participants in the crop insurance program, or both limiting premium subsidies and reducing premium subsidy rates.
The President’s Fiscal Year 2014 Budget submission proposes to reduce farmers’ premium subsidies by 3 percentage points for those policies that are currently subsidized by more than 50 percent, which is expected to save about $4.2 billion over 10 years. In addition, the President’s Fiscal Year 2014 Budget submission proposes to reduce farmers’ premium subsidies by 2 percentage points on policies that provide a higher indemnity if the commodity prices are higher at harvest time than when the policy was purchased, which is expected to save about $3.2 billion over 10 years. Renewable Energy Initiatives: In our 2013 annual report, we suggested that the Secretaries of Energy and Agriculture should, to the extent possible within their statutory authority, formally assess and document whether the incremental financial support of their initiatives is needed in order for applicants’ projects to be built, and take this information into account in determining whether, or how much, support to provide. The President’s Fiscal Year 2014 Budget submission does not include funding for the High Energy Cost Grant Program, administered by the Department of Agriculture’s Rural Utilities Service—one of the programs we identified that has provided duplicative support. This proposed elimination, if implemented, could help to reduce the potential for duplicative support. Congress has also taken additional actions that are consistent with those we have identified in our previous reports. For example, in our 2011 and 2013 annual reports, we cited numerous information technology areas in which duplication could be minimized or cost savings achieved across the federal government and made a number of recommendations to address these issues. In fiscal year 2013, federal agencies reported to OMB that approximately $74 billion was budgeted for information technology. On March 18, 2013, the Federal Information Technology Acquisition Reform Act (H.R. 
1232) was introduced to eliminate duplication and waste in information technology acquisition and management. Among other things, the bill requires a governmentwide inventory of information technology assets to identify duplicative or overlapping investments. As of May 8, 2013, the bill was reported favorably to the full House. Identifying, preventing, and addressing fragmentation, overlap, and duplication within the federal government is challenging. These are difficult issues to address because they may require agencies and Congress to re-examine within and across various mission areas the fundamental structure, operation, funding, and performance of a number of long-standing federal programs or activities with entrenched constituencies. Compounding these challenges is the lack of a comprehensive list of federal programs, reliable and complete funding information, and regular performance results and information. Without knowing the full range of programs involved or the cost of implementing them, gauging the magnitude of the federal commitment to a particular area of activity or the extent to which associated federal programs are duplicative is difficult. Addressing these issues will require sustained attention by the executive branch agencies and Congress. In the majority of cases, executive branch agencies have the authority to address the issues we identified. However, in other cases, Congress will need to be involved through its legislative and oversight activities. Such oversight is critical to addressing these issues. The performance planning and reporting framework originally put into place by the Government Performance and Results Act of 1993 (GPRA), and significantly enhanced by the GPRA Modernization Act of 2010, provides important tools that help Congress and the executive branch clarify desired outcomes, address program performance spanning multiple organizations, and facilitate future actions to reduce fragmentation, overlap, and duplication.
However, realizing the intent of the GPRA Modernization Act for assessing government performance and improvement and reducing fragmentation, overlap, and duplication will require sustained oversight of implementation. To assist Congress with this oversight, the act includes provisions requiring us to review its implementation at several critical junctures. For example, we are to report by June 2013 on initial implementation of the act’s planning and reporting requirements and recommendations for improving implementation. We are also to evaluate how implementation is affecting performance management at federal agencies to improve the efficiency and effectiveness of agency programs, among other things, by September 2015, and again in September 2017. To provide more timely and useful information, we have issued a number of reports over the past 2 years (1) supporting congressional involvement in and oversight of agency performance improvement efforts, and (2) reviewing the executive branch’s implementation of key provisions of the act. Our June 2013 report will draw on findings from these reports along with the results of our most recent survey of federal managers on the implementation of key performance management practices across government—the fifth such survey we have undertaken since 1997. Executive branch agencies have the authority needed to address the majority of the actions we identified in our three reports. Of the approximately 380 actions that we have suggested, 317 were directed to executive branch agencies. Given that the areas identified extend across the government and that we found a range of conditions among these areas, we suggest a similarly wide range of actions for the executive branch to consider. The executive branch agencies could address many of the issues we identified through improving planning, better measurement of performance, improving management oversight, and increasing collaboration.
These actions are largely consistent with the tools and principles put in place by GPRA and the GPRA Modernization Act. Given the crosscutting policy areas included in our annual reports, planning for the outcomes to be achieved is important in helping federal agencies address challenges, particularly those related to fragmentation, overlap, or duplication. A focus on outcomes is a first step to then determining how all of the activities that contribute to an outcome, whether internal or external to an agency, should be aligned to accomplish results. In our annual reports, we identified multiple instances where better planning could help reduce the potential for overlap or duplication. For example, as we have already noted, strategic planning is needed to better manage overlapping STEM programs across multiple agencies. By taking this and other actions to increase efficiency and effectiveness, the administration could reduce the chance of investing scarce government resources without achieving the greatest impact in developing a pipeline of future workers in STEM fields. Additionally, we reported that a total of 31 federal departments and agencies collect, maintain, and use geospatial information—information linked to specific geographic locations that supports many government functions, such as maintaining roads and responding to natural disasters. OMB and the Department of the Interior created a number of strategic planning documents and guidance to encourage more coordination of geospatial assets, reduce needless redundancies, and decrease costs. Nevertheless, we found that the Federal Geographic Data Committee—the committee that was established to promote the coordination of geospatial data nationwide—and selected federal departments and agencies had not effectively implemented the tools that would help them to identify and coordinate geospatial data acquisitions across the government.
As a result, the agencies have made duplicative investments and risk missing opportunities to jointly acquire data. Furthermore, although OMB has oversight responsibilities for geospatial data investments, it does not have complete and reliable information to identify potentially duplicative investments. Better planning and implementation among federal agencies could help reduce duplicative investments and provide the opportunity for potential savings of millions of dollars. As this example highlights, creating a comprehensive list of programs along with related funding information is critical for identifying potential fragmentation, overlap, or duplication among federal programs or activities. Currently, no such list exists, nor is there a common definition of what constitutes a federal “program,” which makes developing a comprehensive list of all federal programs difficult. This, in turn, makes it difficult to determine the scope of the federal government’s involvement in particular areas and, therefore, where action is needed to avoid fragmentation, overlap, or duplication. We also found that federal budget information is often not available or not sufficiently reliable to identify the level of funding provided to programs or activities. For example, agencies could not isolate budgetary information for some programs because the data were aggregated at higher levels. Without knowing the full range of programs involved or the cost of implementing them, gauging the magnitude of the federal commitment to a particular area of activity or the extent to which associated federal programs are duplicative is difficult. The GPRA Modernization Act requires OMB to compile and make publicly available a comprehensive list of all federal programs, and to include the purposes of each program, how it contributes to the agency’s mission, and recent funding information.
According to OMB, agencies currently use the term “program” in different ways, and OMB plans to allow them to continue to define programs in ways that reflect their particular facts and circumstances within prescribed guidelines. OMB expects 24 large federal agencies to publish an initial inventory of federal programs in May 2013. In future years, OMB plans to expand this effort to other agencies that are to update their inventories annually to reflect the annual budget and appropriations process. OMB also expects to enhance the initial program inventory by collecting related information, such as financing and related agency strategic goals. Performance measurement, because of its ongoing nature, can serve as an early warning system to management and a vehicle for improving accountability to the public. To help ensure that their performance information will be both useful and used by decision makers, agencies must consider the differing information needs of various users—including those in Congress. As we have previously reported, agency performance information must meet Congress’s needs for completeness, accuracy, validity, timeliness, and ease of use to be helpful for congressional decision making. Similarly, in our three annual reports, we reported that better evaluation of performance and results is needed for multiple federal programs and activities to help inform decisions about how to address the fragmentation, overlap, or duplication identified or achieve other financial benefits. For example: Employment and Training: In our 2011 annual report, we found that 44 of the 47 federal employment and training programs that we identified overlap with at least one other program—that is, they provide at least one similar service to a similar population. We also found that collocating services and consolidating administrative structures may increase efficiencies and reduce costs, but implementation can be challenging. 
In particular, an obstacle to achieving greater administrative efficiencies is that little information is available about the strategies and results of such initiatives. In addition, little is known about the incentives that states and localities have to undertake such initiatives and whether additional incentives are needed. As a result, we suggested that the Departments of Labor and Health and Human Services should examine the incentives for states and localities to pursue initiatives to increase administrative efficiencies in employment and training programs and, as warranted, identify options for increasing such incentives. Labor and HHS have initiatives underway, but it is too early to tell what remedies they will provide. The Administration has also proposed consolidating employment and training programs; in April 2011, we reported that as part of its proposed Workforce Investment Act of 1998 reforms, the Administration proposed consolidating 4 employment and training programs administered by the Department of Education into 1 program. In addition, H.R. 803, the Supporting Knowledge and Investing in Lifelong Skills Act (SKILLS Act), which was passed by the House in March 2013, would streamline or eliminate multiple employment and training programs and consolidate the funding of a number of other programs into a Workforce Investment Fund. Domestic Food and Nutrition Assistance: In our 2011 annual report, we found that domestic food and nutrition assistance is provided through a decentralized system of primarily 18 different federal programs that shows signs of overlap and inefficient use of resources. We also found that some of these programs provide comparable benefits to similar or overlapping populations. However, not enough is known about the effectiveness of many of these programs.
Research suggested that participation in 7 of the 18 programs is associated with positive health and nutrition outcomes consistent with programs’ goals; yet little is known about the effectiveness of the remaining 11 programs because they have not been well studied. As a result, we suggested that the U.S. Department of Agriculture should identify and develop methods for addressing potential inefficiencies and reducing unnecessary overlap among its smaller food assistance programs while ensuring that those who are eligible receive the assistance they need.

Teacher Quality: In our 2011 annual report, we identified 82 distinct programs designed to help improve teacher quality, either as a primary purpose or as an allowable activity, administered across 10 federal agencies. While a mixture of programs can target services to underserved populations and yield strategic innovations, the current programs are not structured in a way that enables educators and policy makers to identify the most effective practices to replicate. According to Department of Education officials, it is typically not cost-effective to allocate the funds necessary to conduct rigorous evaluations of small programs; therefore, small programs are unlikely to be evaluated. As a result, we suggested that the Secretary of Education should work with other agencies as appropriate to develop a coordinated approach for routinely and systematically sharing information that can assist federal programs, states, and local providers in achieving efficient service delivery.

Science, Technology, Engineering, and Mathematics Education: In our 2012 annual report, we found that in fiscal year 2010, 173 of the 209 (83 percent) Science, Technology, Engineering, and Mathematics Education (STEM) education programs administered by 13 federal agencies overlapped to some degree with at least 1 other program in that they offered similar services to similar target groups in similar STEM fields to achieve similar objectives.
In addition to the fragmented and overlapping nature of federal STEM education programs, little is known about the effectiveness of these programs. Since 2005, when we first reported on this issue, we found that the majority of programs have not conducted comprehensive evaluations of how well their programs are working. Without an understanding of what is working in some programs, it will be difficult to develop a clear strategy for how to spend limited federal funds. Consequently, we suggested that the Director of the Office of Science and Technology Policy should direct the National Science and Technology Council to develop guidance to help agencies determine the types of evaluations that may be feasible and appropriate for different types of STEM education programs and develop a mechanism for sharing this information across agencies. The regular collection and review of performance information, both within and among federal agencies, could help executive branch agencies and Congress determine whether some of the federal programs or initiatives included in this series are making progress toward addressing the identified issues and could determine the actions that need to be taken to improve results. However, as we previously noted, our annual reports along with a large body of other work highlight several instances in which executive branch agencies do not collect necessary performance data. For example, in our 2011 annual report we noted that OMB has not used its budget and performance review processes to systematically review tax expenditures and promote integrated reviews of related tax and spending programs. Coordinated performance reviews of tax expenditures with related federal spending programs could help policymakers reduce overlap and inconsistencies and direct scarce resources to the most effective or least costly methods to deliver federal support. 
Similarly, we have previously reported that as Congress oversees federal programs and activities, it needs pertinent and reliable information to adequately assess agencies’ progress, ensure accountability, and understand how individual programs and activities fit within a broader portfolio of federal efforts. The lack of reliable performance data also makes it difficult for decision makers to determine how to address identified fragmentation, overlap, or duplication. In order for information from performance measurement initiatives to be useful to executive branch agencies and Congress in making decisions, garnering congressional support on what to measure and how to present this information is critical. Thus, the GPRA Modernization Act significantly enhances requirements for agencies to consult with Congress. Specifically, at least once every two years, OMB is required to consult with relevant committees with broad jurisdiction on crosscutting priority goals, while agencies must consult with their relevant appropriations, authorization, and oversight committees when developing or making adjustments to their strategic plans and agency priority goals. Last year we prepared a guide to help ensure that these consultations and the performance information produced by executive branch agencies are useful to Congress in carrying out its various decision-making responsibilities. Without this information, it will be difficult to know whether an agency’s goals reflect congressional input and therefore whether the goals will provide useful information for congressional decision making. Further, successful consultations can create a basic understanding among stakeholders of the competing demands that confront most agencies, the limited resources available to them, and how those demands and resources require careful and continuous balancing. This is important given Congress’s constitutional role in setting national priorities and allocating the resources to achieve them.
Finally, to ensure that their performance information will be both useful and used by decision makers, agencies must consider the differing information needs of various users. The GPRA Modernization Act puts into place several requirements that could address users’ needs for completeness, accuracy, validity, timeliness, and ease of use. Requirements to include information about how various tools, such as program activities, regulations, and tax expenditures, contribute to goal achievement could lead to the development of performance information in areas that are currently incomplete. In addition, agencies are required to disclose more information about the accuracy and validity of their performance information in their performance plans and reports. While agencies will continue to report annually on progress towards the rest of their goals, the GPRA Modernization Act provides timelier, quarterly reporting for governmentwide and agency priority goals. By also requiring information to be posted on a governmentwide website, the act will make performance information more accessible and easy to use by stakeholders and the public. When issues span multiple organizations or multiple entities within an organization, improved management oversight is needed to avoid potential overlap and duplication. For example, we found that fragmented leadership and lack of a single authority in overseeing the acquisition of space systems have created challenges for optimally acquiring, developing, and deploying new space systems. This fragmentation is problematic not only because of a lack of coordination that has led to delays in fielding systems, but also because no one person or organization is held accountable for balancing governmentwide needs against wants, resolving conflicts, and ensuring coordination among the many organizations involved with space acquisitions, and ensuring that resources are directed where they are most needed. 
To help improve the coordination of space programs and reduce duplication, we suggest assessing whether a construct analogous to the Defense Space Council—which serves as the principal advisory forum to inform, coordinate, and resolve all DOD space issues—could be applied governmentwide or whether a separate organization should be established that would, among other things, have responsibility for strategic planning. The GPRA Modernization Act seeks to improve agency management oversight by including a provision for quarterly performance reviews, modeled after effective data-driven—or “Stat”—reviews being conducted at the state and local levels. Specifically, agency leaders are required to conduct quarterly, data-driven reviews of their performance in achieving priority goals and identify strategies to improve performance where goals are not being met. As we recently reported, consistent with state and local experience, reviews can be a key tool for driving collaboration by including all key players from within or outside an agency that contribute to goal achievement. However, few agency Performance Improvement Officers reported they are using the reviews to coordinate or collaborate with other agencies that have similar goals, and agencies we reviewed cited concerns about involving outsiders. Nevertheless, our prior work has shown that agencies that participated in various planning and decision-making forums together reported that such interactions contributed to achieving their goals. For example, the Departments of Housing and Urban Development and Veterans Affairs—which both contribute to efforts to reduce veterans’ homelessness—have conducted several joint Stat meetings, where they jointly analyze performance data to understand trends, identify best practices, and prioritize the actions needed to achieve veteran homelessness goals. Officials reported that these collaborative meetings have contributed to better outcomes.
We recommended that the Director of OMB identify and share promising practices for including other relevant entities that contribute to achieving their agency performance goals. OMB agreed with our recommendation. When executive branch agencies carry out activities in a fragmented and uncoordinated way, the resulting patchwork of programs can waste scarce funds, confuse and frustrate program customers, and limit the overall effectiveness of the federal effort. Our 2013 annual report includes several areas in which improved interagency coordination and collaboration could help agencies better leverage limited resources or identify opportunities to operate more efficiently. For example, the Department of Veterans Affairs and DOD operate two of the nation’s largest health care systems, together providing health care to nearly 16 million veterans, service members, military retirees, and other beneficiaries at estimated costs for fiscal year 2013 of about $53 billion and $49 billion, respectively. As part of their health care efforts, the departments have established collaboration sites—locations where the two departments share health care resources through hundreds of agreements and projects—to deliver care jointly with the aim of improving access, quality, and cost-effectiveness of care. However, we found that the departments do not have a fully developed and formalized process for systematically identifying all opportunities for new or enhanced collaboration, potentially missing opportunities to improve health care access and quality, and reduce costs. The GPRA Modernization Act requires OMB to coordinate with executive branch agencies to establish crosscutting priority goals and to develop a federal government performance plan that defines the level of performance needed to achieve them. As we reported in May 2012, the President’s Fiscal Year 2013 Budget submission included the first list of 14 interim crosscutting priority goals. 
For each of the interim goals, as required by the act, OMB listed the agencies and programs that contribute to the goal in the federal government performance plan. However, based on our prior work, we identified additional agencies and programs that should be included. Accordingly, we recommended that OMB consider adding those additional contributors to the crosscutting priority goals. OMB concurred with this recommendation, and in December 2012, OMB updated the federal government performance plan, adding some of the additional agencies and programs we identified for select goals. The crosscutting approach required by the act will provide a much needed basis for more fully integrating a wide array of federal activities as well as a cohesive perspective on the long-term goals of the federal government that is focused on priority policy areas. It could also be a valuable tool for governmentwide reexamination of existing programs and for considering proposals for new programs. The act also requires agencies to describe how they are working with each other to achieve their strategic and performance goals, as well as any relevant crosscutting priority goals. Moreover, for each of its performance and priority goals, each agency must identify the organizations, programs, and other activities—both within and external to the agency—that contribute to the goal. These new requirements provide additional opportunities for collaboration across executive branch agencies. We have previously identified key practices that can help federal agencies enhance and sustain their collaborative efforts along with key features to consider as they implement collaborative mechanisms. Congress also has an important role to play—both in its legislative and oversight capacities—in improving the efficiency and effectiveness of government programs.
Other legislative strategies are also available, such as realigning committee structures or using task forces, caucuses, or commissions to work to improve the efficiency and effectiveness of federal programs. Our 2013 annual report includes several areas where legislative action is needed. For example, as noted earlier, we found that when the U.S. Department of Agriculture’s Food Safety and Inspection Service begins the catfish inspection program as mandated in the Food, Conservation, and Energy Act of 2008, the program will duplicate work already conducted by the Food and Drug Administration and by the National Marine Fisheries Service. To avoid this duplication, we suggested that Congress repeal the provisions of the act that assigned the U.S. Department of Agriculture responsibilities for examining and inspecting catfish and establishing a catfish inspection program. Taking this action, as the President’s Fiscal Year 2014 Budget submission and S. 632 and H.R. 1313 propose, could save taxpayers millions annually, according to Food Safety and Inspection Service estimates of the program’s cost. Similarly, our 2011 annual report found that, depending on the policy choices made, reducing or eliminating direct farm payments could result in savings ranging from $800 million over 10 years to $5 billion annually. We suggested that Congress consider a range of options; S. 10, introduced on January 22, 2013, would eliminate all direct farm payments starting in crop year 2014. We have also suggested that Congress consider taking legislative action to consolidate certain programs. For example, in 2011 we reported that the federal government’s efforts to improve teacher quality have led to the creation of 82 distinct programs—administered by 10 federal agencies—at a cost of over $4 billion in fiscal year 2009. In addition to fragmentation, we also found overlap in a number of these programs.
Among other things, we suggested that Congress either eliminate programs that are too small to evaluate cost-effectively or combine programs serving similar target groups. Similarly, in 2012, we commented on the overlap that exists between the products offered and markets served by the Department of Housing and Urban Development and the Department of Agriculture’s Rural Housing Service. In light of this overlap, we recommended that Congress consider requiring both departments to examine the benefits and costs of merging their programs. Given the potential benefits and costs of consolidation, it is imperative that Congress and the executive branch have the information needed to help effectively evaluate consolidation proposals. At the request of the Task Force on Government Performance, last year GAO issued a report identifying key questions for agencies to consider when evaluating consolidation proposals. Similarly, these questions could also help inform the Congress when it is considering such a proposal:

What are the goals of the consolidation? What opportunities will be addressed through the consolidation and what problems will be solved? What problems, if any, will be created?

What will be the likely costs and benefits of the consolidation? Are sufficiently reliable data available to support a business-case analysis or cost-benefit analysis?

How can the up-front costs associated with the consolidation be funded?

Who are the consolidation stakeholders, and how will they be affected? How have the stakeholders been involved in the decision, and how have their views been considered? On balance, do stakeholders understand the rationale for consolidation?

To what extent do plans show that change management practices will be used to implement the consolidation?

Congress could also require executive branch agencies to conduct program evaluations that would assess how well federal programs are working and identify steps that are needed to improve them.
These evaluations typically examine processes, outcomes, impacts, or the cost- effectiveness of federal programs. However, few executive branch agencies regularly conduct in-depth program evaluations to assess their programs’ impact or learn how to improve results. Such program evaluations can complement ongoing performance measurement but typically involve a more in-depth examination to learn the benefits of a program or how to improve it. GPRA requires agencies to describe the summary findings of any completed program evaluations in their performance reports. In addition, agencies are to describe how program evaluations informed establishing or revising goals in their strategic plans, along with a schedule for future program evaluations to be conducted. Congress can also encourage executive branch agencies to help improve the efficiency and effectiveness of federal programs through its oversight activities. For example, our past work has highlighted several instances in which Congress has used performance information in its decision making to (1) identify issues that the federal government should address, (2) measure progress towards addressing those issues, and (3) identify better strategies to address the issues, when necessary. Congressional use of similar information in its decision making for the identified areas of fragmentation, overlap, and duplication will send an unmistakable message to agencies that Congress considers these issues a priority. Such oversight can also highlight progress that agencies are making in addressing needed reforms. Congress recently highlighted the importance of addressing issues of fragmentation, overlap, and duplication through its oversight. For example, the Senate Budget Resolution for fiscal year 2014 directs committees to review programs and tax expenditures within their jurisdiction for waste, fraud, and duplication and to consider the findings from our past annual reports. 
Similarly, the House Budget Resolution for fiscal year 2014 describes some of our findings from our past annual reports, notes the number of programs that will need to be reauthorized in fiscal year 2014, and states that our findings should result in programmatic changes in both authorizing statutes and program funding levels. The importance of active congressional oversight can be seen in improvements made to federal programs that were once included on our High Risk List. Congressional oversight has helped maintain executive branch agencies’ attention in addressing the identified concerns and thus contributed to their removal from our High Risk List. For further information on this testimony or our 2013 annual report, please contact Orice Williams Brown, Managing Director, Financial Markets and Community Investment, who may be reached at (202) 512-8678 or williamso@gao.gov, and A. Nicole Clowers, Director, Financial Markets and Community Investment, who may be reached at (202) 512-8678 or clowersa@gao.gov. Contact points for the individual areas listed in our 2013 annual report can be found at the end of each area at http://www.gao.gov/products/GAO-13-279SP. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

As the fiscal pressures facing the nation continue, so too does the need for executive branch agencies and Congress to improve the efficiency and effectiveness of government programs and activities.
Opportunities to take such action exist in areas where federal programs or activities are fragmented, overlapping, or duplicative. To highlight these challenges and to inform government decision makers on actions that could be taken to address them, GAO is statutorily required to identify and report annually to Congress on federal programs, agencies, offices, and initiatives, both within departments and governmentwide, that have duplicative goals or activities. GAO has also identified additional opportunities to achieve greater efficiency and effectiveness by means of cost savings or enhanced revenue collection. This statement discusses the (1) new areas identified in GAO’s 2013 annual report; (2) status of actions taken by the administration and Congress to address the 131 areas identified in GAO’s 2011 and 2012 annual reports; (3) President’s April Fiscal Year 2014 Budget submission and recently introduced legislation; and (4) strategies that can help address the issues we identified. GAO’s 3-year systematic examination included a review of the budget functions of the federal government representing nearly all of the overall federal funds obligated in fiscal year 2010. GAO’s 2013 annual report identifies 31 new areas where agencies may be able to achieve greater efficiency or effectiveness. Seventeen areas involve fragmentation, overlap, or duplication. For example, GAO reported that the Department of Defense could realize up to $82 million in cost savings and ensure equivalent levels of performance and protection by taking action to address its fragmented approach to developing and acquiring combat uniforms. Additionally, GAO reported that a total of 31 federal departments and agencies collect, maintain, and use geospatial information. Better planning and implementation could help reduce duplicative investments and save millions of dollars.
The report also identifies 14 additional areas where opportunities exist to achieve cost savings or enhance revenue collections. For example, GAO suggested that the Department of Health and Human Services cancel the Medicare Advantage Quality Bonus Payment Demonstration. GAO found that most of the bonuses will be paid to plans with average performance and that the demonstration’s design precludes a credible evaluation of its effectiveness. Canceling the demonstration for 2014 would save about $2 billion. GAO also noted opportunities to save billions more in areas such as expanding strategic sourcing, providing greater oversight for Medicaid supplemental payments, and reducing subsidies for crop insurance. Additionally, GAO pointed out opportunities for enhancing revenues by reducing the net tax gap of $385 billion, reviewing prices of radioactive isotopes sold by the government, and providing more equity in tobacco taxes for similar types of products. The executive branch and Congress have made some progress in addressing the areas that GAO identified in its 2011 and 2012 annual reports. Specifically, GAO identified approximately 300 actions among 131 overall areas that the executive branch and Congress could take to reduce or eliminate fragmentation, overlap, or duplication or achieve other potential financial benefits. As of March 6, 2013, the date GAO completed its progress update audit work, about 12 percent of the areas were addressed, 66 percent were partially addressed, and 21 percent were not addressed. More recently, both the administration and Congress have taken additional steps, including proposals in the President’s April Fiscal Year 2014 Budget submission. Addressing fragmentation, overlap, and duplication will require continued attention by the executive branch agencies and targeted oversight by Congress. In many cases, executive branch agencies have the authority to address the actions that GAO identified.
In other cases, such as those involving the elimination or consolidation of programs, Congress will need to take legislative action. Moreover, sustained congressional oversight will be needed in concert with the administration’s efforts to address the identified actions by improving planning, measuring performance, and increasing collaboration. Effective implementation of the GPRA Modernization Act of 2010 also could help the executive branch and Congress as they work to address these issues over time.
Illegal immigration is an important issue, especially in California, New York, Texas, Florida, Illinois, Arizona, and New Jersey—the states estimated to account for over three-fourths of the illegal alien population. Illegal aliens are a concern not only because they are breaking immigration laws but for various other reasons. For example, state and local governments are especially concerned about the effect on their budgets of providing benefits and services to illegal aliens. In addition, there are concerns about whether the presence of illegal alien workers has negative effects on the employment of U.S. workers. Public concern about the number of illegal aliens residing in the United States led to the passage of major immigration legislation in the 1980s. In an effort to reduce the size of the nation’s illegal alien population, estimated at 3 to 5 million in 1986, the Congress enacted the Immigration Reform and Control Act of 1986 (IRCA). IRCA attempted to deter the inflow of illegal aliens by prohibiting employers from hiring anyone not authorized to work. IRCA also provided that under certain circumstances, an illegal alien’s status could be adjusted to lawful permanent resident. Almost 3 million illegal aliens acquired lawful permanent residence as a result of IRCA. Despite a brief drop in the estimated number of illegal entries to the United States after IRCA was enacted, the inflow of illegal aliens has subsequently increased, so that the size of the illegal alien population is now estimated to have increased once more to pre-IRCA levels. INS estimated that there were 3.4 million illegal aliens residing in the country in October 1992. Updating this estimate would place the illegal alien population at about 4 million in 1994. The Bureau of the Census estimated that the size of the illegal alien population was between 3.5 million and 4 million in April 1994.
Illegal aliens are not eligible for most federal benefit programs, including Supplemental Security Income, Aid to Families With Dependent Children (AFDC), Food Stamps, unemployment compensation, financial assistance for higher education, and the Job Training Partnership Act (JTPA). However, they may participate in certain benefit programs that do not require legal immigration status as a condition of eligibility, such as Head Start, the Special Supplemental Food Program for Women, Infants, and Children (WIC), and the school lunch program. In addition, they are eligible for emergency medical services, including childbirth services, under Medicaid if they meet the program’s conditions of eligibility. Illegal aliens may apply for AFDC and food stamps on behalf of their U.S. citizen children. Although it is the child and not the parent in such cases who qualifies for the programs, benefits help support the child’s family. Illegal aliens may not work in the United States or legally obtain Social Security numbers for work purposes. However, many illegal aliens do work and have Social Security taxes withheld from their wages based on falsely obtained numbers. Illegal aliens are not explicitly barred from receiving Social Security benefits; nonetheless, some illegal aliens may not be able to collect benefits because an individual generally must have obtained a valid Social Security number to receive credit for work performed. Illegal aliens generate revenues as well as costs; these revenues offset some of the costs that governments incur. Research studies indicate that many illegal aliens pay taxes, including federal and state income taxes; Social Security tax; and sales, gasoline, and property taxes. However, researchers disagree on the amount of revenues illegal aliens generate and the extent to which these revenues offset government costs for benefits and services.
Over the past few years, the states with the largest illegal alien populations have developed estimates of the costs they incur in providing benefits and services to illegal aliens. These estimates vary considerably in the range of costs included and methodologies used. Two states, California and Texas, also have estimated the public revenues that illegal aliens generate. In a recent report, we reviewed California’s estimates of three costs for illegal aliens—elementary and secondary education, Medicaid, and adult incarceration—and various revenues from this population. Although we adjusted the cost estimates based on our assessment of the state’s assumptions, we cited several data limitations that prevented us from developing precise estimates. The even more extensive data limitations on the revenue side precluded us from making any assessment of the revenue estimates. The literature on the public fiscal impact of illegal aliens reflects considerable agreement among researchers that illegal aliens are a net cost, though the magnitude of the cost is a subject of continued debate. We identified 13 studies issued between 1984 and 1994 that developed estimates of the net costs of illegal aliens. Many of the studies focused on the illegal alien population in specific states, such as California or Texas, or specific areas, such as San Diego County or Los Angeles County. In addition, the range of costs and revenues included in the studies varied depending on the level of government examined: local, state, federal, or some combination of these. All but one study concluded that illegal aliens generated more in public costs than they contributed in revenues to government. (See app. I for a list of the studies.) Only 3 of the 13 studies estimated the fiscal impact of all illegal aliens in the United States on all levels of government. 
The three studies that have estimated the national net cost of illegal aliens have generated considerable media attention and public discussion. Each concluded that illegal aliens generate more in costs than revenues at the national level, but their estimates of the magnitude of the net cost varied considerably. The studies faced the difficult task of developing estimates of the public fiscal impact of a population on which little data are available. They generally relied on indirect approaches; as a result, the reasonableness of many of their assumptions is unknown. In addition, the studies differed considerably in the range of costs and revenues they included and their treatment of certain items, which makes them difficult to compare. For these reasons, a great deal of uncertainty remains about the actual national net cost of illegal aliens. Donald Huddle estimated that the national net cost of illegal aliens to federal, state, and local governments was $11.9 billion in 1992. This estimate was followed by an Urban Institute review of Huddle’s work, which adjusted some of Huddle’s cost and revenue estimates and estimated a much lower net cost for 1992—$1.9 billion. Responding to the Urban Institute’s criticisms, Huddle subsequently produced an updated estimate for 1993 that was higher than his initial estimate—$19.3 billion. (See app. II for a list of the costs and revenues included in each of the estimates.) The net cost estimates in each of the national studies are derived from three major components: (1) the direct costs of providing public benefits and services to illegal aliens, (2) displacement costs—the costs of providing various types of public assistance to U.S. citizens displaced from their jobs by illegal aliens, and (3) public revenues attributable to illegal aliens. A comparison of Huddle’s initial study with the Urban Institute’s study indicates that the major differences were in their estimates of displacement costs and revenues.
Their estimates of direct program costs were relatively similar, as shown in figure 1. In their study, the Urban Institute researchers did not develop a completely independent estimate but instead adjusted some of the cost and revenue estimates in Huddle’s initial study to obtain what they believed to be a more reasonable estimate. The Urban Institute study also added certain revenues that were not included in Huddle’s initial study, such as payroll taxes (Social Security and unemployment compensation) and federal gasoline tax. In developing their own estimate, Urban Institute researchers used some of Huddle’s assumptions. In particular, the Urban Institute study used Huddle’s estimate of the size of the illegal alien population—4.8 million illegal aliens—for purposes of comparability, though the study maintained that this estimate was too high. Huddle’s update of his earlier study differs substantially from the Urban Institute study in all three components of the net cost estimates, with the largest difference occurring between the estimates of direct program costs (see fig. 1). This difference arises primarily because Huddle’s updated study includes over $10 billion for direct cost items that were not included in either his initial study or the Urban Institute study. National data on illegal aliens’ use of public services and level of tax payments generally are not available. Various national databases that contain extensive data on the resident population’s use of public services and household characteristics, for example, do not have data on the immigration status of respondents who are not U.S. citizens. Questions about immigration status are not included on Census surveys because they might provoke untruthful responses and thereby affect the quality of the survey data, according to a Census official.
Because of such data limitations, the national studies relied on indirect approaches to estimate the costs and revenues attributable to illegal aliens. In using these approaches, the studies made assumptions whose reasonableness is often unknown. To estimate direct program costs, for example, the studies multiplied their estimates of the average number of illegal aliens who received a benefit or service times the average annual program cost per illegal alien. However, data generally are not available to assess whether the assumptions used in estimating illegal aliens’ recipiency rates and average costs were reasonable. For example, for some programs, one or more of the studies assumed that illegal aliens had the same recipiency rate and average cost as the overall population served by the program. Huddle’s updated study made this assumption in estimating costs for Head Start and adult education. For other programs, the studies adjusted the national recipiency rate or average cost upward or downward to reflect a presumed difference in the use of the program by illegal aliens. For example, in estimating the cost of housing assistance, Huddle’s initial and updated studies assumed that the recipiency rate and average cost were higher for illegal aliens than for the overall population served by this program. The Urban Institute’s study assumed that the recipiency rate was higher but that the average cost was the same. For still other programs, the studies estimated the public service use of illegal aliens by using data on populations that included groups in addition to illegal aliens. For example, in their estimates of the cost of primary and secondary education, the studies used data on the school enrollment rates of populations that included foreign-born children who were legal residents. The studies’ estimates of the enrollment rate of school-age illegal aliens ranged from 70 to 86 percent. 
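The indirect approach to direct program costs described above multiplies an estimated recipient count by an average annual cost. A minimal sketch follows; the recipiency rate and average cost are hypothetical placeholders (only the 4.8 million population figure comes from the studies).

```python
# Sketch of the indirect approach the studies used for direct program costs:
#   estimated cost = (estimated illegal alien recipients)
#                    x (average annual program cost per recipient).
# The recipiency rate and average cost below are hypothetical illustrations.

def program_cost(population, recipiency_rate, avg_annual_cost):
    """Estimated recipients times average annual cost per recipient."""
    return population * recipiency_rate * avg_annual_cost

# Example: the 4.8 million population figure Huddle assumed, with a
# hypothetical 10% recipiency rate and a hypothetical $1,000 average cost.
estimate = program_cost(4_800_000, 0.10, 1_000)
print(f"${estimate:,.0f}")  # $480,000,000
```

The difficulty the report identifies is not with this arithmetic but with the inputs: data generally are not available to test whether an assumed recipiency rate or average cost for illegal aliens is reasonable.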
To estimate revenues attributable to illegal aliens, Huddle’s initial study and the Urban Institute’s study started with a preexisting estimate of revenues collected from illegal aliens in Los Angeles County for various federal, state, and local taxes. The studies calculated the per capita payments by illegal aliens in Los Angeles County for each of these taxes. The studies then used different methodologies to adjust these per capita tax estimates to apply them to the national illegal alien population. In contrast, Huddle’s updated study used a different approach to estimate revenues. The study developed an estimate of the income distribution of the national illegal alien population from data on the foreign-born population and on illegal aliens who were legalized under IRCA. Based on this income distribution, the study used data on the tax payments or tax rates associated with different levels of income for the general population to estimate revenues from illegal aliens. The national net cost studies vary considerably in the range of costs and revenues they included and their treatment of certain items, making the studies difficult to compare. The variation in the studies reflects an absence of clear standards for determining the items that are appropriate to include in such estimates. A consensus on standards has not yet emerged because the three national studies represent the initial efforts of researchers to develop estimates of the total public fiscal impact of the illegal alien population. Because the studies attempted to develop comprehensive estimates of the fiscal impact of a population, it is important to determine whether the items they included are appropriate. However, this is difficult to determine because the studies did not always clearly explain the rationale for including items that were excluded by other studies or treating items differently from the way they were treated by other studies. 
As a result, it is difficult to ascertain whether the large variations in the studies’ estimates for such items stem from their addressing different policy questions or from differing views about how to respond to the same question. A relatively small number of costs and revenues account for much of the variation in the estimates of the national net cost of illegal aliens. Some of these cost and revenue items were included in one study but not the others. In the case of other items, the studies differed considerably in the approaches or assumptions they used to develop their estimates. Our review focuses on differences between the Urban Institute’s study and Huddle’s updated study. Four areas account for about 88 percent of the difference between the studies’ estimates of total costs: (1) costs for citizen children of illegal aliens, (2) costs for the portion of some services provided to the general public that are used by illegal aliens, (3) Social Security costs, and (4) costs for workers displaced from jobs by illegal aliens. On the revenue side, about 95 percent of the difference in the studies’ estimates is attributable to differences in their estimates of local revenues (see table 1). Huddle’s initial study and the Urban Institute’s study included estimates of costs for U.S. citizen children of illegal aliens for only one program—AFDC. These costs represent cash payments received by illegal aliens on behalf of their citizen children. However, Huddle’s updated study includes estimates of citizen children costs for additional programs: primary and secondary education; school lunch; Food Stamps; and English as a Second Language, English for Speakers of Other Languages, and bilingual education. Huddle’s estimate of these additional items totals $3.9 billion. In all these programs except Food Stamps, the benefits or services are provided directly to citizen children. 
The appropriateness of including these additional citizen children costs depends on the policy question under consideration. For example, if the question concerns the overall public fiscal impact associated with illegal immigration, then including these costs would be appropriate because they are a consequence of the failure to prevent aliens from illegally entering and residing in the United States. In addition, it would also be appropriate to include costs and revenues attributable to adult citizen children of illegal aliens (children 18 years old and older). Alternatively, if the question concerns the cost of benefits or services provided only to persons residing unlawfully in the country, then it would not be appropriate to include these costs. None of the three national studies, however, clearly specifies the question it addressed. Huddle’s initial study and the Urban Institute’s study included estimates of costs for the portion of some county government services provided to the general public that are used by illegal aliens, such as public safety, fire protection, recreation, roads, and flood control. Huddle’s updated study includes over $5.3 billion in additional costs for miscellaneous public services not included in his initial study or the Urban Institute’s study, including federal and state highway costs and costs for a range of city services, such as police, fire, sewerage, libraries, parks and recreation, financial administration, and interest on debt. The studies’ inclusion of costs for services to the general public raises two issues: the specific services that should be included and the appropriate methodology for estimating the costs of the services attributable to illegal aliens. With regard to the first issue, the national studies focused on local services provided to the general public; the only such state or federal service that any of them included was highway services. 
However, because there are other state and federal services provided to the general public that illegal aliens may use or benefit from, it is not clear that the studies’ estimates included all the appropriate items. None of the studies clearly addressed this issue. A second issue involves the methodology used to estimate the costs of services provided to the general public. Huddle’s updated study calculates the costs of the additional miscellaneous public services on an average cost basis. However, this may yield questionable estimates because the additional cost that governments incur for these services due to the presence of each illegal alien could be substantially lower or higher than the average cost per person of providing the services. Using marginal cost—the cost of providing a service to one additional user—would better reflect the additional costs due to the presence of illegal aliens. For example, in areas where illegal aliens constitute a small percentage of the population, the marginal cost of providing them fire protection could be lower than the average cost. On the other hand, if the number of illegal aliens in an area necessitates the construction of new fire stations, the marginal cost of fire protection for them could be higher than the average cost. While using marginal costs would yield better estimates, the data needed to estimate these costs are difficult to obtain. Social Security (the Old Age, Survivors, and Disability Insurance program) has both a revenue side—payroll contributions from workers and employers—and a cost side—benefits paid out. Huddle’s initial study did not include either Social Security revenues or costs. Huddle’s updated study, in response to the Urban Institute’s study, included both. On the revenue side, the researchers’ estimates are fairly close: Huddle estimates $2.4 billion in Social Security revenues, compared with the Urban Institute’s estimate of $2.7 billion. 
However, on the cost side, the researchers draw sharply different conclusions: Huddle estimates that illegal aliens generated $3.3 billion in Social Security costs; the Urban Institute estimates that no Social Security costs were generated by illegal aliens. This difference reflects a disagreement about the conceptual approach to measuring Social Security costs. The Urban Institute study views the Social Security costs for illegal aliens in a given year as the amount of benefits paid to this population in that year. The rationale for this view is that the federal government treats Social Security costs and revenues on a current accounts basis: in calculating the annual federal budget deficit (or surplus), Social Security taxes are treated as revenues and Social Security benefits as expenses. However, the Social Security Administration does not have data on the amount of Social Security benefits paid to illegal aliens; as a result, it is unclear whether the Urban Institute’s assumption that this amount was zero is reasonable. In contrast, Huddle’s updated study views Social Security costs in terms of the “present value of future benefits” that illegal aliens will collect. The study’s cost estimate for 1993 represents the present value of the portion of future Social Security benefits that illegal aliens will receive that is attributable to their earnings in 1993. Huddle’s rationale for using this approach to Social Security costs is the belief that the federal government is incurring a substantial obligation for future benefits to illegal aliens. However, the data needed to develop a reasonable estimate of the amount of Social Security benefits that illegal aliens will collect in the future are not available. These different conceptual approaches to measuring Social Security costs appear to address different questions. 
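The “present value of future benefits” notion in Huddle’s updated study amounts to standard discounting of a future benefit stream. Because the data needed to estimate such benefits for illegal aliens are not available, every parameter in this sketch is hypothetical.

```python
# Sketch of the "present value of future benefits" notion: future Social
# Security benefits attributable to a year's earnings, discounted back to
# that year. All parameter values below are hypothetical illustrations.

def present_value(annual_benefit, years_until_retirement, years_collecting, rate):
    """Discount a stream of future annual benefits back to the present."""
    return sum(
        annual_benefit / (1 + rate) ** (years_until_retirement + t)
        for t in range(1, years_collecting + 1)
    )

# Hypothetical: $500/year in future benefits earned from one year's work,
# collected for 15 years beginning 30 years later, at a 4% discount rate.
pv = present_value(500, 30, 15, 0.04)
print(round(pv, 2))
```

As the sketch makes plain, the result is driven entirely by assumptions about benefit levels, timing, and the discount rate, which is why the report concludes that a reasonable estimate cannot be developed from available data.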
The current accounts approach is relevant to the question of the current-year cost of benefits provided to illegal aliens who generally have reached retirement age. In contrast, the present value approach is more appropriate for answering the question of the long-term costs that will result from the presence of illegal aliens currently in the labor force. The explanation of the Social Security cost estimate in Huddle’s updated study makes it difficult to discern whether he explicitly sought to address a different question than the one addressed by the Urban Institute’s study. Although illegal aliens by law are not entitled to work in this country, they often find employment. This raises questions about the extent to which illegal aliens take jobs away from legal residents—U.S. citizens and aliens residing legally in the country. Job displacement can generate costs to all levels of government for various forms of public assistance provided to legal residents who lose their jobs. Huddle’s initial and updated studies include $4.3 billion in costs for public assistance—Medicaid, AFDC, Food Stamps, unemployment compensation, and general assistance—provided to displaced U.S. citizen workers. In contrast, the Urban Institute’s study concludes that any job displacement costs are offset by the positive economic effects of illegal aliens. These positive economic effects include the new jobs and additional spending (the multiplier effect) generated by illegal aliens’ spending on goods and services. Huddle’s subsequent response to the Urban Institute’s position is that the social and economic costs associated with each of the claimed economic benefits would have to be assessed. It is very difficult to quantify the positive and negative effects of illegal aliens on the economy. With regard to job displacement, our analysis indicates that Huddle’s $4.3 billion estimate is based on a job displacement rate that is inconsistent with research findings on this topic. 
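The displacement-cost arithmetic behind Huddle’s $4.3 billion figure has the general form sketched below. The 25-percent rate is Huddle’s assumption; the worker count and per-worker public assistance cost here are hypothetical values chosen only to illustrate how sensitive the result is to the assumed rate.

```python
# General form of a displacement-cost estimate:
#   cost = (low-skilled illegal alien workers) x (displacement rate)
#          x (average public assistance cost per displaced worker).
# The worker count and per-worker cost below are hypothetical; only the
# 25-percent rate reflects Huddle's assumption.

def displacement_cost(workers, rate, assistance_per_worker):
    return workers * rate * assistance_per_worker

# Halving the assumed displacement rate halves the cost estimate.
high = displacement_cost(2_000_000, 0.25, 8_600)
low = displacement_cost(2_000_000, 0.125, 8_600)
print(high, low)  # 4300000000.0 2150000000.0
```

Because the estimate scales linearly with the displacement rate, the dispute over whether that rate is near 25 percent or near zero translates directly into a multibillion-dollar difference in the net cost.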
While some studies have shown that job displacement may occur, recent studies using national data generally have concluded that displacement is either small in magnitude or nonexistent. Huddle’s estimate assumes a displacement rate of 25 percent; that is, for every 100 low-skilled illegal alien workers, 25 U.S. citizens were displaced from their jobs in 1993. The estimate cites Huddle’s own studies on job displacement to support the 25-percent rate. However, these studies assume a correlation between the employment of illegal aliens and the unemployment of native workers that is not supported by any evidence. (See app. III for a more complete discussion of Huddle’s displacement cost estimate.) With regard to positive economic effects, economic models have been developed to estimate multiplier effects; however, the models have not been used to measure the effects of subpopulations such as illegal aliens. As a result, the extent to which the positive economic effects of illegal aliens offset the costs they generate is unclear. The national net cost studies estimated the amounts of various revenues from illegal aliens collected by federal, state, and local governments. These include income, sales, property, Social Security, and gasoline taxes. (See app. II for a list of the revenues included in the studies.) Developing reasonable estimates of these revenues requires information about various characteristics of the illegal alien population, such as its size, age distribution, income distribution, labor force participation rate, consumption patterns, and tax compliance rate. However, limited data are available on these characteristics. Furthermore, the studies differ in some of the revenues they include. Huddle’s initial estimate of the total revenues from illegal aliens was $2.5 billion. The Urban Institute’s study criticized Huddle’s estimate for omitting several revenues—the largest being Social Security tax—and estimated $7 billion in total revenues. 
Huddle’s updated study, which estimated total revenues at $10 billion, added several revenues that were not included in his initial study, such as Social Security tax, federal and state gasoline taxes, and city taxes. As shown in table 2, the major area of difference between the revenue estimates in the Urban Institute’s study and Huddle’s updated study was in their estimates of local revenues. Two factors help explain the difference in their estimates of local revenues. First, Huddle’s updated study includes some local revenues not included in the Urban Institute’s study, such as property taxes paid by businesses. Second, the researchers’ estimates of the per capita income of illegal aliens differ. The researchers use income as a factor in estimating the different revenues because the amount of revenues from illegal aliens is a function of their income levels. The per capita income figure in Huddle’s updated study ($7,013) is 36 percent higher than that in the Urban Institute’s study ($5,155). However, more recent work by the Urban Institute for the same general time period can be used to obtain an income figure closer to Huddle’s—about $7,739. If this higher figure were substituted in the Urban Institute’s study, the estimate of total revenues from illegal aliens would increase to $10.5 billion, placing it closer to the $10 billion figure in Huddle’s updated study. The reasonableness of the revenue estimates would remain unclear even if the gap between the estimates were narrowed, due to the limited data available on the characteristics of the illegal alien population. For example, the estimates of illegal aliens’ incomes cited above are derived from two main sources: survey data on former illegal aliens who were legalized under IRCA and 1990 Census data on the foreign-born population (which does not distinguish illegal from legal aliens).
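The substitution described above amounts to scaling the Urban Institute’s revenue estimate in proportion to per capita income. Proportional scaling is a simplifying assumption, used here only to show how the $10.5 billion figure arises from the income figures cited in the text.

```python
# Sensitivity of the Urban Institute's $7 billion revenue estimate to the
# per capita income figure, assuming revenues scale in proportion to income
# (a simplifying assumption for illustration).

ui_revenue = 7.0e9      # Urban Institute's total revenue estimate
ui_income = 5_155       # per capita income figure in the Urban Institute's study
later_income = 7_739    # figure derived from more recent Urban Institute work

adjusted = ui_revenue * later_income / ui_income
print(f"${adjusted / 1e9:.1f} billion")  # $10.5 billion
```

Even with the gap narrowed this way, the underlying income figures rest on data for legalized aliens and the foreign-born population, so the adjusted estimate inherits their uncertainty.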
By using these sources to develop estimates, the researchers assumed that the average income of illegal aliens was similar to that of aliens legalized under IRCA or to the foreign-born population (either to the population overall or subpopulations from specific countries). However, the reasonableness of these assumptions is unknown. Our review of the national net cost studies highlighted two key issues: the limited data on the illegal alien population and the considerable variation in both the items that the studies included and their treatment of some of the same items. These issues led us to conclude that considerable uncertainty remains about the national fiscal impact of illegal aliens. Obtaining better data on the illegal alien population and providing clearer explanations of which costs and revenues are appropriate to include would help improve the usefulness of the national estimates. The limited availability of data on illegal aliens is likely to remain a persistent problem because persons residing in the country illegally have an incentive to keep their status hidden from government officials. Yet as researchers explore new possibilities for overcoming some of the obstacles to collecting data on this population, some progress may be achieved. Given the data gaps in so many areas, any effort to collect better data should focus on those data that would have the greatest impact in improving the estimates of net costs. Thus, emphasis could be placed on obtaining data on illegal aliens’ use of those public benefits associated with the largest cost items or their payment of those taxes associated with the largest revenue items. For example, elementary and secondary education is estimated to be the single largest program cost; thus, researchers could focus on obtaining data on the number of illegal alien schoolchildren. However, researchers may confront legal barriers in attempting to collect these data. 
Another approach, which could be used in conjunction with the first, would be to obtain data on characteristics of the illegal alien population that would have broad usefulness by permitting researchers to estimate several cost or revenue items. For example, data on the illegal alien population’s size, geographic distribution, age distribution, income distribution, labor force participation rate, and tax compliance rate would be useful in estimating many types of revenues. Better data on the size of the population also would be useful in estimating most of the public costs of illegal aliens. Obtaining better data on the illegal alien population will not resolve all the problems associated with estimating the net costs of illegal aliens. Researchers will still face issues about which items are appropriate to include in the estimates and how the items should be treated. As we have seen, different decisions on these issues can generate considerable variation in estimates of net costs. Researchers need to clearly explain how they handled such issues in order to facilitate comparisons of their estimates. For example, when the decision about whether an item should be included or how it should be treated depends on the policy question being asked, a study should clearly acknowledge the question it addresses. The variations in the national studies’ treatment of costs for citizen children of illegal aliens and Social Security costs were difficult to assess because the studies did not make clear which questions they were addressing. Recognizing the need for better information on the effects of immigration, a federal effort is under way to improve estimates of the fiscal impact of legal and illegal aliens. The U.S. Commission on Immigration Reform, a bipartisan congressional commission created by the Immigration Act of 1990, is working on a final report to the Congress, due in 1997, on a wide range of immigration issues. 
The Commission provided an interim report to the Congress in September 1994. The Commission has convened a panel of independent experts organized by the National Academy of Sciences to review the methodologies and assumptions of studies of the costs and benefits of immigration. The panel will develop recommendations on the data sources and methodologies that hold the greatest promise for more precise measurement of the economic and social impacts of legal and illegal immigration. The three national studies that we reviewed represent the initial efforts of researchers to develop estimates of the total public fiscal impact of the illegal alien population. The little data available on this population make it difficult to develop reasonable estimates on a subject so broad in scope. Moreover, the national studies varied considerably in the range of items they included and their treatment of certain items, making their estimates difficult to compare. As a result, a great deal of uncertainty remains about the national fiscal impact of illegal aliens. Obtaining better data on the illegal alien population would help improve the national net cost estimates. Recognizing the difficulties inherent in collecting better data on a population with an incentive to keep its status hidden from government officials, any effort to collect better data should focus on those characteristics of the illegal alien population that are useful in estimating the largest net cost items, or many of them. These characteristics include the population’s size, geographic distribution, age distribution, income distribution, labor force participation rate, tax compliance rate, and extent of school participation. Clearer explanations of which costs and revenues are appropriate to include would also help improve the usefulness of the estimates. The appropriateness of including any particular item may depend on the policy questions addressed by a study. 
If studies were more explicit about the questions they address, their estimates of net costs would be easier to compare. The expert panel convened by the U.S. Commission on Immigration Reform could serve as a forum for discussing some of these data and conceptual issues. By exploring ways to provide lawmakers with better information on the public fiscal impact of illegal aliens, researchers could help provide a basis for the development of appropriate policy responses to address the problems of illegal immigration. We obtained comments on a draft of this report from the Urban Institute and Donald Huddle (see apps. V and VI). In their comments, the researchers restated their disagreements with each other on a number of topics, including the size of the illegal alien population, the appropriate treatment of costs for citizen children of illegal aliens and Social Security costs, and the magnitude of indirect costs such as those attributable to job displacement. The researchers also cited areas in which they maintained that our report did not sufficiently identify the problems with each other’s estimates. In addition, they provided technical comments that we incorporated where appropriate to better characterize the methodologies they used in their net cost estimates. The Urban Institute researchers agreed with much of the report’s analysis and its conclusions about the need for better data on the illegal alien population and sharper definitions of the accounting framework used. However, they took exception to two points in our report. They maintained that it is possible to test the reasonableness of the underlying assumptions used in the net cost estimates by developing estimates for reference groups and that their estimate of Social Security costs attributable to illegal aliens was reasonable. Huddle disagreed with several of the report’s findings.
He maintained that the report was too negative in claiming that the reasonableness of many of the assumptions in the net cost estimates is unknown. In elaborating this point, Huddle argued that the results of various surveys of illegal aliens’ use of public benefits are consistent with the utilization rates in his cost estimates. Huddle also asserted that our report’s criticisms of his Social Security and displacement cost estimates were unjustified. We believe that our report accurately describes the problems researchers face in developing estimates of the national fiscal impact of the illegal alien population. With regard to the reasonableness of the assumptions in the net cost estimates, we agree with Urban Institute researchers that developing cost and revenue estimates for reference groups can provide a “reality check” on estimates for illegal aliens, as well as a useful context for assessing the net cost estimates. However, the use of reference groups provides only a limited test and does not ensure that the estimates for a particular immigrant group are reasonable. We find Huddle’s claim that the assumptions in his estimates are consistent with the results of survey studies problematic for several reasons. The utilization rates reported by these studies vary considerably, the reliability of some of the studies has been questioned, and the extent to which the findings of these studies can be generalized to the illegal alien population nationwide is unclear. On the issue of Social Security costs for illegal aliens, we continue to believe that data limitations preclude the development of a reasonable estimate. To support their estimate that these costs are zero, the Urban Institute researchers cited some reasons why illegal aliens are not likely to be receiving Social Security benefits. Huddle, on the other hand, criticized the Urban Institute’s estimate by citing several reasons for believing that illegal aliens are receiving benefits.
Given the researchers’ disagreement and the lack of national data on the number of illegal aliens receiving benefits, we have no basis for supporting either of these positions. Data limitations also lead us to question Huddle’s estimate of Social Security costs. For example, Huddle claimed that at least 75 percent of illegal aliens in the work force have valid Social Security numbers, but he did not provide sufficient evidence to support this claim. Moreover, data are not available to assess his claim. Finally, with regard to the magnitude of displacement costs, we continue to believe that Huddle’s estimate overstates these costs because it is based on a displacement rate that is inconsistent with research findings on job displacement. (See pp. 32-33 for a more detailed discussion of Huddle’s comments and our responses on this issue.) The comments from the Urban Institute and Huddle reinforce our assessment of how difficult it is to develop estimates of the national fiscal impact of illegal aliens, given the limited data available. As noted in this report, obtaining better data on some of the key characteristics of the illegal alien population could help narrow the gap between the researchers’ widely varying estimates of the national net cost. Moreover, clearer explanations of the approaches used would make the net cost estimates more useful. Our work was conducted in accordance with generally accepted government auditing standards. If you or your staff have any questions concerning this report, please call me on (202) 512-7215. Other GAO contacts and staff acknowledgments are listed in appendix VII.
Huddle (1994) Federal, state, and local ($19 billion) Passel and Clark (Urban Institute) (1994) Federal, state, and local ($2 billion) Huddle (1993) Federal, state, and local ($12 billion) Huddle (1994) Federal, state, and local ($913 million) Huddle (1994) Federal, state, and local ($1 billion) Huddle (1993) Federal, state, and local ($5 billion) Parker and Rea (1993) San Diego County, fiscal year 1992-93 ($244 million) Parker and Rea (1992) San Diego County, fiscal year 1991-92 ($146 million) Texas Governor’s Office of Immigration and Refugee Affairs (1993) ($130-$166 million) Romero and others (1994) California, fiscal year 1994-95 ($2.7 billion) Los Angeles County Board of Supervisors (1992) Los Angeles County, fiscal year 1991-92 ($272 million) Los Angeles County Chief Administrative Office (1991) Los Angeles County, fiscal year 1990-91 ($276 million) Lyndon B. Johnson School of Public Affairs (1984) Six Texas cities, fiscal year 1982 ($4-$30 million) Huddle’s initial estimate (1992) Urban Institute’s estimate (1992) Huddle’s updated estimate (1993) Primary and secondary education (citizen children) School lunch (citizen children) English as a Second Language, English for Speakers of Other Languages, and bilingual education English as a Second Language, English for Speakers of Other Languages, and bilingual education (citizen children) Criminal justice (corrections) Earned Income Tax Credit and health care tax credit State and federal highway costs (continued) Huddle’s initial estimate (1992) Urban Institute’s estimate (1992) Huddle’s updated estimate (1993) Net costs (costs less revenues) Donald Huddle, The Costs of Immigration (Washington, D.C.: 1993), exhibits 5, 6, and 12. Jeffrey S. Passel and Rebecca L. Clark, How Much Do Immigrants Really Cost? A Reappraisal of Huddle’s “The Cost of Immigrants” (Washington, D.C.: 1994), pp. 1-8, supplemented by data from Jeffrey Passel providing a breakdown of the cost estimates for individual items; and Jeffrey S. 
Passel, Immigrants and Taxes: A Reappraisal of Huddle’s “The Cost of Immigrants” (Washington, D.C.: 1994), table 7c. Donald Huddle, The Net National Costs of Immigration in 1993 (Washington, D.C.: 1994), exhibits 5, 6, and 12. The estimate does not include this item. In our view, Huddle’s estimate of $4.3 billion in displacement costs is based on a displacement rate that is too high. The estimate assumes that for every 100 low-skilled illegal alien workers, 25 U.S. citizens were displaced from their jobs in 1993. This assumption of a 25-percent displacement rate is inconsistent with research findings on job displacement. Huddle’s study cites his own work on job displacement to support the claim that the level of displacement is at least 25 percent. In several field surveys that focused on the labor market in the Houston metropolitan area, Huddle claimed to have found displacement rates that ranged from 23 to 53 percent in the 1980s. The figures that Huddle cited in his 1982-83, 1985, and 1989-90 “microstudies of job displacement” are based on the percentages of unemployed native workers he surveyed who were still unemployed after some period of time. However, these figures cannot be construed as measures of displacement by illegal aliens because the studies did not show that the unemployed natives lost their jobs to illegal aliens or were unable to find work because of the presence of illegal aliens in the Houston labor market. In effect, Huddle’s microstudies of job displacement assumed a correlation between the employment of illegal aliens and the unemployment of native workers that was unsupported by any evidence. In addition, even if the studies had accurately measured the level of job displacement in Houston in the 1980s, the phenomenon of job displacement is so sensitive to the locality where it is measured that the studies’ results for Texas cannot be generalized to the nation. 
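The sensitivity of the displacement cost estimate to the assumed rate can be shown with simple arithmetic. Because the estimate scales linearly with the rate (holding the implied cost per displaced worker fixed), even modest downward revisions of the 25-percent assumption shrink the $4.3 billion figure substantially. The sketch below is purely illustrative: only the $4.3 billion total and the 25-percent rate come from Huddle's estimate, and the per-point cost is backed out from those two numbers rather than taken from his study.

```python
# Illustrative arithmetic only: how a displacement-cost estimate scales
# with the assumed displacement rate. The $4.3 billion total and the
# 25-percent rate are Huddle's figures; the per-point cost is simply
# implied by them, not drawn from his data.
huddle_cost = 4.3e9   # Huddle's displacement cost estimate, in dollars
huddle_rate = 0.25    # his assumed displacement rate

# Cost per percentage point of assumed displacement (linear scaling):
cost_per_point = huddle_cost / (huddle_rate * 100)

# The same methodology applied at lower assumed rates:
for rate in (0.25, 0.10, 0.05):
    estimate = cost_per_point * rate * 100
    print(f"{rate:.0%} displacement rate -> ${estimate / 1e9:.2f} billion")
```

At a 5-percent rate, for example, the same methodology would yield roughly $0.86 billion rather than $4.3 billion.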
In his national net cost study, Huddle maintains that the 25-percent displacement rate is a conservative figure because an even higher displacement rate can be derived from a study by Altonji and Card. However, this contradicts the conclusion that the authors draw from their own research. Altonji and Card summarize the results of their study as indicating that immigrants have a small and potentially zero effect on the employment rates of natives. Furthermore, Huddle's interpretation of Altonji and Card's econometric results is based on an incorrect use of statistics. Huddle sums the coefficients from three separate regression equations, each with a different dependent variable. The work of other researchers does not support the claim of a 25-percent displacement rate. Our 1986 review of the literature on job displacement concluded that illegal aliens may displace native workers. However, it found that the available research was inconclusive because it was limited and suffered from important methodological weaknesses. In addition, the experts we consulted during our review agreed that while there is no consensus on what the average displacement rate might be, the literature on displacement does not support the claim of a rate as high as 25 percent. Recent studies using nationwide data have concluded that job displacement by aliens is either small in magnitude or nonexistent. The literature on job displacement that focuses specifically on illegal aliens has reached the same conclusion. In his comments on a draft of our report, Huddle maintained that our criticism of his displacement cost estimate was unjustified (see app. VI). Huddle made four main points about our discussion of displacement. First, he contended that we had misunderstood his definition of displacement and were not including other types of displaced workers, such as teenagers who could not find first-time jobs and workers who had to physically move in order to look for work. 
Second, Huddle maintained that the coefficients from the four different equations in the Altonji and Card study are additive. Third, Huddle claimed that we did not consider the effect of illegal immigrants on wage depression as well as job displacement. Finally, Huddle maintained that his interpretation of the literature on job displacement was valid and that other experts would agree with him. With respect to Huddle’s definition of displacement, we do not agree that it is valid to apply this broader definition in calculating the costs of the array of social service benefits he cites. Workers who have never entered the labor force cannot collect unemployment benefits, for example, and teenagers in particular are not likely to be individually eligible for the full range of welfare benefits. Workers who migrate elsewhere, that is, those who are physically displaced due to the presence of illegal aliens in the work force, may not necessarily be jobless or earning such a low wage in their new place of residence that they would be eligible for welfare benefits. Most importantly, there is no evidence of how many displaced workers remain permanently unemployed and, therefore, continue to collect welfare over a long period of time. In our view, ascribing full costs to this broader set of workers overstates the true cost of displacement. With respect to Huddle’s claim that the coefficients in table 7.7 of the Altonji and Card study are additive, we disagree. Adding the coefficients on the first equation, which measures the ratio of people in the labor force to the population as a whole, and the second equation, which measures the ratio of employed persons to the population as a whole, effectively double-counts all employed persons, because the second ratio is a subset of the first. 
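The double-counting problem follows from the definition of the labor force: the labor-force-to-population ratio is, by construction, the employment ratio plus the unemployment ratio, so an effect on the labor-force ratio already incorporates the effect on employment. The sketch below illustrates the point with invented marginal effects; the numbers are not Altonji and Card's actual coefficients.

```python
# Illustrative only: hypothetical marginal effects of a one-point rise in
# the immigrant population share on three ratios (these are NOT Altonji
# and Card's actual coefficients).
effect_on_employed_ratio = -0.02    # E/P: employed persons / population
effect_on_unemployed_ratio = -0.01  # U/P: unemployed persons / population

# By definition, the labor force is employed plus unemployed persons,
# so the effect on L/P is the sum of the effects on E/P and U/P:
effect_on_labor_force_ratio = (effect_on_employed_ratio
                               + effect_on_unemployed_ratio)

# Summing the labor-force and employment coefficients, as Huddle did,
# counts the employment effect a second time:
huddle_style_sum = effect_on_labor_force_ratio + effect_on_employed_ratio
print(huddle_style_sum)             # overstates the total effect
print(effect_on_labor_force_ratio)  # already captures both margins
```

With these hypothetical numbers, the summed coefficients imply an effect of -0.05 even though the labor-force coefficient of -0.03 already reflects the full impact on both margins.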
In addition, no other researcher we consulted, including one of the authors, interpreted the Altonji and Card study in the way that Huddle did, nor did they agree with Huddle’s methodology of adding coefficients from separate regression equations to get a measure of total labor displacement. With respect to Huddle’s claim that we overlooked the phenomenon of wage depression, we did not make an evaluation of the impact of illegal aliens on wage depression because that was outside the scope of the net cost studies we reviewed. These studies specified job displacement only, and it is our judgment that the evidence on job displacement is much weaker than the evidence on wage depression. Huddle’s claim that job displacement and wage depression are close substitutes in terms of their impact on the low-skill native work force and on the net cost of public services is not supported by any empirical evidence or reference to any relevant literature. Finally, with respect to our overall conclusion and our interpretation of the literature, we thoroughly reviewed the literature and consulted with recognized experts on immigration (see app. IV for a list of these persons). None of the experts we consulted believes that a displacement rate as high as 25 percent is supported by the research literature. George J. Borjas, Professor of Economics, University of California, San Diego. David Card, Professor of Economics, Princeton University. Richard Fry, Division of Immigration Policy and Research, Bureau of International Labor Affairs, U.S. Department of Labor, Washington, D.C. Briant Lindsay Lowell, Division of Immigration Policy and Research, Bureau of International Labor Affairs, U.S. Department of Labor, Washington, D.C. Demetrios Papademetriou, Carnegie Endowment for International Peace, Washington, D.C. Stephen J. Trejo, Associate Professor of Economics, University of California, Santa Barbara. 
Sidney Weintraub, Center for Strategic and International Studies, Washington, D.C.; Dean Rusk Chair in International Affairs, Lyndon B. Johnson School of Public Affairs, University of Texas, Austin. In addition to those named above, the following persons also made important contributions to this report: Deborah A. Moberly, Evaluator; Alicia Puente Cackley, Senior Economist; Steven R. Machlin, Senior Social Science Analyst; and William McNaught, Assistant Director, Office of the Chief Economist. Altonji, Joseph G., and David Card. “The Effects of Immigration on the Labor Market Outcomes of Less-skilled Natives.” Immigration, Trade and the Labor Market, John Abowd and Richard B. Freeman, eds. Chicago: University of Chicago Press, 1991. Bean, Frank D., and others. “Undocumented Migration to the United States: Perceptions and Evidence.” Population and Development Review, Vol. 13, No. 4 (1987), pp. 671-90. Carrying Capacity Network. A Critique of the Urban Institute’s Claims of Cost Free Immigration: Huddle Findings Confirmed. Washington, D.C.: 1994. Clark, Rebecca L. The Costs of Providing Public Assistance and Education to Immigrants, PRIP-UI-34. Washington, D.C.: The Urban Institute, 1994. Clark, Rebecca L., and others. Fiscal Impacts of Undocumented Aliens: Selected Estimates for Seven States. Washington, D.C.: The Urban Institute, 1994. Enchautegui, Maria E. “Effects of Immigration on Wages and Joblessness: Evidence from Thirty Demographic Groups.” Washington, D.C.: The Urban Institute, 1994. Fernandez, Edward W., and J. Gregory Robinson. “Illustrative Ranges of the Distribution of Undocumented Immigrants by State,” technical working paper no. 8. Washington, D.C.: U.S. Bureau of the Census, Population Division, 1994. Fix, Michael, and Jeffrey S. Passel. Immigration and Immigrants: Setting the Record Straight. Washington, D.C.: The Urban Institute, 1994. Greenwood, Michael J., and Gary L. Hunt. 
“Economic Effects of Immigrants on Native and Foreign-Born Workers: Complementarity, Substitutability, and Other Channels of Influence.” Washington D.C.: U.S. Department of Labor, Bureau of International Labor Affairs, Division of Immigration Policy and Research, 1991. Greenwood, Michael J., and John McDowell. “The Labor Market Consequences of U.S. Immigration: A Survey,” Working Paper 1, 1990. Washington, D.C.: U.S. Department of Labor, Bureau of International Labor Affairs, Division of Immigration Policy and Research. Huddle, Donald. The Net National Costs of Immigration Into the United States: Illegal Immigration Assessed. Washington, D.C.: Carrying Capacity Network, 1995. _____. The Net Costs of Immigration to Florida. Washington, D.C.: Carrying Capacity Network, 1994. _____. The Net National Costs of Immigration in 1993. Washington, D.C.: Carrying Capacity Network, 1994. _____. The Net Costs of Immigration to Texas. Washington, D.C.: Carrying Capacity Network, 1994. _____. The Costs of Immigration. Washington, D.C.: Carrying Capacity Network, 1993. _____. The Net Costs of Immigration to California. Washington, D.C.: Carrying Capacity Network, 1993. _____. “Immigration and Jobs: The Process of Displacement.” The NPG Forum (May 1992), pp. 1-5. Los Angeles County Chief Administrative Office. Updated Revenues and Costs Attributable to Undocumented Aliens. Los Angeles: 1991. Los Angeles County Internal Services Department. Impact of Undocumented Persons and Other Immigrants on Costs, Revenues and Services in Los Angeles County. Report prepared for Los Angeles County Board of Supervisors, Nov. 6, 1992. Lyndon B. Johnson School of Public Affairs. The Use of Public Services by Undocumented Aliens in Texas: A Study of State Costs and Revenues, Policy Research Report, No. 60. Austin, Texas: Lyndon B. Johnson School of Public Affairs, University of Texas, 1984. Parker, Richard A., and Louis M. Rea. 
Illegal Immigration in San Diego County: An Analysis of Costs and Revenues, report to the California State Senate Special Committee on Border Issues. San Diego: 1993. Passel, Jeffrey S. Immigrants and Taxes: A Reappraisal of Huddle’s “The Cost of Immigrants.” Washington, D.C.: The Urban Institute, 1994. Passel, Jeffrey S., and Rebecca L. Clark. How Much Do Immigrants Really Cost? A Reappraisal of Huddle’s “The Cost of Immigrants.” Washington, D.C.: The Urban Institute, 1994. Rea, Louis M., and Richard A. Parker. A Fiscal Impact Analysis of Undocumented Immigrants Residing in San Diego County, report by the Auditor General of California, C-126. Sacramento, California: 1992. Romero, Phillip J., and others. Shifting the Costs of a Failed Federal Policy: The Net Fiscal Impact of Illegal Immigrants in California. Sacramento, Calif.: California Governor’s Office of Planning and Research, and California Department of Finance, 1994. Taylor, Lowell J., and others. “Mexican Immigrants and the Wages and Unemployment Experience of Native Workers,” Policy Discussion Paper PRIP-UI-1, Program for Research on Immigration Policy. Washington, D.C.: The Urban Institute, 1988. Texas Governor’s Office of Immigration and Refugee Affairs. Estimated Costs for the Undocumented Population. Austin, Texas: 1993. U.S. Commission on Immigration Reform. U.S. Immigration Policy: Restoring Credibility. Washington, D.C.: U.S. Government Printing Office, 1994. U.S. General Accounting Office. Illegal Aliens: Assessing Estimates of Financial Burden on California (GAO/HEHS-95-22). Washington, D.C.: 1994. U.S. General Accounting Office. Benefits for Illegal Aliens: Some Program Costs Increasing, But Total Costs Unknown (GAO/T-HRD-93-33). Washington, D.C.: 1993. U.S. General Accounting Office. Illegal Aliens: Limited Research Suggests Illegal Aliens May Displace Native Workers (GAO/PEMD-86-9BR). Washington, D.C.: 1986. Vernez, Georges, and Kevin McCarthy. 
The Fiscal Costs of Immigration: Analytical and Policy Issues, DRU-958-1-IF, background paper presented at "The Public Costs of Immigration: Why Does It Matter?" Rand, Center for Research on Immigration Policy, Santa Monica, California, 1995. Warren, Robert. "Estimates of the Unauthorized Immigrant Population Residing in the United States, by Country of Origin and State of Residence: October 1992." Unpublished report, U.S. Immigration and Naturalization Service. Washington, D.C.: 1994. Winegarden, C.R., and Lay Boon Khor. "Undocumented Immigration and Unemployment of U.S. Youth and Minority Workers: Econometric Evidence." The Review of Economics and Statistics, Vol. 73, No. 1 (1991), pp. 105-112. Illegal Aliens: Assessing Estimates of Financial Burden on California (GAO/HEHS-95-22, Nov. 28, 1994). Benefits for Illegal Aliens: Some Program Costs Increasing, But Total Costs Unknown (GAO/T-HRD-93-33, Sept. 29, 1993). Illegal Aliens: Despite Data Limitations, Current Methods Provide Better Population Estimates (GAO/PEMD-93-25, Aug. 5, 1993). Trauma Care Reimbursement: Poor Understanding of Losses and Coverage for Undocumented Aliens (GAO/PEMD-93-1, Oct. 15, 1992). Undocumented Aliens: Estimating the Cost of Their Uncompensated Hospital Care (GAO/PEMD-87-24BR, Sept. 16, 1987). Illegal Aliens: Limited Research Suggests Illegal Aliens May Displace Native Workers (GAO/PEMD-86-98BR, Apr. 21, 1986). | Pursuant to a congressional request, GAO examined the costs of providing benefits and services to illegal aliens, focusing on: (1) current estimates of the national net costs of illegal aliens to all levels of government; (2) the variation in these estimates; and (3) areas in which the estimates could be improved. GAO found that: (1) illegal aliens in the United States generate more in costs than revenues to federal, state, and local governments combined; (2) estimates of the national net cost of illegal aliens vary greatly, ranging from $2 billion to $19 billion; (3) a great deal of uncertainty remains about the national fiscal impact of illegal aliens, because little data exists on illegal aliens' use of public services and tax payments; (4) displacement costs and revenue estimates account for much of the variation in the estimates of the national net costs of illegal aliens; (5) the estimates are difficult to assess because the studies do not always clearly explain the criteria used to determine which costs and revenues are appropriate to include in the estimates; and (6) the cost estimates could be improved by recognizing the difficulties inherent in collecting data on a hidden population, focusing on key characteristics of illegal aliens, and explaining more clearly which costs and revenues are appropriate to include in such estimates. |
Grants to state and local governments have historically been classified as either categorical grants or block grants. In terms of this historic classification, the typical categorical grant permits funds to be used only for specific, narrowly defined purposes and populations and includes administrative and reporting requirements that help to ensure both financial and programmatic accountability. These features, on the one hand, can make it easier for Congress to ascertain how funds have been used, and with what result. On the other hand, a grant system comprising numerous and overlapping specific programs, each with its own target populations and requirements, can create difficulties at the service delivery level. The combined coverage of related specific programs may be poorly matched to local needs, and differing eligibility and reporting requirements complicate program administration for service providers who receive funds from multiple grants. The block grant approach can avoid these disadvantages. In principle, block grants award funds to state or local governments, to be used at their discretion to support a range of activities aimed at achieving a broad national purpose. Consistent with their historic aim of devolving federal program responsibilities to, or supporting programs at, the state or local level, the block grants of the past (such as those of the 1980s) had limited administrative and reporting requirements. These features avoid many of the rigidities and burdens associated with multiple categorical grants. However, as our past reports have observed, these features also make it difficult for federal policymakers to ascertain how funds are being used and to verify that programs are achieving their intended purpose. In practice, the “categorical” and “block” grant labels and their underlying definitions represent the ends of a continuum and overlap considerably in its middle range. 
Some block grants have from their inception covered only a single major activity, and thus offer flexibility within a narrow range. The addition of constraints over the years has moved others toward the categorical end of the spectrum. Conversely, some initially categorical grants (such as Special Programs for Aging—Supportive Services and Senior Centers) have broadened and increased local flexibility over time and now look much like block grants. We use the term "flexible programs" to include all programs, however labeled, whose features put them in the block grant range. The Results Act embodies the current interest in holding federal agencies accountable for program performance. It requires each federal agency to develop a multiyear strategic plan that (1) states the agency's mission; (2) identifies long-term strategic goals for each major function or operation; (3) describes how the agency intends to achieve those goals; (4) shows how annual performance goals relate to strategic goals; (5) identifies key factors beyond the agency's control that could affect achievement of strategic goals; and (6) describes how program evaluations informed the plan and provides a schedule of future program evaluations. Agency strategic plans are the starting point for agencies to set annual goals for programs and to measure the performance of programs in achieving those goals. Program goals and performance measures covering each program activity set forth in the agency's budget are to be presented in annual performance plans. The first such plans, covering fiscal year 1999, were submitted to Congress in the spring of 1998. Each performance plan will be followed by a performance report that compares actual performance with the goals set forth in the plan, explains the reasons for any slippage where goals were not met, and, where a goal proved impractical or infeasible, explains why and recommends actions. 
Finally, the report is to include the summary findings of program evaluations completed during the fiscal year covered by the report. Reviewing the Results Act’s requirements in light of traditional block grant design, we identified several questions that would likely arise in applying the Act to flexible programs. How can the Act take account of the federal goal of supporting state or local efforts and objectives and the limited agency role that accompanies this goal in traditional block grant design? When design features limit the federal agency’s ability to collect information through grantee reports, what performance measures can broadly flexible programs reasonably be expected to provide under the Act, and by what means? How can programs that contribute to a variety of measurable goals—goals also served by other programs—be fit into the reporting structure? In addition, we foresaw potential difficulties in discussing the “results” of flexible programs. The Results Act emphasizes measuring results in terms of program outputs, service levels, or outcomes, as opposed to the resources (inputs) and processes required to meet performance goals. (These terms and their relation to one another are explained in greater detail under Scope and Methodology, below). At the same time, the Act defines “outcomes” in terms of a program’s intended purpose, whatever that may be. This purpose-based definition is a source of potential confusion over terminology. For example, the resources available to a program would ordinarily be considered inputs. But if the program’s purpose was to leverage resources available to an activity, an increase in inputs would be that program’s intended output, outcome, or result. The potential for confusion increases when programs at more than one level of government are involved—for example, when a federal program supports state programs that, in turn, deliver services to clients. 
Although federal funds ultimately result in client outcomes, the federal program may focus on an intermediate outcome, such as increasing the quantity of state services or the number of clients served. Studies of the early implementation of the Act suggest that programs that do not deliver a readily measurable product or service are likely to have difficulty meeting Results Act performance measurement and reporting requirements. Intergovernmental grant programs—and particularly those with the flexibility inherent in classic block grant design—may be particularly likely to have difficulty producing performance measures at the national level and raise delicate issues of accountability. We set out to examine these potential difficulties and how they might be addressed. Drawing on the background materials summarized above, we defined a flexible grant program as one that offers state and local governments flexibility to define and implement a federal grant program in light of local needs and conditions. To identify flexible programs, we reviewed studies by GAO, the Congressional Research Service, and others on the block grants of the 1980s and program descriptions in the Catalogue of Federal Domestic Assistance and privately published grant catalogues. After creating a list of programs that appeared to offer flexibility, we eliminated programs that were narrow in scope, subject to detailed regulation, or relatively small in federal dollar terms (less than $100 million). Programs such as Temporary Assistance for Needy Families (TANF) that were too new to have produced performance reports or evaluation data were also eliminated from consideration. This winnowed the list to 21 programs, administered by 12 agencies located in 6 cabinet departments, as listed in table 1. A summary of each program is included in appendix I. 
These 21 programs were listed in the Appendix to the Budget for FY 1999 as follows: 3 grant programs (LIHEAP, Child Care and Development, and Social Services) each constituted a budget account; 11 were listed individually as a program activity within a budget account; 1 (Aging—Nutrition) was divided into two program activities (congregate meals and home-delivered meals); and the remaining 6 grant programs (2 SAMHSA grants, 2 CDBG grants, JTPA, and Child Welfare) were not listed as separate program activities. Our identification of performance-related program objectives and measures was guided by Office of Management and Budget (OMB) documents prepared to assist agencies in meeting the performance measurement requirements of the Results Act. OMB identified five aspects of performance, each representing a major step in the process of converting program resources into program results. These are inputs: the resources (dollars, staff, technology, capital) the manager has available to carry out the program or activity; activities: the actions through which program purposes are carried out (OMB uses the term "service delivery," but we prefer "activities" because not all programs deliver services and because "allowable activities" listed in grant statutes are typically the basis for reporting); outputs: goods, products, or services produced (amount, quality, quantity or other attributes, cost); outcomes: the results of a program (e.g., client benefits or program consequences) compared with its intended purpose; and impact or net impact: direct or indirect effects or consequences; outcomes that would not have occurred in the absence of the program. How these aspects of performance relate to each other in the typical service program is depicted in figure 1. As the lower part of the figure indicates, performance can be measured in terms of several underlying dimensions or criteria, such as quantity, quality, cost, or client reach (coverage of the targeted population). 
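OMB's five aspects of performance can be pictured as an ordered chain running from resources to results. The sketch below is our own schematic rendering of that chain; the example measures are invented for a hypothetical job-training program and do not come from OMB guidance.

```python
# Schematic rendering of OMB's five aspects of performance, ordered as a
# chain from resources to results. The example measures are invented for
# a hypothetical job-training program (illustrative only).
performance_chain = [
    ("inputs", "dollars, staff, technology, capital",
     "appropriated funds; number of counselors on staff"),
    ("activities", "actions through which program purposes are carried out",
     "classroom training sessions held"),
    ("outputs", "goods, products, or services produced",
     "number of trainees completing a course"),
    ("outcomes", "results compared with the program's intended purpose",
     "trainees placed in jobs; wages at placement"),
    ("impact", "outcomes that would not have occurred absent the program",
     "placements beyond what would have happened anyway"),
]

for stage, definition, example in performance_chain:
    print(f"{stage:>10}: {definition} (e.g., {example})")
```

The ordering matters: each stage is measured against the one before it, which is why a given unit (dollars, say) can be an input at one stage and an output at another.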
As mentioned previously, the meaning of any given measures is contingent on a program’s purpose. For example, if a program’s purpose is to leverage resources, its output would be measured in terms of dollars or other resources—units that are ordinarily considered inputs and that may indeed be inputs for a related program or activity. To avoid linguistic confusion, we base our terminology on what is being measured at the operating or service delivery program level. For example, we consistently call dollars to support service delivery an “input.” If such dollars function as output from the federal program perspective, we make this clear. We consulted the authorizing statute, regulations (if any), and other official guidance for each program. We obtained copies of reporting forms; examples of completed grantee and federal agency reports; information on databases utilized by the program; a report from each mandated national evaluation; and copies of other program evaluations, research or demonstration studies, or effective practice documents referenced in program literature. We also spoke with agency staff concerned with program management, evaluation, and performance measurement. Information from these various sources was converted to numeric codes accompanied by a text summary of design features (including flexibility and accountability) and of performance information, by source, for each program. We conducted our review from January through November 1997 in accordance with generally accepted government auditing standards. Our review focused solely on the federal level. We did not consult state or local officials of the programs we studied. Although we asked agency officials about, and noted the existence of, substantial variation across states in program implementation, we did not pursue these differences in any detail. 
It is also important to note that our analysis of program objectives and measures reflects conditions predating the submission of performance plans under the Results Act. Officials for several of the programs were in the process of rethinking objectives and measures in light of Results Act requirements but had not framed specific plans, and some programs were approaching reauthorizations that might result in major changes. Finally, in noting the strengths and weaknesses of information sources, we relied on comments by reviewers who had examined those sources and on our knowledge of such sources in general. We did not conduct independent evaluations of the data. We asked agency staff to review the program summaries prepared for the draft of this report for accuracy and completeness and incorporated corrections into the summaries as appropriate. Our review of flexible grant program features revealed that these programs differ substantially from one another. We found variation in the level of government to which key program decisions were delegated, management flexibility and constraints with respect to grant-funded activities, funding and related constraints, and availability of performance information. (A summary of each program, organized in terms of these characteristics and other key features, is included in app. I.) Flexibility varies in terms of who gets it (states, local units, or both) and the types of decisions covered and applicable constraints. Each of the programs we studied offered flexibility in at least one decision area of key importance for performance and accountability, and some offered flexibility in several areas. The performance-related decision areas we considered were distribution of funds to subrecipients: What entities will receive funds to carry out activities, and in what amounts? choice of activities: What allowable activities will funds support? allocation of grant funds across activities: How much will be spent on each? 
Although funds went first of all to the state in 19 of our 21 programs, 9 of them required that the bulk of state grant funds be further distributed to specified local entities. In some, like the two Special Programs for the Aging, the local entities operated under the umbrella of state planning and supervision. In others, such as the two education grants, activity selection and resource allocation decisions were lodged at the local level and the state was given a minimal supervisory role. Only two of our programs—Community Services and CDBG Entitlement—awarded funds directly to regional or local units of government. As is typical of federal grants, each of our programs listed allowable activities—that is, activities for which grant funds could be used. The activities listed were of a broad, general nature for some programs and quite specific in others, and in a few cases even included transferring funds to another grant program. State or local choice with respect to these activities, however, was subject to a variety of constraints. Some of the constraints we found placed on choice of activities included allowing only one major activity or group of related activities (e.g., Job Training Partnership Act programs); allowing only specified activities for which approaches of proven effectiveness were available, with exceptions permitted only when supported by data and analysis (e.g., State and Community Highway Safety); requiring one particular activity (e.g., specific activities to reduce access to tobacco products by persons under the age of 18 under the Substance Abuse Prevention and Treatment Grant, or laws requiring sex offenders to be tested for human immunodeficiency virus (HIV) if the victim requests it under the Byrne Formula grants) while allowing choice among other allowable activities; and requiring that each allowable activity be undertaken somewhere in the state (e.g., Community Services Block Grant). 
Discretion over funds allocation for many programs was constrained by caps (limits on the percentage of funds that could be spent on a given activity) or set-asides (required minimum percentages to be used for a specific activity). A number of programs also included fiscal provisions that constrained states’ use of their own funds—for example, by requiring that states “match” federal dollars with state dollars, maintain former levels of state spending, or use federal funds to supplement rather than replace or supplant state funds. Table 2 summarizes conditions of limited, moderate, and broad state flexibility in each of the three decision areas, reflecting the constraints discussed thus far. Among the programs we reviewed, only two (Social Services and Preventive Health) granted states broad flexibility on all three dimensions. The title VI Innovative Education Program delegated similarly broad flexibility over activities and resource allocation to the local level. Seven additional programs had at least moderate flexibility in all three areas, and no program was limited in all three decision areas. Combinations of flexibility and constraint took many different forms. For example, the Maternal and Child Health Services Program allowed broad flexibility with respect to subrecipients and activities, but included set-asides that directed the majority of funds to children’s services. The delegations of decision power that define each level of government’s role in managing program-funded activities also identify the aspects of program performance for which each can be held accountable. As we have seen, flexible programs in our study lodged decision power (and thus accountability) at the state and local levels to varying degrees and with varying constraints.
We investigated whether the lines of delegation downward were accompanied by provisions (such as standards or reporting requirements) that established accountability for performance to the federal funding agency, and we found a mixed picture. We first looked for the inclusion of objectives, standards, and criteria for performance in program provisions. Fifteen of the 21 programs incorporated national operational standards, objectives, or criteria concerning some aspect or dimension of performance. Such objectives focused most commonly on activities (e.g., the requirement that the National Highway System meet federal approved design standards). However, six programs included service output objectives or standards (such as job retention standards under JTPA), and nine included outcome objectives (e.g., energy savings from home weatherization activities under LIHEAP). Thirteen programs, including those in the health areas, incorporated reference to state standards or required states to set objectives. Four included no reference to standards or objectives at any level of government. Finally, we examined data collection and reporting provisions, which establish who must report what to whom. Among our programs, four lacked authority to collect uniform data on performance from grantees. Eight did not require an agency performance report to Congress on the program, and two did not require state or local program or performance reports to the funding agency. Fifteen programs, including two that awarded decision powers to local entities, required state, but not local, reports. As past studies and our findings on data collection and reporting suggested might be the case, we found that among our programs, the program-specific performance information collected through program operations was limited. All but one, Child Welfare Services, collected data on some aspect of performance. 
However, about one-third reported only aggregate client counts and dollars spent on each allowable activity. Fourteen programs had service output data, and of these, only five obtained outcome data from program operations. In addition to varying in the ways just described, the programs in our study differed greatly from each other in terms of a few key design features—national objectives, nature of operations, and diversity of activities—each representing an important policy choice. We found that these features, singly and in combination, defined the flexibility given to grantees, accountability for performance, and likely availability of performance information. Our first key feature concerns the nature of the national objectives to be served through the federal grant program. We are not speaking here of such broad, ultimate national purposes as decreasing poverty, but rather of the more immediate, direct, and concrete objectives to be attained through the provision of grant funds. Grant programs’ objectives can be characterized as either performance-related or fiscal. Performance-related objectives focus on service or production activities and their results. In our study, we found objectives representing many aspects of performance measurable under the Results Act, including leveraging resources (input), improving service quality (activity), increasing coverage of targeted populations (client reach), and achieving specified service outputs or outcomes. For example, the central objective of the grants for Special Programs for the Aging—Nutrition Services is to provide nutritious meals (activity) to needy older Americans (client reach) so as to improve nutrition and reduce social isolation (outcomes). Fiscal or financial assistance objectives focus on providing dollars to support or expand activities. Typical fiscal objectives include increasing support for meritorious goods or underfunded services and targeting grant funding to needy jurisdictions. 
For example, the objective of the Title VI Innovative Education grants is to provide funds to support local educational reform efforts. In performance measurement terms, fiscal objectives translate into an emphasis on increasing inputs so as to increase the quantity of activities or outputs in general or to targeted clients or areas. The presence of performance objectives and provisions that implement them constrains flexibility, provides the basis for performance measurement and accountability, and signals a federal role in managing performance under the grant. When objectives are purely fiscal, accountability to the federal agency focuses on fiscal matters. For example, if the national objective is to encourage states to provide more of a nationally important service (like substance abuse prevention and treatment), states may be held accountable for using grant funds to supplement rather than to supplant their own spending on that service. A second critical feature concerns whether national objectives are to be achieved through a grant-specific operating program or simply through adding to the stream of funds supporting ongoing state or local programs. An operating program is a program in the commonsense meaning of the term. It has performance requirements and objectives and carries out distinct programwide functions through a distinct delivery system in such a way that grant-funded activities, clients, or products are clearly identifiable. Several of the programs we studied, such as the Aging—Nutrition Services program, were of this nature. Grants in our study that operated as a funding stream were not federal “programs” in this sense. Here, the federal agency provided funds that were merged with funds from state or local sources (and sometimes from other federal sources as well) to support state or local activities allowable under the flexible grant. The grant was one funding source among many, and the programs supported were state or local programs. 
For example, the Child Welfare Program supports state foster care, child care, child protection, and adoption and related services, the bulk of whose funding comes from other federal and nonfederal sources. Like performance objectives, we found that operation as a national program gave the federal agency a role in managing performance under the grant. Operation as a program also simplified the task of getting uniform information about performance attributable to grant funds. It made it possible to identify which activities were supported, the amount of federal funds allocated to each, and to various extents, the results of federal support. By contrast, we observed that in programs that operate as a funding stream, the activities supported were managed at the state or local level. In the words of agency staff, quoting state officials, “These aren’t federal programs, they are state programs that receive federal funds.” The federal agency’s role was limited accordingly, and it sometimes involved little more than seeing that applications for funding were properly submitted, compliance or audit issues were resolved, and money was disbursed in a timely fashion. Where grant-funded activities were managed at the local level, as in the two education programs we studied, title VI Innovative Education and Safe and Drug-Free Schools and Communities, the state’s role was similarly limited. Operation as a funding stream complicates the task of getting uniform, program-specific information. We found that when grant funds were part of a stream, it was possible to identify which activities federal funds supported and the amount allocated to each. But once added to the overall budget for a state or local activity, federal dollars lost their identity, and their results could not be separated out—particularly when the federal share was small. 
Thus, the only program outcome measures available were likely to be for the state or local service delivery program, not the federal funding program. The third key feature concerns diversity of activities. Having only one major activity, as in the Aging—Nutrition program, narrowed the scope of flexibility but eased the task of measuring and holding grantees accountable for performance. Finding a common metric for performance was rarely feasible for programs that funded activities that had little in common with each other from state to state. We found that these features tend to occur in four major combinations that have important implications for flexibility, accountability, and performance information. Examining how the design features were used in the 21 diverse programs we studied, we identified four major combinations or design types. We have summarized them in table 3, which shows design features, examples, and summary comments associated with each type. As the last column indicates, state or local flexibility and control over performance objectives and performance management increase as you move down the table. Grants of our first type pursue performance-related objectives through a distinct operating structure (top row). Grants in our study that exemplified this type were closest to the conventional notion of a “program.” They focused on a single major activity and included programwide performance objectives and, sometimes, service outcome objectives. Because of this, the agencies that administered grants in this group were able (with proper authorization) to collect nationally uniform information about performance from grantees. For example, the national objectives of the Job Training Partnership Act are to provide job training that leads to increases in employment and earnings of youths and adults facing serious barriers to participation in the work force. 
To evaluate the results of the program in achieving these objectives, the terms of the grant require recipient organizations to provide counts of activities provided, demographic characteristics of individuals served, employment outcomes, and program costs. Our second grant design type covers performance-related, funding stream grants (second row), which involve national performance objectives yet operate through state or local programs. Most programs of this type in our study covered a state or local function or delivery system (such as preventive health) involving various activities. National performance objectives typically concerned system improvement or capacity-building, ensuring access to services, service quality, and targeting of activities to priority populations. Several grants in this group require state or local grantees to set their own performance objectives of various kinds. Provisions of the Preventive Health and Health Services Block Grant, for example, require each state to fund activities related to Healthy People 2000 objectives and to measure and report the progress of the state in meeting the objectives selected. About half of the programs in this group provided information on program outputs. Our third type includes grants with fiscal objectives (third row) that provide support for program-like—rather than ongoing—state or local activities. These activities often take the form of projects—similar to operating programs in having clear boundaries, but with a clear start and finish as well. Grant provisions for some of our programs in this group included national criteria for selecting activities, such as the benefits test that applies to projects supported by Community Development Block Grants—Entitlement. Otherwise, performance objectives and measures were set at the operating level. 
Under the Byrne Formula (Drug Control and System Improvement) Grant Program, for example, states are required to set performance objectives for activities that are funded and to evaluate the success of these activities in achieving those objectives. Our fourth type concerns fiscal funding stream grants (bottom row). They allow a broad range of activities and represent the classic block grant design of the early 1980s. Consistent with their purpose, grants of this design in our study required only the information needed to determine how much was spent on each activity and to verify that funds were used for allowable purposes and that any requirements related to fiscal objectives (such as maintenance of effort) were met. Some of these programs made an effort to get service output information (such as client counts), but even this could be difficult. For example, where actual counts of recipients served are not available, the Social Services Block Grant program accepts counts based on estimation procedures that may vary in their statistical validity. These four design types present very different situations with respect to grantee accountability—what grantees are held accountable for and the level of government that is accountable for performance—and the information needed to support it. They also differ with respect to the information needed to support program decisions at the national level and prospects for getting this information through grantee reporting, as opposed to other means. As our previous report has noted, accountability is an elusive concept whose meaning depends on the context. At a minimum, all state grantees are accountable to the federal level for financial management and for using funds to support allowable activities, as verified through annual audits. Beyond that, the accountability of grant recipients to the federal level varies from grant to grant. 
We observed that the variation reflected the type of objective, and if performance objectives were involved, whether the federal level managed the program or merely added to the stream of funds supporting state or local programs. We describe the situation for each type of grant below, with a focus on performance issues. Accountability for performance to the federal level was most extensive in grants we studied that included national performance objectives and operated as distinct programs—grants with the most limited flexibility. As mentioned previously, programs of this type collected and reported information in line with their performance objectives, which were concerned with program implementation, outputs, or (when possible to measure) direct outcomes of services. (End outcomes are another matter, which we discuss in the next section.) Objectives, information, and reporting were similarly lined up in programs we studied that had primarily fiscal objectives and operated as funding streams. But here, accountability focused on fiscal matters. The funding agency was accountable for ensuring compliance with fiscal objectives. However, the activities funded were under state or local direction, and accountability for the conduct and outcomes of funded activities was to state or local authorities under whatever arrangements they had put in place. Federal reporting requirements were minimal, and performance information did not necessarily flow to the federal funding agency. The grants that combine federal performance objectives with operation through state or local programs present puzzling performance measurement and accountability issues, particularly for service outcome objectives. Activities supported with federal funds and the information collected about performance generally differed from state to state. (This difficulty affected fiscal-objective operating programs as well.) 
While state or local program outcomes in total were measurable for some programs, the component attributable to federal funding could not be separated out. Thus, measuring performance at the level of the federal program through grantee reporting was not feasible. For accountability purposes, measuring overall performance of the state or local program would not necessarily be appropriate, particularly when the federal grant contributes only a small fraction of the cost. However, state program data or even statewide indicators were sometimes adopted as performance measures, as in the Preventive Health program. Assuming that operation through state or local programs is feasible, how can national grant programs encourage the achievement of national performance objectives and encourage accountability for performance, yet respect state and local authority, interests, and differences? We found several approaches to this dilemma among our programs. Some approaches sought to strengthen accountability to the state or local agency that received federal funds. (They were designed to mitigate the risk that existing state or local oversight and management arrangements might be insufficient to ensure strong performance.) For example, the Child Care and Development Block Grant, which has a national objective of increasing service quality, directs states or localities to set service delivery or quality standards and monitor whether their own standards are being met. States and localities are then accountable to the federal agency for implementing these provisions. The Department of Education has been experimenting with a different approach. The Department grants temporary exemptions (waivers) from certain federal program requirements to states or school districts that demonstrate that the waiver will lead to educational improvements. 
These waivers are intended as a tool to expand the flexibility available to local school districts in exchange for increased accountability for student achievement. The results of this experiment are not yet in. One final example of an approach to serving national objectives through state or local activities relies on the techniques embodied in the Government Performance and Results Act—that is, requiring states or localities to set performance objectives for the activities or projects they choose to support with federal funds and to report to the federal funding agency on progress toward meeting those objectives. Provisions of the Safe and Drug-Free Schools and Communities Act, for example, require states and local education agencies to establish drug use and violence prevention objectives, report the outcomes of state and local programs, and assess their effectiveness in meeting the objectives. Under this “results” approach, accountability for performance remains at the level of the state or local agency doing the reporting, not the federal or state agency to whom the report is directed. The federal or state agency receives the information but does not use it for program management. This information, however, can be useful in assessing the degree to which national objectives for the program are being met, a subject to which we now turn.

To make decisions about the programs they oversee, congressional committees are likely to need evaluative information—information that tells them whether, and in what important respects, a program is working well or poorly, as well as whether performance objectives are being met. As we noted previously, performance data collected from grantees can be an important source of information. Uniform data from program operations have the advantage of being program specific.
However, collecting reliable uniform data at the national program level requires conditions—such as uniformity of activities, objectives, and measures—that are unlikely to exist under many flexible grant program designs. Even where overall performance can be measured, the amount attributable to federal funding often cannot be separated out. Additionally, some programs have ultimate outcome goals, such as increasing highway safety, which are measurable only through aggregate data. Finally, the time frame over which performance data are collected, typically 1 year, may be inadequate to capture long-term outcomes. More importantly, performance data from program operations cannot answer the full range of questions that are likely to arise during congressional oversight. We have found that Congress is also likely to need

- descriptive information that goes beyond the general summary level to convey a sense of the variety of conditions under which the program operates and how federal funds are actually being used—for flexible grants, information that shows how grant funds fit into the context of other programs is of particular interest;
- information about program implementation, including whether feasibility or management problems are evident and whether the methods used to deliver services are of known or likely effectiveness;
- information concerning positive or negative side effects of the program; and
- information that will help determine whether this program’s strategy is more effective in relation to its cost than others that serve the same purpose.

Some of this information is likely to be available from federal agency staff, particularly if the agency plays an active oversight or technical assistance role. But much of it comes from other sources, including program evaluations, research and demonstration studies, and aggregate data. We found that agencies made use of these sources, both singly and in combination.
Program evaluations are defined as individual, systematic studies conducted periodically or on an ad hoc basis to assess how well a program is working. Evaluations can address the extent to which program activities conform to requirements, how successfully a program meets its objectives, or the net effect it has on participants. Other types of evaluations can address program outcomes or impacts in comparison to the cost of producing them. Typically, evaluations gather performance information from a sample of sites under controlled conditions and are conducted by experts outside the program. Eight of the programs we studied have been evaluated on a national basis. Evaluations were done for programs of every type and purpose and focused on a variety of questions, as these examples illustrate.

A 1994 evaluation measured the impact of the JTPA titles II-A and II-C programs by comparing program outcomes with estimates of what would have happened in the absence of the program. The study found that access to JTPA produced gains in earnings for adult men and women but did not significantly increase youths’ earnings or decrease their welfare benefits. The authors of the study concluded that youths might need more intensive services than adults or services of a different type.

Using information from interviews, on-site reviews, and nutritional analysis of meals provided, a 1993-95 evaluation of the Aging—Nutrition program demonstrated that it had succeeded in targeting elderly persons who were at risk for poor nutrition and that participants had higher daily intakes of nutrients and more social contacts per month than a comparable group of nonparticipants.
A study of a sample of district-level Safe and Drug-Free Schools programs in the early 1990s found that while some school-based drug prevention programs had small positive effects on student outcomes, implementation was characterized by variability in the services actually delivered, limited funds, competing demands on staff time, and the use of approaches that have not shown evidence of effectiveness. A 1994 evaluation of the CDBG—Entitlement Program examined data from 96 communities and concluded that they had the capability to implement the program effectively and were making beneficial use of the flexibility it afforded, as Congress intended. In addition to conducting programwide evaluations, nine agencies evaluated particular aspects of their programs, such as the injury prevention component of the Child Care and Development Block Grant. National program evaluations have the potential to answer questions about program performance in depth and provide an overall assessment of how effectively and efficiently a program operates in terms of its implementation, outcomes, impacts, and cost-effectiveness. However, national programwide evaluations are expensive in terms of dollars and time and frequently require capacities and resources beyond those provided for program management. Also, programwide evaluation data are typically periodic and often cover too few sites to support national estimates of performance. Although many programs encourage state and local evaluations, only one program we examined mandated programwide state-level evaluations, and only three mandated programwide local evaluations. Although these evaluations are potentially useful for state and local program managers and providers, we found they were limited in their ability to provide information on program performance on a national level. Reviews of state and local evaluations under the programs we studied indicated that such evaluations varied widely in scope and sophistication. 
In many cases, resources and capacities for conducting formal evaluations were limited. Programs tended to find these evaluations more helpful in identifying successful practices than in providing information about overall program effectiveness. Also, differences in evaluation questions and methodologies made it difficult to aggregate results to provide a national picture or to systematically compare the effectiveness of alternative projects aimed at the same objective. Information on the effectiveness of service delivery methods comes largely from research and demonstration studies. Knowledge to support effective practice is well established in some of the subject areas covered in our sample of grants and was incorporated into program provisions (such as service standards) or in companion technical assistance or knowledge dissemination programs. Information based on research can be used very effectively by programs when links between activities and outcomes are known. Among our programs, those related to the physical and biological science areas, such as health and transportation, had the most direct links to research and demonstration studies. For example, the Federal Highway Administration has approved standards and guidelines for construction projects to help build in safety and efficiency for the projects it helps support and funds activities to increase the knowledge base in areas related to transportation safety and efficiency. The Maternal and Child Health Program makes extensive use of research for all aspects of program operation, including training requirements for providers and the nature and extent of activities provided. Programs in the human services areas included in our study were less directly tied to research findings. Aggregate measures are survey or record-based data that describe the general status of a population or the availability of a product or service. 
Some of these data used by programs in our study, such as state vital statistics records, were developed independently but have proven to be useful indicators for related programs. Others, such as those developed by DOT, were developed expressly to serve as outcome indicators for federal programs. About half of the programs we examined (10 of 21) used aggregate data for purposes other than formula allocation. Programs in health and transportation, with objectives that address building or strengthening an entire service delivery system, have particularly drawn on such data. To assess state progress toward meeting the Healthy People 2000 health goals, for example, the Preventive Health program uses state-level data from a wide variety of federal and state reporting systems, including national health, transportation, and education surveys, and state records, such as cancer registries and vital statistics. DOT makes extensive use of aggregate data, including federal data from the Bureau of the Census, Bureau of Labor Statistics, and Environmental Protection Agency, as well as data from private organizations, such as the American Automobile Manufacturers Association’s Motor Vehicle Facts and Figures and the Eno Transportation Foundation’s Transportation in America. Aggregate measures of social, environmental, educational, or health outcomes can be useful in assessing the combined results of related programs whose individual impact cannot be readily disaggregated. Additionally, they allow uniform and independent comparisons over time and place little or no burden on service providers and resources. However, data collected by these measures have the disadvantage of not being program specific, and their connection to any particular program may be difficult or impossible to determine. In addition, programs that provide a relatively small contribution to overall resources in an area, no matter how well they operate, are likely to have very little effect on aggregate results. 
Thirteen programs used information from other sources along with, or as a substitute for, performance measures collected through program operations. The programs using these multiple sources had information that covered more aspects of program performance than programs that relied upon a single source. Data from different sources complemented each other in interesting ways. For example: DOT draws on data from a large array of sources to assess the state of the transportation system and the comprehensive results of its programs. For example, data from the Fatal Accident Report System, compiled by DOT from multiple sources, including state police accident reports, vehicle registration files, and emergency medical reports, are used to monitor DOT’s progress in meeting the national safety goals of its highway programs, including the Surface Transportation Block Grant and the State and Community Highway Safety Program. Data from HHS’ Health, United States, the National Safety Council’s Accident Facts, and the European Council of Ministers of Transit’s Statistical Report on Road Accidents are used with DOT data to measure trends and to compare accident severity in the United States with that in other countries. DOT uses findings from engineering research to approve design standards and to provide safety guidelines for construction and rehabilitation projects. Findings from human resource research are disseminated to encourage states and communities to fund education and prevention programs that have been successful. The Child Care and Development Block Grant has used information from a variety of sources to augment program data. For example, data from the Bureau of Census’ Survey of Income and Program Participation, including statistics on child care arrangements, population coverage, and costs, have been used to address the availability and affordability of child care resources. 
Findings from research and their practical applications for state-level child care policymakers are disseminated through symposiums to improve the quality of child care. Energy assistance questions of direct relevance for LIHEAP have been included in two national surveys, the Bureau of the Census’ Current Population Survey and the Department of Energy’s Residential Energy Consumption Survey. Program officials use these data to determine the characteristics of families participating in the LIHEAP program and to compare the energy consumption and expenditure patterns of all households, non-low-income households, low-income households, and LIHEAP recipient households. HHS’ Administration on Aging drew information on performance in the Aging—Nutrition program from a program implementation evaluation conducted by AOA and the Office of Inspector General that examined how well nutrition and client targeting objectives were being addressed; from compliance reviews conducted by regional office administrators that examined how states assess Area Agencies on Aging and service providers; from a major review of the research literature on nutrition and the elderly; and from the congressionally mandated national evaluation. AOA also developed a new, congressionally mandated database and standard reporting system that was designed to support an outcome orientation and develop definitions and reporting practices that could be used across an array of federal programs. Using data from different sources for these purposes can involve technical difficulties. Definitions and data collection conventions may vary from one source to another. Additionally, data are likely to have been collected at different points in time. Such differences must be taken into account when data from diverse sources are used together, or results might be misleading. 
We found that all of the information sources we described were more likely to be available when backed by statutory authorization and budget resources than when they were not. As we observed in our earlier study, Congress is more likely to get the information it asks for and pays for. Our study was prompted by interest in determining how existing flexible programs obtain information about performance as envisioned under the Results Act and what guidance we might offer with respect to (1) the treatment of such programs under the Results Act and (2) the design of future flexible programs—or redesign of existing programs—to help ensure that adequate information about performance is available. In summarizing the Results Act’s requirements, we noted three aspects of the Act that seemed of particular importance for flexible programs. They are its emphasis on (1) defining results in terms of program purpose, (2) aggregating activities sensibly for planning and reporting, and (3) employing alternative sources of information where performance was difficult to measure through program operations. We offer concluding observations on each of these points. In applying the Results Act, it is important to clarify whether federal objectives for a flexible grant program extend only to the initial stages of performance—enhancing resources or increasing the quantity of state or local services—or include the production of end results (such as client outcomes). The funding agency’s ability to influence or control state or local activities and their outcomes, given the program design, is also an important factor to consider in deciding whether the program can reasonably be linked to the achievement of end results in an agency’s performance plan. With respect to aggregation, the primary question is whether a given flexible grant program can reasonably be treated as a free-standing activity that contributes to a particular agency performance goal. 
A few of the programs we studied had performance goals unlike those of other agency-funded activities and could appropriately be treated in this manner. However, a number of others contributed toward client outcome goals or indicators that receive support through other agency-funded activities as well. In shared-goal situations, aggregation or consolidation seems preferable to treating the individual grant program as the unit of analysis. Aggregation and disaggregation decisions are likely to be particularly complicated for grants that contribute toward a wide variety of end-outcome goals. As we have seen, some flexible programs’ designs inherently limit the prospect of collecting programwide performance data through program operations. In applying the Results Act, it is important to recognize these limitations and to provide for information to be gathered through program evaluations and other sources, such as those we have illustrated. Our findings suggest that the design of a flexible grant program involves choosing among policy options that, in combination, establish the degree of flexibility afforded to states or localities; the relevance of performance objectives for grantee accountability; whether accountability for performance rests at the federal, state, or local level; and prospects for measuring performance through grantee reporting. Considering design features and their implications can help policymakers ensure that accountability and information are adequately provided for, whatever type of design is selected. To assist in this process, we have developed a framework that depicts the grant design policy choices discussed in this report and factors that might be considered at each point in the form of a decision tree (see fig. 2). 
Each choice has implications regarding the degree of flexibility provided to states or local entities, the type of performance information that can be collected through program operations, and the level at which this information is used for accountability purposes. The critical choice points in each decision path can be framed as questions, such as:

Are national objectives primarily fiscal or performance-oriented? If objectives are of both types, both decision paths should be followed. What are these objectives?

If there are national performance objectives, is a national program needed to achieve them, or could they feasibly be attained through state or local programs? This question is particularly relevant to new service outcome objectives, such as decreasing drug use among students. State and local programs designed with different objectives in mind may have difficulty incorporating this new objective. Or conditions that enable achievement of that outcome (such as solid knowledge of how to produce it) may not be met.

What implementing provisions are needed to support attainment of these objectives? Implementing provisions might include constraints on activities and funds distribution or operational objectives, standards, and criteria for performance. These can be set for the program as a whole or delegated to the level of government responsible for program management.

For state or local programs, the next question would be whether the program would operate as a funding stream or support distinct projects. This having been decided, the next general questions are: What data are needed for grantee accountability, and is it feasible to collect these data from providers? As we have seen, diverse activities and funding stream operation may make the collection of uniform data difficult. The answers to these questions provide the basis for setting grantee reporting requirements.

Is additional information needed for program oversight?
If so, the logical next step is to provide for such information to be gathered and reported through program evaluation studies or other relevant, cost-effective means. We use the title VI Innovative Education Program Strategies grant program to illustrate the decision flow depicted in figure 2. The objectives of the grant, to support local education reform and innovation, are primarily fiscal, putting us on the upper decision path on our diagram. Funds may be used to support local projects (such as magnet schools), but the title VI program’s purpose does not require that project-level performance objectives be set, so we continue to the step of designing provisions to match fiscal objectives. Title VI has such provisions, stating that grant funds may not be used to supplant funds from nonfederal sources and that the state must maintain prior levels of fiscal effort. To obtain information required for accountability, the program requires local districts to describe their intended use of the funds and how this will contribute to the grant’s objectives of supporting education reform. States, drawing on district records, must report biennially on general uses of funds, types of services furnished, and students served. As these data are of limited utility for program oversight, Congress mandated national evaluation reports on this program in 1986 and 1994. The 1994 report provided information about the federal share, the size of state and local grants, how funds were used, the minimal performance accountability requirements imposed by states, and the difficulty of evaluating a program that provides supplemental resources for other activities. The Safe and Drug-Free Schools and Communities grant provides a further illustration. Funds support local activities that serve national performance objectives to prevent violence in and around schools and the illegal use of alcohol, tobacco, and drugs. The presence of these objectives puts us on the lower, performance-oriented path of the flow chart.
Funded activities are not implemented through a national operating program but, rather, through state and local programs, reflecting at least the hope that national objectives could be achieved through these programs. However, some national program provisions do apply. Local programs must be comprehensive and convey the message that the illegal use of alcohol and other drugs is wrong and harmful. These national requirements notwithstanding, the local education agencies are responsible for setting performance goals, deciding how to pursue them, and reporting to the state in terms of those goals. Moving along the state and local path on our diagram, we come to the question of whether drug and violence prevention programs function as distinct projects or as funding streams. The recent evaluation study suggests the latter. Examining what appeared to be comprehensive school-based drug prevention programs, this study found so much variation within districts in what was being done that local activities hardly met our definition of a “program.” As to the feasibility question on the diagram, collecting performance data—beyond student counts—for drug prevention programs has proven difficult. Reporting requirements make reference to local program outcomes, but states are simply asked to provide whatever relevant data they can. Reflecting these limitations, provision has been made to gather data from other sources, including state-level data from national surveys of youth drug use, for program oversight. Although the Department of Education is required to report on the national program every 3 years, the lack of uniform information on program activities and effectiveness may limit the report’s usefulness. The evaluation study, which covered the period 1990-1995, provided insight into the adequacy of resources, the extent to which activities reflect research findings, implementation issues, student outcomes, and state and local evaluations. 
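The branching questions walked through in the two illustrations above can be sketched as a small decision function. This is our own paraphrase of the decision paths (figure 2 itself is not reproduced here); the parameter names and returned step labels are illustrative shorthand, not GAO's or the Results Act's terminology.

```python
# A sketch of the grant-design decision paths described above; the
# question order and step labels are our own paraphrase of figure 2,
# which is not reproduced here.

def design_path(fiscal_objectives, performance_objectives,
                national_program=False, distinct_projects=False,
                uniform_data_feasible=False):
    steps = []
    if fiscal_objectives:
        # Upper path: match provisions to fiscal objectives and
        # require reporting on uses of funds.
        steps += ["set provisions matching fiscal objectives",
                  "require reporting on uses of funds"]
    if performance_objectives:
        # Lower path: decide where objectives are set and pursued.
        if national_program:
            steps.append("set programwide objectives, standards, and criteria")
        else:
            steps.append("delegate objective-setting to states or localities")
            steps.append("operate as distinct projects" if distinct_projects
                         else "operate as a funding stream")
        if uniform_data_feasible:
            steps.append("set grantee performance reporting requirements")
        else:
            steps.append("gather oversight data through evaluations "
                         "or other sources")
    return steps

# Title VI (fiscal objectives only) versus Safe and Drug-Free Schools
# (performance objectives pursued through state and local funding streams).
title_vi = design_path(fiscal_objectives=True, performance_objectives=False)
sdfs = design_path(fiscal_objectives=False, performance_objectives=True)
```

Applied to the two examples, the sketch reproduces the contrast drawn in the text: title VI terminates in fiscal provisions and use-of-funds reporting, while Safe and Drug-Free Schools ends at evaluation studies and other data sources because uniform performance data have proven infeasible to collect.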
Further evaluation studies are planned. We are sending copies of this report to the Ranking Minority Member of each of your Committees and the Chairman and Ranking Minority Member of the House Committee on the Budget and Committee on Government Reform and Oversight. We will also make copies available to others on request. Please contact me or Gail MacColl, Assistant Director, at (202) 512-7997 if you or your staff have any questions. To reduce and prevent illegal drug activity, crime, and violence and to improve the functioning of the criminal justice system. Provides funds to state and local governments to carry out specific programs designed to improve the functioning of the criminal justice system, with an emphasis on violent crime and serious offenders. Funds are to be used to support activities in 26 areas that address the objectives cited above. These include education activities for law enforcement officials that are designed to reduce the demand for illegal drugs, multijurisdictional task force activities, improving correctional institutions, and prevention and enforcement programs related to gangs. States are required to allocate at least 5 percent of funds to improve criminal justice records. Beginning in 1994, states that do not have a law requiring sex offenders to be tested for HIV if the victim requests such testing will lose 10 percent of their formula allotment. States are required to establish measurable objectives and evaluate projects in terms of achieving these objectives. Federal spending for 1997 was about $497 million, of which $25 million was made available for a drug-testing initiative. Each state receives the greater of either $500,000 or 0.25 percent of the amount available for the program. Remaining funds are distributed according to state population. In 1996, state awards ranged from $500,000 to $52 million. A 25-percent match on a project or on a governmental unit basis is required from state or local funds.
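The two-stage formula just described lends itself to a short arithmetic sketch: each state first receives the greater of $500,000 or 0.25 percent of available funds, and the remainder is distributed in proportion to population. The state labels and population figures below are hypothetical, not actual awards.

```python
# Sketch of the two-stage Byrne Formula allocation described above.
# State labels and population figures are hypothetical.

def allocate_byrne(total_funds, populations):
    """populations maps each state to its population."""
    # Stage 1: base amount, the greater of $500,000 or 0.25 percent
    # of the amount available for the program.
    base = max(500_000, 0.0025 * total_funds)
    remainder = total_funds - base * len(populations)
    total_pop = sum(populations.values())
    # Stage 2: remaining funds distributed by population share.
    return {state: base + remainder * pop / total_pop
            for state, pop in populations.items()}

awards = allocate_byrne(497_000_000,
                        {"State A": 30_000_000,
                         "State B": 5_000_000,
                         "State C": 600_000})
```

At the fiscal year 1997 funding level, 0.25 percent exceeds $500,000, so the percentage floor rather than the dollar floor binds in this sketch.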
Generally, local governments are guaranteed a specified percentage of funds, based on the total share they contribute to state and local criminal justice expenditures. Regarding the remaining funds, states must give priority to localities with the greatest needs. The Byrne Program has contributed less than 1 percent of state and local criminal justice expenditures. No uniform provider data are required except for descriptions of funded activities, funding levels, and names of subgrantees. BJA generates national program information from on-site monitoring. A 1996 BJA programwide study analyzed the extent to which projects supported by Byrne Formula funds in fiscal year 1991 continued after Byrne funding ceased and identified factors associated with institutionalization. Project institutionalization rates were used to indicate how well the program was meeting its primary goal of supporting state and local law enforcement agencies. Other studies have included a BJA and National Institute of Justice analysis of state strategic planning efforts and evaluations of 56 projects. Many state and local evaluations have been conducted, but their results are difficult to aggregate owing to differences in methodologies and outcome measures. To make grants available to states, territories, and tribes to increase the overall quality, affordability, and supply of child care. Direct services are targeted to children in low-income families with parents who work or attend job training. Provides funds to states, territories, and tribes for child care services and quality improvement and to increase the supply of child care. States must allow the full range of parental choice of child care providers, including center-based, group home, family, and in-home care, by offering certificates that parents can give to the provider of their choice. States are required to set health and safety standards and monitor providers.
States must ensure that parents have unlimited access to their child and child care providers, provide consumer education services, and maintain public records of complaints made against child care providers. Not less than 4 percent of funds must be used for quality improvement activities and to increase the supply of child care. Federal fiscal 1997 funding for the block grant was about $956 million. Historically, CCDBG did not require state matching funds or maintenance of fiscal effort. In fiscal year 1997, three other child care programs were repealed and their funding was consolidated under the provisions of CCDBG. Three separate funding sources for CCDBG were initiated. The new Mandatory Fund and the Discretionary Fund (formerly CCDBG) require no state match. The new Matching Fund provides federal dollars to match state spending according to a formula reflecting the proportion of children in the state under age 13, if the state complies with various fiscal requirements. The fiscal year 1997 funding from the three sources, collectively known as the Child Care and Development Fund, was about $3 billion. State reports provide state-level data specific to CCDBG as well as data on other federal child care and preschool programs. Information is reported on the number of children assisted according to the category of provider, how assistance is made available to families (i.e., through grants, contracts, or certificates), and estimates of the number of families receiving various forms of consumer education. Information on the income, size, and structure of families receiving services, and on the reasons they receive services, is also collected and reported. Formerly, counts of child care programs, caregivers, salary data, partnership activities to promote business involvement, results of state monitoring, and reductions in child care standards were collected.
Information on national child care needs, costs, availability, and quality is available from the Bureau of the Census, the Department of Education, and many private research and advocacy organizations. No programwide evaluation has been conducted. To establish, extend, and strengthen child welfare services provided by state and local agencies to enable children to remain in their own homes or, when this is not possible, to provide alternative placements. Supports state child welfare programs. States may provide services directly or through subgrantees. Funds may be used for a broad array of child protective services, including costs of personnel to provide services, licensing, and standard-setting for child care agencies and institutions, homemaker services, return of runaway children, child abuse prevention, and reunification services. Funds for foster care, day care, and adoption services are capped at the amounts received by states in fiscal year 1979 for child welfare programs. States must provide assurances that all children in foster homes receive certain specific protections, including maintaining a statewide information system for children in foster care, establishing due process protections for families, and conducting periodic case reviews. States are required to submit a description of the quality assurance system they will use, but not the data produced by the system. ACF reviews of the foster care systems in each state are no longer required to verify the implementation of foster care protections. Federal spending in fiscal year 1997 was about $292 million. Each eligible jurisdiction receives a base amount of $70,000. Additional funds are allocated by formula. States receive federal matching at a rate of 75 percent of their expenditures up to the limit of the state’s allocation. In fiscal year 1996, state grants ranged from about $118,000 to $21.4 million. The average amount was $4.4 million.
Amounts from this grant program are small in comparison with child welfare funding from other federal and nonfederal sources. Performance reports are not required, and program-specific performance data are not available. State consolidated plans include descriptions of the services to be provided and of the geographic areas where these services will be available. All states administering related programs under title IV-B, Subpart 1: Family Preservation, and Subpart 2: Support Services, or title IV-E: Foster Care and Adoption Assistance are required to maintain data systems to track cost, type, and level of care; staff management and training; entry and exit rates of children in substitute care; and intake information. These data cover the state program, not just services funded by this grant. Before 1995, states were not required to submit data to ACF. To develop viable urban communities by providing decent housing, a suitable living environment, and expanding economic opportunities, principally for persons of low and moderate income. To foster well-planned, coordinated housing and community development activities by providing a consistent source of federal assistance to cities and urban counties. Provides funds to central cities and urban counties. Entitlement communities develop their own programs and funding priorities. Communities may undertake a wide range of activities directed toward neighborhood revitalization, economic development, and provision of improved community facilities and services. Activities must either benefit low- and moderate-income persons, help eliminate slums or blight, or meet other community development needs having a particular urgency. Funds can be used as the nonfederal share of other federal program grants. Restrictions on the percentage of funds used to establish or expand public services apply. Federal funds allocated in fiscal year 1997 were about $3.06 billion. 
In that year, 975 entities were eligible to receive funds according to a statutory formula. No matching funds are required. Targeting requirements ensure that communities use program funding to benefit low- and moderate-income persons. Grantees must certify that at least 70 percent of program funds, over a period of 1, 2, or 3 years, will benefit low- and moderate-income persons. Aggregate and individual public benefit tests are applied to economic development activities. Generally, for each activity, at least one job must be created per $50,000 of CDBG aid or one low- or moderate-income person must be served for each $1,000 of aid. Additionally, on an annual basis, the aggregate of a grantee’s economic development activities must create one job per $35,000 of CDBG aid or serve one low- or moderate-income person per $350 of CDBG funds used. Grantees complete an annual performance and evaluation report that includes project-level information on accomplishments, costs incurred by participating entities, indications of how a grantee would change projects as a result of its experience, and an evaluation of how funds were used to benefit low- and moderate-income persons. External data (e.g., the Bureau of the Census’ Population and Housing Survey) are used for formula allocations and benchmarking purposes. Several evaluation studies have been conducted. A 1994 national evaluation by the Urban Institute addressed the capacity, flexibility, and political effects of the program. Other evaluations have focused on specific activities, such as revolving loan funds. To develop viable urban communities by providing decent housing, a suitable living environment, and expanding economic opportunities, principally for persons of low and moderate income. To foster well-planned, coordinated housing and community development activities by providing a consistent source of federal assistance to units of general local government.
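The individual and aggregate public benefit thresholds just described reduce to simple ratio checks, sketched below; the function names and sample figures are ours for illustration.

```python
# Sketch of the CDBG public benefit tests described above. An individual
# economic development activity needs one job per $50,000 of aid or one
# low- or moderate-income person served per $1,000; a grantee's annual
# aggregate must meet one job per $35,000 or one person served per $350.

def passes_individual_test(aid, jobs_created, persons_served):
    return jobs_created >= aid / 50_000 or persons_served >= aid / 1_000

def passes_aggregate_test(total_aid, total_jobs, total_served):
    return total_jobs >= total_aid / 35_000 or total_served >= total_aid / 350

# A hypothetical $100,000 activity creating two jobs meets the individual
# test but would not, by itself, satisfy the stricter aggregate ratio.
individual_ok = passes_individual_test(100_000, jobs_created=2,
                                       persons_served=0)
aggregate_ok = passes_aggregate_test(100_000, total_jobs=2, total_served=0)
```

The contrast in the example reflects the design of the requirement: individual activities get some leeway, but a grantee's portfolio as a whole must meet the tighter annual standard.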
Provides funds to states, which develop their own programs and funding priorities. These priorities guide states’ redistribution of funds to units of general local government that are not populous enough to receive entitlement funds. Forty-eight states and Puerto Rico participate in this program. Most funds are distributed competitively, though four states distribute a portion of their funds according to a state-developed formula. (Two states have chosen not to participate in this program. In New York and Hawaii, HUD continues to distribute funds directly to units of general local government through the HUD-administered Small Cities CDBG Program.) CDBG encompasses a wide range of activities directed toward neighborhood revitalization, economic development, and provision of improved community facilities and services. Activities must either benefit low- and moderate-income persons, help eliminate slums or blight, or meet other community development needs having a particular urgency. Arizona, California, New Mexico, and Texas are statutorily required to set aside 10 percent of their funds for projects in colonias—communities in the U.S.-Mexico border region that lack sanitary water, sewage facilities, and housing, and that existed as colonias before this program was created. Fiscal year 1997 spending was about $1.2 billion. Seventy percent of the aggregate use of funds, over a period specified by the state of 3 years or less, must benefit low- and moderate-income individuals. States are required to establish objectives consistent with the national objectives and to report progress in meeting these goals. State annual performance reports include a description of the use of funds during the program year, an assessment of the relationship of that use to the states’ objectives, the reason for any changes in the plan, and indications as to how the program would change as a result of this experience. 
States determine how they collect information from units of general local government receiving grant funds. In the early days of the state CDBG program, HUD conducted several studies on state takeover of small-city CDBG funding. These studies examined changes states made in program priorities and processes, analyzed the effects of these changes on funding patterns, and compared states’ initial experiences and performances. A later study evaluated the success rates of economic development loans made under this program to businesses to guide future investment strategies. To enable states and territories to plan, carry out, and evaluate state plans for providing comprehensive community mental health services to adults with a serious mental illness and to children with a serious emotional disturbance. Provides financial assistance to states to be used at their discretion consistent with program objectives and requirements. Services are to be provided only through community mental health centers that meet certain criteria and through other appropriate, qualified community programs. State plans must provide for an organized community-based system of care that considers all available resources and services (however funded), including rehabilitation, employment, housing, educational, medical and dental, and other support services needed to enable clients to function in the community. Plans must also provide for case management services for clients that receive substantial amounts of public funds or services, integrated services for children, and outreach to and services for the homeless. Inpatient services are not eligible for support. States must review 5 percent of service providers each year and establish a Mental Health Planning Council to review the state plan and monitor and evaluate the allocation and adequacy of mental health services annually. Federal spending for fiscal year 1997 was $275 million. 
Awards range from about $50,000 to $33 million, with an average of $4.4 million. As of 1993, block grant funds were around 5.6 percent of state mental health agency revenues for community programs. Maintenance of fiscal effort provisions apply to expenditures for children as well as to overall expenditures for community mental health services. State reports (included in the application) describe achievements in relation to state objectives (including quantitative targets) for the year just completed, which are to cover each of the program criteria summarized above. Applications include incidence and prevalence data on mental illness among the target populations using standard definitions; standard measures are not yet available but are under development. Data on community mental health services, treatment options, and resources are also included. There has been no national evaluation of this program. Annual program reviews are conducted by State Mental Health Planning Councils, but Council members are generally not experts in evaluation, and their reviews may or may not be accompanied by backup information. The funding agency sponsors research on prevention and service delivery models in mental health and conveys findings to grantees as part of its technical assistance activities. To provide services and activities that have a measurable and major impact on the causes of poverty. Objectives include assisting low-income individuals to obtain adequate jobs, education, and housing; make better use of available income; obtain emergency assistance when needed; remove obstacles to self-sufficiency; and achieve greater participation in community affairs. Other objectives include establishing coordination between social service programs and encouraging private sector entities to ameliorate poverty. Provides funds to support local activities and projects. 
Goals and objectives are set by states, but states are required to subgrant at least 90 percent of their allotment to locally based community action agencies or organizations that serve migrant or seasonal farmworkers. Activities that fall within seven broad service categories, reflective of the program’s objectives, are eligible for funding, provided that the principal beneficiaries are persons of low and modest income levels. At least one activity of each type must be provided within a state. Federal spending in fiscal 1997 was $490 million. States are required to ensure that any agency or organization that received funds previously under this program will not have future funding terminated or proportionally reduced unless the state can determine cause under conditions and procedures set by federal mandate. Five percent of funds can be transferred to certain other federal block grants. No maintenance of effort or state matching funds are required. In fiscal year 1996, CSBG financial assistance to states ranged from $2.2 million to $346 million. Overall, CSBG has contributed less than 10 percent of the resources managed, leveraged, and coordinated by the community action agencies. States and local entities are not required to provide uniform performance data. ACF has relied on contracted private entities to survey states on a voluntary basis to obtain information describing state allocations, local activities, operations of state CSBG administering agencies, state managerial and programmatic accomplishments, and counts of dollars spent and individuals served. At present, a contract is in place to establish a new data collection system. No national program evaluations have been conducted. We found no instances in which external aggregate income data or research findings were tied to program operations and assessments.
To establish programs to prepare disadvantaged adults (title II-A) and youths (title II-C) for participation in the labor force by providing job training and other services designed to increase employment and earnings, develop educational and occupational skills, and decrease welfare dependency. States receive formula grants and, in turn, subgrant funds to Service Delivery Areas (SDAs)—geographical areas that include one or more local governments or a state that has been designated to provide job training—according to a federal formula that reflects unemployment and poverty rates. Within each SDA, a private industry council works with local governments to develop job training plans that meet local needs, select groups that will receive grants, and act as the administrative agency for the SDA. States have responsibility for the approval of the plans and monitoring for compliance. Minimum performance standards and measures for SDAs are set at the federal level. Funds support direct and on-the-job training, education, job counseling, and supportive services. States are required to set aside 5 percent of funds to provide incentive payments to SDAs that exceed performance standards and 8 percent to support state education coordination and grants. At least 50 percent of each state’s allotment must fund direct training services. Additionally, 5 percent of title II-A funds are set aside to support activities for older individuals. Services are targeted to economically disadvantaged individuals who face serious barriers to employment. Federal spending in program year 1997 was about $895 million for title II-A and $127 million for title II-C. Matching is required for 100 percent of the 8-percent state education grants. 
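The set-aside percentages above imply a simple split of a state's title II-A allotment, sketched below with a hypothetical allotment; the bucket labels are our own, and the rule that at least 50 percent of the allotment fund direct training applies to the state's total rather than to any single bucket.

```python
# Illustrative split of a hypothetical JTPA title II-A state allotment
# under the set-asides described above; bucket labels are our own.

def split_allotment(allotment):
    return {
        "incentive payments (5%)": 0.05 * allotment,
        "state education coordination and grants (8%)": 0.08 * allotment,
        "older individuals (5%)": 0.05 * allotment,
        "passed through to SDAs (82%)": 0.82 * allotment,
    }

parts = split_allotment(10_000_000)
# The separate requirement that at least 50 percent of the allotment fund
# direct training services constrains the state's total spending mix,
# not any one of the buckets above.
```

A usage check: the four shares exhaust the allotment, and each of the two 5-percent set-asides on a $10 million allotment comes to $500,000.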
States, administrative entities, and recipients are required to report information, including descriptions of activities provided and the length of time participants were engaged in them; characteristics of participants; and outcome measures, such as the occupations in which participants were placed. These programs have been the subject of several nationwide and state-level evaluations. For example, a 1994 national evaluation examined program impacts on the earnings and employment of adult men and women and out-of-school male and female youth. This study found that effects for adults were positive but that the program did not increase the earnings of male and female youths, which suggested that new ways were needed to serve some groups. Other evaluations have studied differences in cost-effectiveness between programs in urban and rural areas and the effectiveness of adult workplace literacy techniques.

To provide funds to states so that they can subsidize the home energy costs of low-income persons, including the elderly and disabled. Awards funds to states, which, in turn, either distribute them to eligible households or to an energy supplier on behalf of such households. In addition to providing direct and indirect subsidies, up to 15 percent of funds may be used for low-cost residential weatherization. Another 10 percent may be allocated for weatherization if states can demonstrate that they meet three statutory requirements, including that the proposed weatherization services will produce savings in energy costs. States may use up to 5 percent of their total allotment to encourage and enable households to reduce their heating and cooling needs. Leveraging Incentive Funds may be awarded to states that supply additional benefits to eligible households beyond those provided through federal funds.
Up to 25 percent of the incentive funds may be set aside for grantees that provide LIHEAP services through community-based nonprofit organizations to help eligible households reduce their energy vulnerability under a program known as the Residential Energy Assistance Challenge. States are required to provide the highest level of assistance to households with the lowest incomes and the highest energy costs, taking family size into account. Federal spending for fiscal year 1997 was about $1.2 billion. No matching funds are required. An annual report is required on the number and income level of households served; the number of participating households with individuals who are elderly, disabled, or with young children; and the number and income level of families who applied for assistance. An additional report identifying services that were provided, number of households served, level of benefits provided, and number of unserved households is required from grantees that expend up to 5 percent of funds for services designed to reduce home energy needs. To supplement program information, HHS has used voluntary state surveys to gather estimates of households to be served, funds available, funds to be obligated, and income eligibility cutoffs. To qualify for leveraging incentive funds, grantees must report on the leveraged resources provided to low-income households during the previous base period. No programwide evaluations have been conducted. Specific energy assistance questions have been included in two national surveys, the Bureau of the Census’ Current Population Survey and the Department of Energy’s Residential Energy Consumption Survey. These data are used to determine the socioeconomic characteristics of LIHEAP participants and energy consumption and expenditure patterns of all non-low-income, low-income, and LIHEAP recipient households. (Project-level evaluations are required for activities funded by the Residential Energy Assistance Challenge option.) 
To enable states to maintain and strengthen their role in planning, promoting, coordinating, and evaluating health care for pregnant women, mothers, infants, and children (particularly children with special health care needs) and in providing health services for mothers and children who do not have access to adequate health care, particularly those from low-income families. Primarily, the grant assists states in building a maternal and child health service infrastructure that ensures needed services are in place for and readily accessible to vulnerable populations. States have the flexibility to allocate resources. Fifteen percent of the block grant is set aside for special projects of regional and national significance and for integrated community service system programs. States may use block grant funds to develop systems of health care and related services, such as health education, case management, training, and the evaluation of maternal and child care services, and to deliver clinical care to the target population. At least 30 percent of funds must support preventive and primary care services for children, and an additional 30 percent must be used for services to children with special health care needs. Federal spending in fiscal year 1997 was about $681 million. Any amount appropriated over $600 million is retained by the Secretary of Health and Human Services to fund specialized projects and activities in areas with high infant mortality rates. In fiscal year 1996, assistance to states ranged from around $155,000 to $41.9 million. The average state grant was $9.7 million. States must ensure that $3 of state and local funds or resources will be expended for maternal and child health for each $4 of federal program funds. State and local contributions are generally twice that of the federal contribution, but large variations among states exist. In general, state contributions have to equal at least the amount paid in 1989.
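The matching requirement above ($3 of state and local funds for each $4 of federal funds) and the two 30-percent service floors can be illustrated with a short arithmetic sketch. The $8 million federal allotment and the function name are hypothetical, used only to show how the percentages compose.

```python
# Hedged sketch of the block grant match and service floors described above.
# The $8 million federal allotment is hypothetical; integer math keeps the
# dollar figures exact.
def mch_requirements(federal_allotment):
    return {
        # $3 of state and local funds for each $4 of federal funds
        "required_state_local_match": federal_allotment * 3 // 4,
        # at least 30% for preventive and primary care services for children
        "min_child_preventive_primary": federal_allotment * 30 // 100,
        # at least 30% for services to children with special health care needs
        "min_special_health_needs": federal_allotment * 30 // 100,
    }

reqs = mch_requirements(8_000_000)
```

For the hypothetical $8 million allotment, the required state and local match is $6 million, and each service floor is $2.4 million.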
States are required to report annually. Program-specific state-level data reported include number served by population category, proportion of each category with health insurance, type of services provided, and expenditures by service and population type. Beginning in fiscal year 1999, all states must report on 18 national performance measures. State annual reports also include statewide data on the number of medical service providers by category, number of births, infant mortality by race and ethnicity, percent of low-weight births by race and ethnicity, perinatal death rates, rates of fetal alcohol syndrome, rates of infant drug dependency, percentages of women without prenatal care by trimester, and immunization rates for 2-year-old children. No programwide evaluation has been completed, although various components of the program, such as injury prevention, have been evaluated. A large research base supports grant activities.

To provide for the construction and improvement of interconnected principal arterial routes that serve major population centers, international border crossings, ports, airports, public transportation facilities, and other major transportation facilities and destinations; meet national defense requirements; and serve interstate and interregional travel. Assists state transportation agencies in developing an integrated, interconnected transportation system. States can transfer up to 50 percent of NHS funds to the Surface Transportation Program (STP), and if the Secretary of Transportation approves, up to 100 percent. Funds may support 14 categories of transportation and transportation-related activities on roads designated as part of the Interstate System or other principal arterial highways. Activities include highway construction, safety and operational improvements, reconstruction, resurfacing, highway research and development, fringe and corridor parking, and wetland mitigation projects related to highway projects.
States are required to perform a life-cycle cost analysis and a value engineering analysis for each NHS project segment that costs over $25 million and to meet design standards approved by FHWA. A state may request exemption from FHWA’s detailed oversight of design and construction activities, including approval of preliminary plans, specifications, and estimates; concurrence in the award process; construction reviews; and final inspection. Projects have to comply with the Clean Air Act and meet DOT and Environmental Protection Agency targets. Ten percent of funds must be expended through contracts with small businesses owned by disadvantaged persons. Federal spending in fiscal year 1997 was about $3.3 billion. Generally, federal funds can be used to cover up to 80 percent of project costs, but certain projects can be funded at higher federal shares. State apportionments of federal funds are affected by a variety of incentives and sanctions. Financial information is compiled on individual projects as well as the overall program. Performance information is compiled as part of the biennial assessment of the nation’s highway conditions. DOT compiles aggregate information from other agencies on transportation facilities, services, flow, context, and the unintended consequences of transportation (safety, energy use, and environmental impacts). No national programwide evaluations of this program have been conducted. Projects and particular program components have been evaluated.

To provide states, territories, and certain tribal governments with the resources to improve the health status of their populations. Contributes funds toward the support of state-directed preventive health services. Funds go to state health departments, which have discretion to make awards to local health agencies and community-based organizations.
Supports activities to improve the health status of the population so as to meet Healthy People 2000 national health promotion and disease prevention objectives; rodent control and community fluoridation programs; planning, establishing, and improving (but not simply operating) emergency medical services; services to the victims of sex offenses and prevention of sex offenses; and related administrative and evaluation activities. Each state selects the Healthy People 2000 objectives to be addressed with grant funds. Most states use funds for cardiovascular prevention, community-based health promotion activities, and rape prevention. Beyond that, each state does things differently. Federal funding for state grants for fiscal year 1997 was about $148 million. State awards ranged from about $31,500 to $10 million, with an average of $1.4 million. Maintenance of fiscal effort is required. Although PHHS grant funds constitute about 1 percent of federal and state public health expenditures overall, they are a major source of funding for preventive health activities. For each activity funded, the state reports activity or output data, such as number of community programs supported or number of clients served by the state program. For each Healthy People 2000 objective selected, the state also reports statewide data as measured by Healthy People 2000 indicators drawn from such uniform data sets as vital statistics, the Behavioral Risk Factor Surveillance System, or the state cancer registry. A federal contractor collects the data from federal sources and sends them to the states, which then fill in state-generated information. No national program evaluation has been conducted. However, CDC has assessed the effectiveness of and published guidelines for numerous preventive health services. Such guidelines on effective preventive health practice are incorporated into this program through professional, rather than administrative, channels.
Standards of practice are very concrete for some areas, such as immunization, and less fixed in others, such as health promotion.

To support programs aimed at meeting the national education goal of preventing illegal drug use among students and violence in and around schools. Funds are awarded to state education agencies (SEAs), but not less than 91 percent of the SEA money is then distributed by formula to local education agencies (LEAs) to support drug and violence prevention programs under their direction. For some LEAs, the grant is one of several sources of funds for drug prevention activities; for others, it is the sole source of funds. Both SEAs and LEAs must identify goals and objectives for drug and violence prevention. LEA funds can be used for comprehensive drug and alcohol prevention programs (including instruction, family counseling, early intervention, referral to rehabilitation, staff development); for educational, cultural, and recreational activities before and after school; and for evaluation. Not more than 20 percent of the funds can be spent on safety-related activities, such as “safe zones of passage,” school metal detectors, and security personnel. SEA funds may be used for administration, technical assistance, demonstration projects, evaluation and other supporting activities, or to meet special needs. All programs supported under the grant must convey the message that illegal use of alcohol and other drugs is wrong and harmful. Federal spending in fiscal year 1997 was about $531 million, of which $415 million went directly to SEA and LEA activities. Awards ranged from $2 million to $46 million, with an average of $8 million. In the districts included in the national evaluation study, LEA drug prevention program funding averaged $6-$8 per pupil from grant funds and $10 per pupil from all sources. Maintenance of fiscal effort provisions apply.
States are required to report triennially on activities funded and number of LEAs, schools, and students participating. State reports also cover program effectiveness and progress toward achieving SEA measurable goals and objectives, using whatever outcome information the state can provide. LEAs provide information the SEA needs to complete its report. SEA reports also include data on violent incidents in all schools and state-level survey data on the incidence and prevalence of drug use among students. A national evaluation study conducted during 1990-95 examined drug prevention program activities (comparing them with research-based evidence of effective practice) and local program evaluations and collected data on student outcomes in 19 school districts. The program statute requires an independent biennial evaluation of the national impact of the program.

To assist states to provide social services that are directed toward one of the following broad goals: (1) achieving or maintaining self-support to prevent, reduce, or eliminate dependency; (2) achieving or maintaining self-sufficiency to reduce or prevent dependency; (3) preventing or remedying neglect, abuse, or exploitation of children and adults; (4) preventing or reducing inappropriate institutional care; and (5) securing admission or referral for institutional care when other forms of care are not available and providing services to individuals in institutions. Assists each state to furnish social services according to state-determined priorities. States can use funds to support any of a broad range of social services. For example, funds may be used to provide activities needed to operate or improve other social service programs; pay for administrative, staff, and training costs; or support agency operations. Some restrictions, including prohibitions regarding the use of funds to provide cash payments as a service, apply. There are no set-asides or caps.
Federal funding for fiscal year 1997 was about $2.5 billion. Ten percent of funds may be transferred to support activities funded by related federal block grants (Preventive Health and Health Services, Substance Abuse Prevention and Treatment, Community Mental Health, Maternal and Child Health, and Low-Income Home Energy Assistance Program). Additionally, the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 allows states to transfer up to 10 percent of the new Temporary Assistance for Needy Families Block Grant to this grant. At the federal level, there are no performance criteria or standards. In fiscal year 1996, allocations to states ranged from $97,000 to $333 million. The average allotment was $50 million. States are asked to provide counts of services provided, number of adults and children served, expenditures by service type, and type of organizations that provide the services. These data may be actual, sampled, or estimated. No outcome or impact data are available. A major programwide qualitative evaluation, published in 1992, addressed the first 10 years of the grant. This evaluation examined, within the context of flexibility, the perceptions of state and county officials regarding the effectiveness of the program and identified innovative and successful state practices. Other evaluations examined specific activities funded by the grant, such as one that compared services provided to senior citizens under this grant with services furnished under the Administration on Aging’s Supportive Services and Senior Centers Program.

To support nutrition services to older Americans, including meals, nutrition education, facilitating access to meals, and providing nutrition-related supportive services to maintain health, independence, and quality of life. Provides grants to State Agencies on Aging. States, using their own formulas (which AOA must approve), distribute funds to state-designated Area Agencies on Aging (AAAs).
States are responsible for ensuring that AAAs and service providers meet program requirements and assurances as set out in their area plans and comply with state and local laws regarding food handling and the like. Local projects funded by this title III-C program must provide, at least once a day 5 days a week (except in rural areas, where a lesser frequency is determined feasible), at least one hot or other appropriate meal that provides one-third of the recommended dietary allowance (RDA) to eligible people over age 60 and their spouses. AAAs must reasonably accommodate participants’ special dietary needs. Meals may be provided in a congregate setting or delivered to the home. For home-delivered meals, priority is given to serving frail elderly, the homebound, or the isolated. Projects must conduct outreach and nutrition education activities. They may solicit voluntary contributions. States must utilize the advice of dietitians in program planning and provide technical assistance and training for program staff. Federal spending for fiscal year 1997 included $364.4 million for congregate meals and $105.3 million for home-delivered meals. For the two meal types combined, awards ranged from around $126,000 to $43.7 million, averaging about $8 million. This program has maintenance of fiscal effort requirements. States may transfer up to 30 percent of their funds between their congregate and home meal programs. They may also transfer 20 percent of their funds to the title III-B Supportive Services and Senior Centers Program. Under a system developed in response to 1992 legislation, states report uniform data, including number of congregate and home-delivered meals served, program income and expenditures, number of persons served, and number who were at high risk for poor nutrition. Client characteristics such as age; poverty; and, for home-delivered meals, client’s extent of impairment in performing activities of daily living are also reported.
Client and service counts are totals for the service as a whole, not just the portion funded from this grant. A national evaluation of this program for 1993-95 described the participants, assessed how well the program reached the disabled and poor elderly, and estimated the impact on nutritional intake and social contacts of participants as compared with nonparticipants with similar characteristics. The RDA requirements are based on research, which also supports the premise that good health requires adequate nutrition.

To encourage states and area agencies on aging to develop and implement comprehensive and coordinated community-based services for older individuals through the planning and provision of supportive services, including multipurpose senior centers. Provides grants to State Agencies on Aging. States, using their own formulas (which AOA must approve), distribute funds to state-designated Area Agencies on Aging. States are responsible for ensuring that AAAs and service providers meet program requirements and assurances as set out in their area plans. This title III-B program covers a wide range of supportive services from homemaker and chore services to recreation and crime prevention. Special priority is given to providing services that provide access to other services (such as transportation, outreach, information and assistance, language, and case management); in-home services; and legal services, such as legal representation for wards in guardianship proceedings. Funds can also be used for renovation, acquisition, and construction of multipurpose senior centers. AAAs must set specific objectives for providing services to individuals having the greatest economic or social need, particularly low-income minority individuals. States must support an effective ombudsman program. Federal spending for fiscal year 1997 was $291 million. Awards ranged from around $70,000 to $27.1 million, with an average of $5.1 million.
Each state is guaranteed a minimum allotment; beyond that, funds are allotted based on the proportion of individuals aged 60 and older in each state. Within-state distribution formulas must also reflect the proportion of individuals 60 and over. Federal funds cover 85 percent of the cost of supportive services statewide; the state must contribute not less than 25 percent of the nonfederal share from state or local public sources. The amount states may set aside for conducting outreach demonstrations is capped at 4 percent of funds allotted, after paying for area plan administration. Maintenance of fiscal effort provisions apply, and program funds are to supplement, not supplant, other sources. States may transfer up to 20 percent of funds between this program and the senior nutrition program. Service providers can solicit voluntary contributions, but the contributions must be used to increase services. Under a reporting system developed in response to 1992 legislation, states now report service unit counts, unduplicated client counts, and expenditures by type of service and detailed client characteristics (including indicators of ability to perform activities of daily living). Client and service unit counts are totals for the service as a whole, not just the portion funded from this grant. There has been no national evaluation of this program, and program documents incorporate little reference to research.

To reduce traffic accidents and deaths, injuries, and property damage resulting from accidents. These grants help state safety agencies develop programs to further national and state highway safety objectives. At least 40 percent of a state’s allocation must be passed through to its subdivisions or used by the state on behalf of localities. NHTSA/FHWA have identified nine highway safety program areas of national priority for which effective countermeasures have been developed.
Activities in these areas (including alcohol and drug countermeasures, occupant protection, emergency medical services, and roadway safety) are eligible for funding. Before fiscal year 1998, states proposing such activities had to describe the problem, identify the countermeasure designed to stabilize or reduce it, and provide supporting trend data. If states funded identified countermeasures for priority problems, funding was expedited. If funds were to be used for other problems, additional data and analysis had to be submitted to NHTSA or FHWA for approval. NHTSA funds, accounting for about 90 percent of the grant, were used for projects related to human behavior, and FHWA funds were used for roadway safety. Pedestrian safety, bicycle safety, and speed control programs were jointly administered by both agencies. Beginning in fiscal year 1998, a new performance-based process was established. States are now responsible for setting highway safety goals and implementing programs to achieve them. Federal spending in fiscal year 1997 was $140 million. Matching funds in amounts that vary by activity and circumstances are required. No match is needed for the U.S. territories and Native American programs. If states do not have a highway safety plan that conforms to statutory provisions, formula funds are reduced by not less than 50 percent. A state may receive additional funds under a related incentive program if specific criteria are met. In 1997, financial assistance to states ranged from $340,000 to $13 million, with an average of $2.2 million. The federal share of funding for all state and local highway traffic safety programs is relatively small, generally ranging from 1 to 3 percent. Before fiscal year 1998, states were required to submit annual evaluation reports on activities and projects funded under this program. For each funded program area, states were to describe each project, project-level costs, accomplishments, and status.
Beginning in fiscal year 1998, states were required to submit annual reports describing progress in meeting highway safety goals, using identified performance measures. States collect and report aggregate data on highway deaths and injuries. NHTSA’s first national evaluation of its state grants programs is now in progress. The evaluation will examine whether projects focused on major safety and program needs, the consequences of removing federal highway safety grants, and whether results were compared with planned objectives.

To provide financial assistance to states and territories to support alcohol and other drug abuse prevention, treatment, and rehabilitation activities. Provides funds to be used at the state’s discretion to achieve statutory objectives, including the fulfillment of certain requirements. States set criteria for particular treatment services. At least 35 percent of the state’s grant funds must be used for prevention and treatment activities related to alcohol, at least 35 percent for activities related to other drugs, and at least 20 percent for primary prevention services. States must increase the availability of treatment services for pregnant women and women with dependent children, establish a treatment capacity management program to facilitate admissions of intravenous drug users, make tuberculosis services available to individuals receiving substance abuse treatment, establish and maintain a revolving loan fund for group homes for recovering substance abusers, and improve referrals to treatment. “Designated states” must provide early intervention services for HIV-positive substance abusers. States must also make it unlawful for any manufacturer, retailer, or distributor of tobacco products to sell or distribute any such product to persons under the age of 18; enforce the law by unannounced, random inspections; and substantially meet target inspection failure rates negotiated with the Secretary of Health and Human Services.
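Treating the three percentage floors above (35 percent alcohol, 35 percent other drugs, 20 percent primary prevention) as non-overlapping, which is an assumption made here only for illustration, at most 10 percent of a state’s grant is left unconstrained by them. A minimal sketch with a hypothetical $100 million award:

```python
# Illustrative sketch of the percentage floors described above, assuming for
# simplicity that the three categories do not overlap. The $100 million award
# is hypothetical.
def sabg_floors(award):
    alcohol = award * 35 // 100      # at least 35% for alcohol-related activities
    other_drugs = award * 35 // 100  # at least 35% for other-drug activities
    prevention = award * 20 // 100   # at least 20% for primary prevention
    unconstrained = award - alcohol - other_drugs - prevention  # at most 10%
    return alcohol, other_drugs, prevention, unconstrained

floors = sabg_floors(100_000_000)
```

Under this non-overlap assumption, the hypothetical $100 million award carries floors of $35 million, $35 million, and $20 million, leaving at most $10 million unconstrained by the three requirements.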
Federal spending for fiscal year 1997 was about $1.3 billion. Awards ranged from $70,000 to $181 million. Maintenance of fiscal effort requirements apply, and failure to maintain effort may result in the reduction of a state’s allotment by an equal amount. A state’s grant may be suspended or terminated for material noncompliance with conditions required for the receipt of the grant. States that fail to comply with the tobacco requirement face possible loss of 10 to 40 percent of their award. The state annual report includes a description of services provided; information on needs and treatment capacity, entities funded, and amounts expended per activity; and a statement of progress toward reaching objectives identified for the year. It must include outcome data on under-18 tobacco enforcement activities. Work has begun on identifying data for prevention activities. SAMHSA collects national and state-level data on provider organizations (however funded), services, resources, and clients. It also supports a national survey of substance abuse among the general population. There have been some state-level evaluations of this program, but no national evaluation. SAMHSA has evaluated the effectiveness of publicly funded prevention strategies and treatment methods. It has also developed treatment and prevention protocols and disseminated them through technical assistance activities.

To assist state and local transportation development and improvement. Helps fund state and local activities and projects. Permits a wide array of transportation projects, including construction, mitigation of environmental damage, transit, carpool projects, and bicycle and pedestrian facilities. Funds cannot be used for local roads and rural minor collectors.
Once the funds are distributed to the state, each state must set aside 10 percent for safety construction activities (i.e., hazard elimination and rail-highway crossings) and 10 percent for transportation enhancements, which encompass a broad range of environment-related activities. The state must distribute 50 percent of the funds (62.5 percent of the remaining 80 percent) by population among its urbanized areas of over 200,000 people and the remaining areas of the state. The remaining 30 percent (37.5 percent of the remaining 80 percent) can be used in any area of the state. Federal spending in fiscal year 1997 was $3.9 billion. In general, the federal share is 80 percent, and the state share is 20 percent. For interstate highway projects, the federal share ranges from 86.5 to 90.7 percent. Each state must receive at least 90 percent of every dollar it is estimated to have contributed to the Highway Account of the Highway Trust Fund. States can transfer funds from other transportation formula grants to STP. States are required to contract 10 percent of funds with small businesses owned by disadvantaged persons. The 1991 authorizing legislation contained incentives and sanctions, many of which, including those pertaining to national speed limits and motorcycle helmet laws, were rescinded in 1995. Sanctions for states that fail to have a mandatory seat belt law remain, but DOT has waived penalties for states that meet an alternative standard. For large projects of over $1 billion, comparisons of accomplishments with objectives, including explanations for slippages, cost overruns, or high unit costs, are reported. Where output can be quantified, a computation of cost per unit of output may be required. Financial information is compiled on individual projects as well as the overall program. DOT reports aggregated transportation information collected from national surveys, other federal agencies, states, state subdivisions, and private entities.
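The STP suballocation percentages above reduce to a simple chain of carve-outs, which the sketch below verifies for a hypothetical $100 million apportionment (the dollar figure and function name are illustrative, not drawn from the statute):

```python
# Illustrative arithmetic for the STP suballocation described above.
# Integer math keeps the hypothetical dollar figures exact.
def stp_split(apportionment):
    safety = apportionment * 10 // 100        # 10% safety construction set-aside
    enhancements = apportionment * 10 // 100  # 10% transportation enhancements
    remainder = apportionment - safety - enhancements  # the remaining 80%
    by_population = remainder * 625 // 1000   # 62.5% of remainder = 50% of total
    any_area = remainder * 375 // 1000        # 37.5% of remainder = 30% of total
    return safety, enhancements, by_population, any_area

split = stp_split(100_000_000)
```

For any apportionment, the population-suballocated share works out to 50 percent of the total and the flexible share to 30 percent, matching the parenthetical percentages in the text.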
These data are compiled into basic layers of information: facilities data (the location and connectivity of transportation facilities); service data (carrier locations and services provided); flow data (freight, weight, and vehicular movement); geographic and economic context data; and data on consequences of transportation, such as safety, energy use, and environmental impacts. No national programwide evaluations of this program have been conducted. Projects and particular program components have been evaluated.

To assist state and local education agencies in the reform of elementary and secondary education. Provides funds to support local education activities. The grant award is administered by the state. However, not less than 85 percent of funds are distributed by formula to LEAs. Responsibility for program design and implementation rests with local educational agencies and school personnel. SEAs are expressly prohibited from influencing LEAs’ decisions regarding use of funds, and state oversight is generally restricted to reviewing compliance and fiscal accountability. Funds can be used for local projects and programs in eight broad areas: technology-based reform approaches; acquisition and use of instructional materials; education reform projects, including magnet schools; programs to improve higher order thinking skills among disadvantaged students; adult and student literacy programs; programs for the gifted and talented; school reform programs consistent with the Goals 2000: Educate America Act; and school improvement programs related to the federal title I program of education for the disadvantaged. LEAs have complete discretion in allocating funds across the allowable activities. If its service area includes private schools, the LEA must ascertain whether those schools wish to participate, and if so, it must ensure the equitable participation of private school students. The fiscal year 1997 appropriation was $310 million.
In the prior year, with a total of $275 million, amounts per state ranged from about $1.4 million to $32 million. In 1991-92, the appropriation of $450 million constituted less than 0.5 percent of state education budgets. At that level, the median award for small districts was $5,200; for very large districts, the median was $360,000. Maintenance-of-fiscal-effort and supplement-not-supplant provisions apply. SEAs must report biennially on the use of funds, the types of services provided, and the number of children served. LEAs must provide the state with the information required for fiscal audit and program evaluation.

There have been two national evaluation studies of this program, with reports in 1986 and 1994; both focused on program implementation. States have also conducted evaluations in past years and must again evaluate the effectiveness of statewide and local programs in fiscal year 1998. The Department of Education's nonregulatory guidance encourages LEAs to use approaches that are consistent with principles of effectiveness established through research.

Another program assists in financing the acquisition, construction, leasing, planning, and improvement of facilities and equipment for use in mass transportation service, as well as the payment of operating expenses to improve or continue that service. It provides funds to support public and private mass transportation projects in urbanized areas of more than 50,000 people. Key decisionmaking rests with designated public transit entities or with the governor, depending on the size of the area's population. Funds can be used for transit projects in urbanized areas of 50,000 or more people. All major transit capacity expansions must be preceded by a major investment study that justifies the projects based upon a comprehensive review of their mobility improvements, environmental benefits, cost-effectiveness, and operating efficiencies.
Funded projects must be included in the urbanized area's transportation improvement program and the state transportation improvement program and be approved by FTA and FHWA. Federal spending in fiscal year 1997 was about $2 billion. The federal share ranges from 50 to 90 percent, depending upon the type of activity supported. Authorizing legislation allows for the transfer of funds among various transit and highway transportation programs. Program income cannot be used to refund or reduce the local share of the grant from which it was earned but may be used for the local share of other transit projects.

Transit authorities or states are required to provide milestone, financial, and final project reports and to report significant events that affect the schedule, costs, capacity, or usefulness of funded activities. Milestone reports track performance in terms of goals, reasons for slippage or high unit costs, and outcomes stated in terms of costs per unit. At least every 3 years, the Secretary of Transportation reviews and evaluates the performance of the recipient in carrying out the program, including the extent to which program activities are consistent with proposed activities and the required planning process. All grant recipients must maintain and report systemwide financial and operating information on a quarterly basis. DOT maintains a reporting system, by uniform categories, to accumulate mass transportation financial and operating data. Information includes service descriptions, ridership information, expenditure data, information on funding, descriptions of fleet size and composition, and counts of revenue miles and hours. Outcome measures include uniform calculations of service efficiency, cost efficiency, and service effectiveness.

Program Evaluation: Agencies Challenged by New Demand for Information on Program Results (GAO/GGD-98-53, Apr. 24, 1998).

Performance Measurement and Evaluation: Definitions and Relationships (GAO/GGD-98-26, Apr. 1998).
Balancing Flexibility and Accountability: Grant Program Design in Education and Other Areas (GAO/T-GGD/HEHS-98-96, Feb. 11, 1998).

Federal Education Funding: Multiple Programs and Lack of Data Raise Efficiency and Effectiveness Concerns (GAO/T-HEHS-98-46, Nov. 6, 1997).

The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven (GAO/GGD-97-109, June 2, 1997).

Managing for Results: Analytic Challenges in Measuring Performance (GAO/HEHS/GGD-97-138, May 30, 1997).

Federal Grants: Design Improvements Could Help Federal Resources Go Further (GAO/AIMD-97-7, Dec. 18, 1996).

Safe and Drug-Free Schools: Balancing Accountability With State and Local Flexibility (GAO/HEHS-98-3, Oct. 10, 1997).

Block Grants: Issues in Designing Accountability Provisions (GAO/AIMD-95-226, Sept. 1, 1995).

Block Grants: Characteristics, Experience, and Lessons Learned (GAO/HEHS-95-74, Feb. 9, 1995).

Program Evaluation: Improving the Flow of Information to the Congress (GAO/PEMD-95-1, Jan. 30, 1995).
| Pursuant to a congressional request, GAO conducted a comparative study of block grants and similar programs that give state or local governments substantial flexibility in determining how funds are to be used, focusing on: (1) examining the design characteristics of these programs that have implications for flexibility, accountability, and programs' ability to collect information about performance as envisioned in the Government Performance and Results Act; (2) identifying the kinds and sources of performance information that programs with various characteristics have utilized and the strengths and weaknesses of this information; and (3) providing guidance to legislators and agency officials concerning the information collection options available for programs with various designs. GAO noted that: (1) flexible grants are an adaptable policy tool and are found in fields from urban transit to community mental health; (2) flexible grant programs vary greatly in the kind and degree of flexibility afforded to state or local entities, distribution of accountability across levels of government, and availability of direct measures of program performance; (3) program variation reflects differences in three key design features: (a) whether national objectives for the grant are primarily performance-oriented or primarily fiscal; (b) whether the grant funds a distinct program with its own operating structure or contributes to the stream of funds supporting state or local activities; and (c) whether it supports a single major activity or diverse activities; (4) flexibility is narrowest, but accountability to the federal level clearest, in programs that focus on a single major activity and pursue national performance objectives through a distinct operating structure; (5) flexibility is broadest in programs designed with the fiscal objective of adding to the stream of funds supporting diverse state or local activities; (6) in these broadly flexible programs, the federal agency's 
role is limited to providing funds; (7) program direction and accountability are assigned to the state level; (8) design features also have implications for the availability of performance information; (9) although most reported simple activity or client counts, relatively few flexibility programs collected uniform data on the outcomes of state or local service activities; (10) collecting such data requires conditions that do not exist under many flexible program designs, and even where overall performance of a state or local program can be measured, the amount attributable to federal funding often cannot be separated out; (11) accordingly, flexible programs drew on other sources to obtain an overall picture of performance; (12) understanding grant design features and their implications can assist policymakers in applying the Results Act and in designing or redesigning grant programs; and (13) considering a particular program's national purpose, the federal agency role, and prospects for measuring performance attributable to the program can help agency officials and policymakers understand what program-generated information on results they can realistically expect and when alternative sources of information will be needed.
IAEA is an independent organization affiliated with the United Nations. Its governing bodies include the General Conference, composed of representatives of the 138 IAEA member states, and the 35-member Board of Governors, which provides overall policy direction and oversight. The Secretariat, headed by the Director General, is responsible for implementing the policies and programs of the General Conference and Board of Governors. The United States is a permanent member of the Board of Governors. IAEA derives its authority to establish and administer safeguards from its statute, the Treaty on the Non-Proliferation of Nuclear Weapons and regional nonproliferation treaties, bilateral commitments between states, and project agreements with states. Since the NPT came into force in 1970, it has been subject to review by signatory states every 5 years. The 1995 NPT Review and Extension Conference extended the life of the treaty indefinitely, and the latest review conference occurred in May 2005. Article III of the NPT binds each of the treaty's 184 signatory states that had not manufactured and exploded a nuclear device prior to January 1, 1967 (referred to in the treaty as non-nuclear weapon states) to conclude an agreement with IAEA that applies safeguards to all source and special nuclear material in all peaceful nuclear activities within the state's territory, under its jurisdiction, or carried out anywhere under its control. The five nuclear weapon states that are parties to the NPT—China, France, the Russian Federation, the United Kingdom, and the United States—are not obligated by the NPT to accept IAEA safeguards. However, each nuclear weapon state has voluntarily entered into legally binding safeguards agreements with IAEA and has submitted designated nuclear materials and facilities to IAEA safeguards to demonstrate to the non-nuclear weapon states its willingness to share in the administrative and commercial costs of safeguards. (App.
I lists states that are subject to safeguards, as of August 2006.) India, Israel, and Pakistan are not parties to the NPT or other regional nonproliferation treaties. India and Pakistan are known to have nuclear weapons programs and detonated several nuclear devices in May 1998. Israel is also believed to have produced nuclear weapons. Additionally, North Korea joined the NPT in 1985 and briefly accepted safeguards in 1992 and 1993 but expelled inspectors and threatened to withdraw from the NPT when IAEA inspections uncovered evidence of undeclared plutonium production. North Korea announced its withdrawal from the NPT in early 2003, which, under the terms of the treaty, terminated its comprehensive safeguards agreement.

IAEA's safeguards objectives, as traditionally applied under comprehensive safeguards agreements, are framed in terms of the amount of a specific type of nuclear material needed to produce a nuclear weapon and the time it would take a state to divert this material from peaceful use and produce a weapon. IAEA attempts to meet these objectives through a set of activities by which it seeks to verify that nuclear material subject to safeguards is not diverted to nuclear weapons or other proscribed purposes. For example, IAEA inspectors visit a facility at certain intervals to ensure that any diversion of nuclear material is detected before a state has had time to produce a nuclear weapon. IAEA also uses material-accounting measures to verify the quantities of nuclear material declared to the agency and any changes in those quantities over time. Additionally, containment measures are used to control access to and the movement of nuclear material. Finally, IAEA deploys surveillance devices, such as video cameras, to detect the movements of nuclear material and discourage tampering with IAEA's containment measures.

The Nuclear Suppliers Group was established in 1975 after India tested a nuclear explosive device.
In 1978, the Suppliers Group published its first set of guidelines governing the exports of nuclear materials and equipment. These guidelines established several requirements for Suppliers Group members, including the acceptance of IAEA safeguards at facilities using controlled nuclear-related items. In 1992, the Suppliers Group broadened its guidelines by requiring countries receiving nuclear exports to agree to IAEA’s safeguards as a condition of supply. As of August 2006, the Nuclear Suppliers Group had 45 members, including the United States. (See app. II for a list of signatory countries.) IAEA has taken steps to strengthen safeguards by more aggressively seeking assurances that a country is not pursuing a clandestine nuclear program. In a radical departure from past practices of only verifying the peaceful use of a country’s declared nuclear material at declared facilities, IAEA has begun to develop the capability to independently evaluate all aspects of a country’s nuclear activities. The first strengthened safeguards steps, which began in the early 1990s, increased the agency’s ability to monitor declared and undeclared activities at nuclear facilities. These measures were implemented under the agency’s existing legal authority under comprehensive safeguards agreements and include (1) conducting short notice and unannounced inspections, (2) collecting and analyzing environmental samples to detect traces of nuclear material, and (3) using measurement and surveillance systems that operate unattended and can be used to transmit data about the status of nuclear materials directly to IAEA headquarters. The second series of steps began in 1997 when IAEA’s Board of Governors approved the Additional Protocol. 
Under the Additional Protocol, IAEA has the right, among other things, to (1) receive more comprehensive information about a country's nuclear activities, such as research and development activities, and (2) conduct "complementary access," which enables IAEA to expand its inspection rights for the purpose of ensuring the absence of undeclared nuclear material and activities. Because the Additional Protocol broadens IAEA's authority and the requirements on countries under existing safeguards agreements, each country must take certain actions to bring it into force.

For each country with a safeguards agreement, IAEA independently evaluates all information available about the country's nuclear activities and draws conclusions regarding the country's compliance with its safeguards commitments. A major source of information available to the agency is data submitted by countries to IAEA under their safeguards agreements, referred to as state declarations. Countries are required to provide an expanded declaration of their nuclear activities within 180 days of bringing the Additional Protocol into force. Examples of information provided in an Additional Protocol declaration include the manufacturing of key nuclear-related equipment; research and development activities related to the nuclear fuel cycle; the use and contents of buildings on a nuclear site; and the location and operational status of uranium mines. The agency uses the state declarations as a starting point and, through its own review, determines whether the information a country has provided is consistent with all other information available to the agency. IAEA uses various types of information to verify the state declaration. Inspections of nuclear facilities and other locations with nuclear material are the cornerstone of the agency's data collection efforts.
Under the Additional Protocol, IAEA has the authority to conduct complementary access at any place on a site or other location with nuclear material in order to ensure the absence of undeclared nuclear material and activities, confirm the decommissioned status of facilities where nuclear material was used or stored, and resolve questions or inconsistencies related to the correctness and completeness of the information provided by a country on activities at other declared or undeclared locations. During complementary access, IAEA inspectors may carry out a number of activities, including (1) making visual observations, (2) collecting environmental samples, (3) using radiation detection equipment and measurement devices, and (4) applying seals. In 2004, IAEA conducted 124 complementary access visits in 27 countries.

In addition to its verification activities, IAEA uses other sources of information to evaluate countries' declarations. These sources include information from the agency's internal databases, open sources, satellite imagery, and outside groups. The agency established two new offices within the Department of Safeguards to focus primarily on open source and satellite imagery data collection. Analysts use Internet searches to acquire information generally available to the public from open sources, such as scientific literature, trade and export publications, commercial companies, and the news media. In addition, the agency uses commercially available satellite imagery to supplement the information it receives through its open source collection. Satellite imagery is used to monitor the status and condition of declared nuclear facilities and to verify state declarations for certain sites. The agency also uses its own databases, such as those for nuclear safety, nuclear waste, and technical cooperation, to expand its general knowledge of countries' nuclear and nuclear-related activities.
In some cases, IAEA receives information from third parties, including other countries. Department of State and IAEA officials told us that strengthened safeguards measures have successfully revealed previously undisclosed nuclear activities in Iran, South Korea, and Egypt. Specifically, IAEA and Department of State officials noted that strengthened safeguards measures, such as collecting and analyzing environmental samples, helped the agency verify some of Iran’s nuclear activities. The measures also allowed IAEA to conclude in September 2005 that Iran was not complying with its safeguards obligations because it failed to report all of its nuclear activities to IAEA. As a result, in July 2006, Iran was referred to the U.N. Security Council, which in turn demanded that Iran suspend its uranium enrichment activities or face possible diplomatic and economic sanctions. In August 2004, as a result of preparations to submit its initial declaration under the Additional Protocol, South Korea notified IAEA that it had not previously disclosed nuclear experiments involving the enrichment of uranium and plutonium separation. IAEA sent a team of inspectors to South Korea to investigate this case. In November 2004, IAEA’s Director General reported to the Board of Governors that although the quantities of nuclear material involved were not significant, the nature of the activities and South Korea’s failure to report these activities in a timely manner posed a serious concern. IAEA is continuing to verify the correctness and completeness of South Korea’s declarations. IAEA inspectors have investigated evidence of past undeclared nuclear activities in Egypt based on the agency’s review of open source information that had been published by current and former Egyptian nuclear officials. 
Specifically, in late 2004, the agency found evidence that Egypt had engaged in undeclared activities at least 20 years ago by using small amounts of nuclear material to conduct experiments related to producing plutonium and highly enriched uranium. In January 2005, the Egyptian government announced that it was fully cooperating with IAEA and that the matter was limited in scope. IAEA inspectors have made several visits to Egypt to investigate this matter, and IAEA's Secretariat reported these activities to its Board of Governors.

Despite these successes, a group of safeguards experts recently cautioned that a determined country can still conceal a nuclear weapons program. IAEA faces a number of limitations that affect its ability to draw conclusions, with absolute assurance, about whether a country is developing a clandestine nuclear weapons program. For example, IAEA does not have unfettered inspection rights and cannot visit suspected sites anywhere at any time. Under the Additional Protocol, complementary access to resolve questions related to the correctness and completeness of the information provided by the country, or to resolve inconsistencies, must usually be arranged with at least 24 hours' advance notice. Complementary access to buildings on sites where IAEA inspectors are already present is usually conducted with 2 hours' advance notice. Furthermore, IAEA officials told us that practical problems also restrict access. For example, inspectors must be issued a visa to visit certain countries, a process that cannot normally be completed in less than 24 hours. In some cases, nuclear sites are in remote locations, and IAEA inspectors need to make travel arrangements, such as helicopter transportation, in advance, which requires that the country be notified prior to the visit.
A November 2004 study by a group of safeguards experts appointed by IAEA’s Director General evaluated the agency’s safeguards program to examine how effectively and efficiently strengthened safeguards measures were being implemented. Specifically, the group’s mission was to evaluate the progress, effectiveness, and impact of implementing measures to enhance the agency’s ability to draw conclusions about the non-diversion of nuclear material placed under safeguards and, for relevant countries, the absence of undeclared nuclear material and activities. The group concluded that generally IAEA had done a very good job implementing strengthened safeguards despite budgetary and other constraints. However, the group noted that IAEA’s ability to detect undeclared activities remains largely untested. If a country decides to divert nuclear material or conduct undeclared activities, it will deliberately work to prevent IAEA from discovering this. Furthermore, IAEA and member states should be clear that the conclusions drawn by the agency cannot be regarded as absolute. This view has been reinforced by the former Deputy Director General for Safeguards who has stated that even for countries with strengthened safeguards in force, there are limitations on the types of information and locations accessible to IAEA inspectors. There are a number of weaknesses that hamper IAEA’s ability to effectively implement strengthened safeguards. IAEA has only limited information about the nuclear activities of 4 key countries that are not members of the NPT—India, Israel, North Korea, and Pakistan. India, Israel, and Pakistan have special agreements with IAEA that limit the agency’s activities to monitoring only specific material, equipment, and facilities. However, since these countries are not signatories to the NPT, they do not have comprehensive safeguards agreements with IAEA, and are not required to declare all of their nuclear material to the agency. 
In addition, these countries are only required to declare exports of nuclear material previously declared to IAEA. With the recent revelations of the illicit international trade in nuclear material and equipment, IAEA officials stated that they need more information on these countries’ nuclear exports. For North Korea, IAEA has even less information, since the country expelled IAEA inspectors and removed surveillance equipment at nuclear facilities in December 2002 and withdrew from the NPT in January 2003. These actions have raised widespread concern that North Korea diverted some of its nuclear material to produce nuclear weapons. Another major weakness is that more than half, or 111 out of 189, of the NPT signatories have not yet brought the Additional Protocol into force, as of August 2006. (App. I lists the status of countries’ safeguards agreements with IAEA). Without the Additional Protocol, IAEA must limit its inspection efforts to declared nuclear material and facilities, making it harder to detect clandestine nuclear programs. Of the 111 countries that have not adopted the Additional Protocol, 21 are engaged in significant nuclear activities, including Egypt, North Korea, and Syria. In addition, safeguards are significantly limited or not applied in about 60 percent, or 112 out of 189, of the NPT signatory countries—either because they have an agreement (known as a small quantities protocol) with IAEA, and are not subject to most safeguards measures, or because they have not concluded a comprehensive safeguards agreement with IAEA. Countries with small quantities of nuclear material make up about 41 percent of the NPT signatories and about one-third of the countries that have the Additional Protocol in force. Since 1971, IAEA’s Board of Governors has authorized the Director General to conclude an agreement, known as a small quantities protocol, with 90 countries and, as of August 2006, 78 of these agreements were in force. 
IAEA’s Board of Governors has approved the protocols for these countries without having IAEA verify that they met the requirements for it. Even if these countries bring the Additional Protocol into force, IAEA does not have the right to conduct inspections or install surveillance equipment at certain nuclear facilities. According to IAEA and Department of State officials, this is a weakness in the agency’s ability to detect clandestine nuclear activities or transshipments of nuclear material and equipment through the country. In September 2005, the Board of Governors directed IAEA to negotiate with countries to make changes to the protocols, including reinstating the agency’s right to conduct inspections. As of August 2006, IAEA amended the protocols for 4 countries—Ecuador, Mali, Palau, and Tajikistan. The application of safeguards is further limited because 31 countries that have signed the NPT have not brought into force a comprehensive safeguards agreement with IAEA. The NPT requires non-nuclear weapons states to conclude comprehensive safeguards agreements with IAEA within 18 months of becoming a party to the Treaty. However, IAEA’s Director General has stated that these 31 countries have failed to fulfill their legal obligations. Moreover, 27 of the 31 have not yet brought comprehensive safeguards agreements into force more than 10 years after becoming party to the NPT, including Chad, Kenya, and Saudi Arabia. Last, IAEA is facing a looming human capital crisis that may hamper the agency’s ability to meet its safeguards mission. In 2005, we reported that about 51 percent, or 38 out of 75, of IAEA’s senior safeguards inspectors and high-level management officials, such as the head of the Department of Safeguards and the directors responsible for overseeing all inspection activities of nuclear programs, are retiring in the next 5 years. According to U.S. 
officials, this significant loss of knowledge and expertise could compromise the quality of analysis of countries’ nuclear programs. For example, several inspectors with expertise in uranium enrichment techniques, which is a primary means to produce nuclear weapons material, are retiring at a time when demand for their skills in detecting clandestine nuclear activities is growing. While IAEA has taken a number of steps to address these human capital issues, officials from the Department of State and the U.S. Mission to the U.N. System Organizations in Vienna have expressed concern that IAEA is not adequately planning to replace staff with critical skills needed to fulfill its strengthened safeguards mission. The Nuclear Suppliers Group, along with other multilateral export control groups, has helped stop, slow, or raise the costs of nuclear proliferation, according to nonproliferation experts. For example, as we reported in 2002, the Suppliers Group helped convince Argentina and Brazil to accept IAEA safeguards on their nuclear programs in exchange for expanded access to international cooperation for peaceful nuclear purposes. The Suppliers Group, along with other multilateral export control groups, has significantly reduced the availability of technology and equipment available to countries of concern, according to a State Department official. Moreover, nuclear export controls have made it more difficult, more costly, and more time consuming for proliferators to obtain the expertise and material needed to advance their nuclear program. The Nuclear Suppliers Group has also helped IAEA verify compliance with the NPT. In 1978, the Suppliers Group published the first guidelines governing exports of nuclear materials and equipment. These guidelines established several member requirements, including the requirement that members adhere to IAEA safeguards standards at facilities using controlled nuclear-related items. 
Subsequently, in 1992, the Nuclear Suppliers Group broadened its guidelines by requiring that members insist that nonmember states have IAEA safeguards on all nuclear material and facilities as a condition of supply for their nuclear exports. With the revelation of Iraq's nuclear weapons program, the Suppliers Group also created an export control system for dual-use items, establishing new controls for items that did not automatically fall under IAEA safeguards requirements.

Despite these benefits, a number of weaknesses could limit the Nuclear Suppliers Group's ability to curb nuclear proliferation. Members of the Suppliers Group do not share complete export licensing information. Specifically, members do not always share information about licenses they have approved or denied for the sale of controversial items to nonmember states. Without this shared information, a member country could inadvertently license a controversial item to a country that has already been denied a license by another Suppliers Group member state. Furthermore, Suppliers Group members have not promptly reviewed and agreed upon common lists of items to control and approaches to controlling them. Each member must make changes to its national export control policies after members agree to change items on the control list. If agreed-upon changes to control lists are not adopted at the same time by all members, proliferators could exploit the time lags to obtain sensitive technologies by focusing on the members that are slowest to incorporate the changes, and sensitive items may still be traded to countries of concern.

In addition, there are a number of obstacles to efforts aimed at strengthening the Nuclear Suppliers Group and other multilateral export control regimes. First, efforts to strengthen export controls have been hampered by a requirement that all members reach consensus on every decision made. Under the current process, a single member can block new reforms. U.S.
and foreign government officials and nonproliferation experts all stressed that the regimes are consensus-based organizations and depend on the like-mindedness or cohesion of their members to be effective. However, members have found it especially difficult to reach consensus on such issues as making changes to procedures and control lists. The Suppliers Group's reliance on consensus decisionmaking will be tested by the United States' request to exempt India from the Suppliers Group requirement that countries accept IAEA safeguards at all nuclear facilities. Second, since membership in the Suppliers Group is voluntary and nonbinding, there are no means to enforce compliance with members' nonproliferation commitments. For example, the Suppliers Group has no direct means to impede Russia's export of nuclear fuel to India, an act that the U.S. government said violated Russia's commitment. Third, the rapid pace of nuclear technological change and the growing trade in sensitive items among proliferators complicate efforts to keep control lists current because the lists need to be updated more frequently. To help strengthen these regimes, GAO recommended in October 2002 that the Secretary of State establish a strategy that includes ways for Nuclear Suppliers Group members to improve information sharing, implement changes to export controls more consistently, and identify organizational changes that could help reform the group's activities. In June 2006, the Nuclear Suppliers Group announced that it had revised its guidelines to improve information sharing. However, despite our recommendation, it has not yet agreed to share greater and more detailed information on approved exports of sensitive transfers to nonmember countries. Nevertheless, the Suppliers Group is examining changes to its procedures that would assist IAEA's efforts to strengthen safeguards.
For example, at the 2005 Nuclear Suppliers Group plenary meeting, members discussed changing the requirements for exporting nuclear material and equipment by requiring nonmember countries to adopt IAEA’s Additional Protocol as a condition of supply. If approved by the Suppliers Group, this action would complement IAEA’s efforts to verify compliance with the NPT.

Reducing the formidable proliferation risks posed by former Soviet weapons of mass destruction (WMD) assets is a U.S. national security interest. Since the fall of the Soviet Union, the United States, through a variety of programs managed by the Departments of Energy (DOE), Defense (DOD), and State, has helped Russia and other former Soviet countries to secure nuclear material and warheads, detect illicitly trafficked nuclear material, eliminate excess stockpiles of weapons-usable nuclear material, and halt the continued production of weapons-grade plutonium. From fiscal year 1992 through fiscal year 2006, the Congress appropriated about $7 billion for nuclear nonproliferation efforts. However, U.S. assistance programs have faced a number of challenges, such as a lack of access to key sites and corruption among foreign officials, that could compromise the effectiveness of U.S. assistance.

DOE’s Material Protection, Control, and Accounting (MPC&A) program has worked with Russia and other former Soviet countries since 1994 to provide enhanced physical protection systems at sites with weapons-usable nuclear material and warheads, implement material control and accounting upgrades to help keep track of the quantities of nuclear materials at sites, and consolidate material into fewer, more secure buildings. GAO last reported on the MPC&A program in 2003. At that time, a lack of access to many sites in Russia’s nuclear weapons complex had significantly impeded DOE’s progress in helping Russia to secure its nuclear material.
We reported that DOE had completed work at only a limited number of buildings in Russia’s nuclear weapons complex, a network of sites involved in the construction of nuclear weapons where most of the nuclear material in Russia is stored. According to DOE, by the end of September 2006, the agency will have helped to secure 175 buildings with weapons-usable nuclear material in Russia and the former Soviet Union and 39 Russian Navy nuclear warhead sites. GAO is currently reexamining DOE’s efforts, including the progress DOE has made since 2003 in securing nuclear material and warheads in Russia and other countries and the challenges DOE faces in completing its work.

While securing nuclear materials and warheads where they are stored is considered the first layer of defense against nuclear theft, there is no guarantee that such items will not be stolen or lost. Recognizing this fact, DOE, DOD, and State, through seven different programs, have provided radiation detection equipment since 1994 to 36 countries, including many countries of the former Soviet Union. These programs seek to combat nuclear smuggling and are seen as a second line of defense against nuclear theft. The largest and most successful of these efforts is DOE’s Second Line of Defense (SLD) program. We reported in March 2006 that, through the SLD program, DOE had provided radiation detection equipment and training at 83 sites in Russia, Greece, and Lithuania since 1998. However, we also noted that U.S. radiation detection assistance efforts faced challenges, including corruption of some foreign border security officials, technical limitations of some radiation detection equipment, and inadequate maintenance of some equipment. To address these challenges, U.S.
agencies plan to take a number of steps, including combating corruption by installing communications links between individual border sites and national command centers so that detection alarm data can be evaluated simultaneously by multiple officials.

The United States is also helping Russia to eliminate excess stockpiles of nuclear material (highly enriched uranium and plutonium). In February 1993, the United States agreed to purchase from Russia, over a 20-year period, 500 metric tons of highly enriched uranium (HEU) extracted from dismantled Russian nuclear weapons. Russia agreed to dilute, or blend down, the material into low enriched uranium (LEU), which poses significantly less proliferation risk, so that it could be made into fuel for commercial nuclear power reactors before shipping it to the United States. As of June 27, 2006, 276 metric tons of Russian HEU—derived from more than 11,000 dismantled nuclear weapons—have been downblended into LEU for use in U.S. commercial nuclear reactors. Similarly, in 2000, the United States and Russia committed to the transparent disposition of 34 metric tons each of weapons-grade plutonium. The plutonium will be converted into a more proliferation-resistant form called mixed-oxide (MOX) fuel that will be used in commercial nuclear power plants. In addition to constructing a MOX fuel fabrication plant at its Savannah River Site, DOE is also assisting Russia in constructing a similar facility for the Russian plutonium.

Russia’s continued operation of three plutonium production reactors poses a serious proliferation threat. These reactors produce about 1.2 metric tons of plutonium each year—enough for about 300 nuclear weapons. DOE’s Elimination of Weapons-Grade Plutonium Production program seeks to facilitate the reactors’ closure by building or refurbishing two fossil fuel plants that will replace the heat and electricity that will be lost with the shutdown of Russia’s three plutonium production reactors.
DOE plans to complete the first of the two replacement plants in 2008 and the second in 2011. When we reported on this program in June 2004, we noted that DOE faced challenges in implementing its program, including ensuring Russia’s commitment to shutting down the reactors, the rising cost of building the replacement fossil fuel plants, and concerns about the thousands of Russian nuclear workers who will lose their jobs when the reactors are shut down. We made a number of recommendations, which DOE has implemented, including reaching agreement with Russia on the specific steps to be taken to shut down the reactors and developing a plan to work with other U.S. government programs to assist Russia in finding alternate employment for the skilled nuclear workers who will lose their jobs when the reactors are shut down.

Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have at this time.

For future contacts regarding this testimony, please contact Gene Aloise at (202) 512-3841 or Joseph Christoff at (202) 512-8979. R. Stockton Butler, Miriam A. Carroll, Leland Cogliani, Lynn Cothern, Muriel J. Forster, Jeffrey Phillips, and Jim Shafer made key contributions to this testimony. Beth Hoffman León, Stephen Lord, Audrey Solis, and Pierre Toureille provided technical assistance.

Although North Korea concluded a comprehensive safeguards agreement with IAEA in 1992, it announced its withdrawal from the NPT in January 2003.

- Secures radiological sources no longer needed in the U.S. and locates, identifies, recovers, consolidates, and enhances the security of radioactive materials outside the U.S.
- Global Nuclear Material Threat Reduction: Eliminates Russia’s use of highly enriched uranium (HEU) in civilian nuclear facilities; returns U.S.- and Russian-origin HEU and spent nuclear fuel from research reactors around the world; secures plutonium-bearing spent nuclear fuel from reactors in Kazakhstan; and addresses nuclear and radiological materials at vulnerable locations throughout the world.
- Provides replacement fossil-fuel energy that will allow Russia to shut down its three remaining weapons-grade plutonium production reactors.
- Develops and delivers technology applications to strengthen capabilities to detect and verify undeclared nuclear programs; enhances the physical protection and proper accounting of nuclear material; and assists foreign national partners in meeting safeguards commitments.
- Provides meaningful employment for former weapons of mass destruction scientists.
- Provides material protection, control, and accounting upgrades to enhance the security of Navy HEU fuel and nuclear material.
- Provides material protection, control, and accounting upgrades to nuclear weapons, uranium enrichment, and material processing and storage sites.
- Enhances the security of proliferation-attractive nuclear material in Russia by supporting material protection, control, and accounting upgrade projects at Russian civilian nuclear facilities.
- Develops national and regional resources in the Russian Federation to help establish and sustain effective operation of upgraded nuclear material protection, control, and accounting systems.
- Negotiates cooperative efforts with the Russian Federation and other key countries to strengthen the capability of enforcement officials to detect and deter illicit trafficking of nuclear and radiological material across international borders. This is accomplished through the detection, location, and identification of nuclear and nuclear-related materials; the development of response procedures and capabilities; and the establishment of required infrastructure elements to support the control of these materials.
- HEU Transparency Implementation project: Monitors Russia to ensure that low enriched uranium (LEU) sold to the U.S. for civilian nuclear power plants is derived from weapons-usable HEU removed from dismantled Russian nuclear weapons.
- Disposes of surplus domestic HEU by down-blending it.
- Surplus U.S. Plutonium Disposition project: Disposes of surplus domestic plutonium by fabricating it into mixed oxide (MOX) fuel for irradiation in existing commercial nuclear reactors.
- Supports Russia’s efforts to dispose of its weapons-grade plutonium by working with the international community to help pay for Russia’s program.
- Provides training and equipment to assist Russia in determining the reliability of its guard forces.
- Enhances the safety and security of Russian nuclear weapons storage sites through the use of vulnerability assessments to determine specific requirements for upgrades. DOD will develop security designs to address those vulnerabilities and install the equipment necessary to bring security standards into line with those at U.S. nuclear weapons storage facilities.
- Nuclear Weapons Transportation: Assists Russia in shipping nuclear warheads to more secure sites or dismantlement locations. Assists Russia in maintaining nuclear weapons cargo railcars, funding maintenance of railcars until no longer feasible and then purchasing replacement railcars to maintain 100 cars in service. DOD will procure 15 guard railcars to replace those retired from service; the guard railcars will be capable of monitoring security systems in the cargo railcars and transporting security force personnel.
- Provides emergency response vehicles containing hydraulic cutting tools, pneumatic jacks, and safety gear to enhance Russia’s ability to respond to possible accidents in transporting nuclear weapons. Meteorological, radiation detection and monitoring, and communications equipment is also included.

Combating Nuclear Smuggling: Challenges Facing U.S.
Efforts to Deploy Radiation Detection Equipment in Other Countries and in the United States. GAO-06-558T. Washington, D.C.: March 28, 2006.

Combating Nuclear Smuggling: Corruption, Maintenance, and Coordination Problems Challenge U.S. Efforts to Provide Radiation Detection Equipment to Other Countries. GAO-06-311. Washington, D.C.: March 14, 2006.

Nuclear Nonproliferation: IAEA Has Strengthened Its Safeguards and Nuclear Security Programs, but Weaknesses Need to Be Addressed. GAO-06-93. Washington, D.C.: October 7, 2005.

Preventing Nuclear Smuggling: DOE Has Made Limited Progress in Installing Radiation Detection Equipment at Highest Priority Foreign Seaports. GAO-05-375. Washington, D.C.: March 31, 2005.

Nuclear Nonproliferation: DOE’s Effort to Close Russia’s Plutonium Production Reactors Faces Challenges, and Final Shutdown Is Uncertain. GAO-04-662. Washington, D.C.: June 4, 2004.

Weapons of Mass Destruction: Additional Russian Cooperation Needed to Facilitate U.S. Efforts to Improve Security at Russian Sites. GAO-03-482. Washington, D.C.: March 24, 2003.

Nonproliferation: Strategy Needed to Strengthen Multilateral Export Control Regimes. GAO-03-43. Washington, D.C.: October 25, 2002.

Nuclear Nonproliferation: U.S. Efforts to Help Other Countries Combat Nuclear Smuggling Need Strengthened Coordination and Planning. GAO-02-426. Washington, D.C.: May 16, 2002.

Nuclear Nonproliferation: Implications of the U.S. Purchase of Russian Highly Enriched Uranium. GAO-01-148. Washington, D.C.: December 15, 2000.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The International Atomic Energy Agency’s (IAEA) safeguards system has been a cornerstone of U.S. efforts to prevent nuclear weapons proliferation since the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) was adopted in 1970. Safeguards allow IAEA to verify countries’ compliance with the NPT. Since the discovery in 1991 of a clandestine nuclear weapons program in Iraq, IAEA has strengthened its safeguards system. In addition to IAEA’s strengthened safeguards program, there are other U.S. and international efforts that have helped stem the spread of nuclear materials and technology that could be used for nuclear weapons programs. This testimony is based on GAO’s report on IAEA safeguards issued in October 2005 (Nuclear Nonproliferation: IAEA Has Strengthened Its Safeguards and Nuclear Security Programs, but Weaknesses Need to Be Addressed, GAO-06-93 [Washington, D.C.: Oct. 7, 2005]). This testimony is also based on previous GAO work related to the Nuclear Suppliers Group—a group of more than 40 countries that have pledged to limit trade in nuclear materials, equipment, and technology to only countries that are engaged in peaceful nuclear activities—and U.S. assistance to Russia and other countries of the former Soviet Union for the destruction, protection, and detection of nuclear material and weapons.

IAEA has taken steps to strengthen safeguards, including conducting more intrusive inspections, to seek assurances that countries are not developing clandestine weapons programs. IAEA has begun to develop the capability to independently evaluate all aspects of a country’s nuclear activities. This is a radical departure from the past practice of only verifying the peaceful use of a country’s declared nuclear material. However, despite successes in uncovering some countries’ undeclared nuclear activities, safeguards experts cautioned that a determined country can still conceal a nuclear weapons program.
In addition, a number of weaknesses limit IAEA’s ability to implement strengthened safeguards. First, IAEA has a limited ability to assess the nuclear activities of four key countries that are not NPT members—India, Israel, North Korea, and Pakistan. Second, more than half of the NPT signatories have not yet brought the Additional Protocol, which is designed to give IAEA new authority to search for clandestine nuclear activities, into force. Third, safeguards are significantly limited or not applied for about 60 percent of NPT signatories, either because they possess small quantities of nuclear material and are exempt from inspections or because they have not concluded a comprehensive safeguards agreement. Finally, IAEA faces a looming human capital crisis caused by the large number of inspectors and safeguards management personnel expected to retire in the next 5 years.

In addition to IAEA’s strengthened safeguards program, other U.S. and international efforts have helped stem the spread of nuclear materials and technology. The Nuclear Suppliers Group has helped to constrain trade in nuclear material and technology that could be used to develop nuclear weapons. However, a number of weaknesses could limit the Nuclear Suppliers Group’s ability to curb proliferation. For example, members of the Suppliers Group do not always share information about licenses they have approved or denied for the sale of controversial items to nonmember states. Without this shared information, a member country could inadvertently license a controversial item to a country that has already been denied a license by another member state. Since the early 1990s, U.S. nonproliferation programs have helped Russia and other former Soviet countries to, among other things, secure nuclear material and warheads, detect illicitly trafficked nuclear material, and eliminate excess stockpiles of weapons-usable nuclear material.
However, these programs face a number of challenges that could compromise their ongoing effectiveness. For example, a lack of access to many sites in Russia’s nuclear weapons complex has significantly impeded the Department of Energy’s progress in helping Russia secure its nuclear material. U.S. radiation detection assistance efforts also face challenges, including corruption of some foreign border security officials, technical limitations of some radiation detection equipment, and inadequate maintenance of some equipment.
In our December 2009 report, we found that the law enforcement agencies we surveyed generally reported finding FinCEN’s services and products useful, citing direct access to Bank Secrecy Act (BSA) data, on-site liaisons, and access to financial information on people or organizations suspected of being involved in significant money laundering or terrorist financing activities—known as the 314(a) process—as among the most useful. However, we found that FinCEN could (1) better inform law enforcement of the types of complex analytic products that it can provide, (2) more clearly define the types of requests for complex analytic support that it will accept, and (3) actively solicit input on the development of complex analytic products in order to help law enforcement better utilize FinCEN’s expertise and enhance the value of the products it provides to law enforcement. Finally, we found that while FinCEN has taken initial steps to more actively solicit law enforcement input on proposed regulatory actions, FinCEN lacks a mechanism to allow law enforcement agencies to submit sensitive information regarding the potential impact of proposed regulatory actions on financial crimes investigations.

Law enforcement agencies cited direct access to BSA data, the 314(a) process, and on-site liaisons as the most useful services FinCEN provides. Most law enforcement agencies responding to our survey (16 out of 20) cited direct access to BSA data as most useful, and 19 out of 22 agencies responding indicated that BSA data was the FinCEN service they used most often. Liaisons from three of FinCEN’s top five federal law enforcement customers noted that direct access to the BSA database provides law enforcement a means to access these data in order to help identify, deter, and detect money laundering or other potential financial crimes related to a range of criminal activity.
As a result of the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act of 2001 (USA PATRIOT Act), FinCEN also introduced a new tool to further assist federal law enforcement agencies in their investigations of financial crimes. This tool, developed by FinCEN in response to Section 314(a) of the USA PATRIOT Act, enables federal law enforcement agencies to reach out to financial institutions across the country for potential information related to financial crimes investigations. FinCEN facilitates the 314(a) process through the use of a secure communications system. This system allows law enforcement to quickly locate financial data, such as open accounts and financial transactions, related to ongoing investigations of persons, entities, or organizations suspected of being involved in significant money laundering or terrorist financing activities. Federal law enforcement agencies reported that the 314(a) process is a key service offered by FinCEN that provides case-specific and timely information to support ongoing law enforcement investigations. Specifically, all 11 federal agencies we surveyed that had a basis to judge the 314(a) process responded that it was either very or extremely helpful.

Finally, law enforcement agencies reported that being able to maintain agency liaisons on-site at FinCEN is another valuable service FinCEN provides, facilitating law enforcement agency access to FinCEN’s services and products. Specifically, all 9 of the federal law enforcement agencies that indicated in response to the questionnaire that they had on-site liaisons reported that this service was extremely helpful. FinCEN has sought to increase its production of more complex analytic products, which law enforcement agencies report are also helpful in financial crimes investigations.
As more law enforcement agencies gained the ability to directly access the BSA data and conduct their own searches of the data, their reliance on FinCEN to conduct basic queries on their behalf has decreased. We reported that from 2004 through 2007, requests to FinCEN to conduct such queries decreased 80 percent, from 2,048 to 409. As a result, FinCEN has identified a need to redefine its role in supporting law enforcement agencies and to enhance the value and relevance of its analytic work. As part of this effort, in recent years FinCEN has sought to increase its production of more sophisticated complex analytic products. These products range from complex tactical case support requiring large-scale BSA data analysis to a variety of strategic projects, studies, and trend analyses intended to identify and explain money laundering methodologies or assess threats posed by large-scale money laundering and terrorist financing activities. For example, in 2007 FinCEN provided a study to one law enforcement agency that identified currency flows between the United States and another country, which helped this agency to identify potential patterns in drug trafficking.

Based on responses to our survey and interviews, law enforcement agencies reported general satisfaction with FinCEN’s analytic products. For example, when asked why they requested analytic support from FinCEN, 15 out of 17 agencies that indicated they had made such requests reported that they did so because they believed FinCEN has unique expertise related to analyzing the BSA data. Additionally, liaisons from all of FinCEN’s top five federal law enforcement customers specifically highlighted technical reference manuals as one of the most useful complex analytic products FinCEN produces. FinCEN’s technical reference manuals provide practical information on a variety of issues, including how particular financial transfer or payment mechanisms may be used to launder money.
FinCEN could better inform law enforcement about the types of complex analytic products it can provide and when those products become available. We reported that according to liaisons from three of FinCEN’s top five federal law enforcement customers, FinCEN does not provide detailed information about each type of product that would help law enforcement agencies to fully understand the various types of support FinCEN can provide. Senior ALD officials also acknowledged that they could clarify and better communicate to their law enforcement customers the various types of complex analytic products FinCEN can provide. In addition, in both interviews and in response to open-ended survey questions, officials from 7 of the 25 law enforcement agencies we surveyed, including three of FinCEN’s top five federal law enforcement customers, also indicated that they would like more information about when completed products become available. These liaisons noted that because FinCEN does not actively communicate with them about when completed products are available, they may not be aware of all of FinCEN’s products that could be useful in their investigations of financial crimes. Similarly, an official from one of FinCEN’s top five federal law enforcement customers noted that, in some cases, analyses FinCEN conducts for one customer might also be useful to the investigations of other financial crimes. In an internal report generated by ALD staff in August 2008, ALD officials acknowledged that law enforcement liaisons reported that they would like FinCEN to provide clear guidance on the dissemination of its products. FinCEN officials also noted that they typically observe the “third-party rule” on dissemination of information obtained from the requesting agency and, in some cases, this may limit their ability to share products that are completed in response to a request from a single customer. 
The rule generally provides that information properly released by one agency to another agency cannot be released by the recipient agency to a third agency without the prior knowledge and consent of the agency that originally provided the information. The third-party rule applies to all data and information FinCEN receives from the agencies with which it works on a specific project. However, officials further stated that they are committed to looking for ways to better publicize FinCEN’s analytic work and will continue to do so within the framework of adequately protecting the information provided to them. While we recognize the need for FinCEN to protect sensitive information, establishing a process to clarify and communicate to law enforcement when and under what circumstances FinCEN can or will attempt to share analytic products with other law enforcement customers will help ensure that it is effectively carrying out its mission to support the investigation and prosecution of financial crimes.

We recommended that FinCEN clarify and communicate to law enforcement agencies the various types of complex analytic products it can provide and establish a process for informing law enforcement agencies about the availability of these products. FinCEN agreed with our recommendation and outlined the steps it would take to improve communication with law enforcement regarding the services, products, and capabilities FinCEN offers. In response to our report, FinCEN officials stated that they would compile an inventory of analytic products historically produced, products FinCEN should produce, and products requested by law enforcement. FinCEN officials reported that FinCEN would consult with law enforcement partners to refine its recommendations and then categorize and describe the types of analytic products for law enforcement. In April 2010, we obtained updated information from FinCEN on the status of its efforts to address our recommendations.
Specifically, FinCEN officials stated that its Office of Law Enforcement Support (OLE) created a draft “Menu of Products and Services,” which is intended to clarify the types of products and services FinCEN’s analytical operation can provide. According to FinCEN officials, OLE also created a draft “Menu of Resources,” which describes the data sources and other tools available to FinCEN analysts in the course of their analytical support operations. These officials explained that, while these documents are still in draft form, once finalized they will be distributed to FinCEN’s law enforcement customers through FinCEN’s Secure Outreach Portal, on its intranet, and through direct and e-mail contact between FinCEN personnel and external agencies.

Defining the types of requests for complex analytic support that FinCEN will accept could also help law enforcement better utilize FinCEN’s expertise in analyzing the BSA data. While FinCEN has informed law enforcement that it is now focusing the support it provides predominantly on those requests that it considers to be for complex analytic support, we found that it could better inform law enforcement about its decision-making process regarding which requests it will accept or reject. Law enforcement agencies may submit requests for complex analysis in support of specific investigations; however, in interviews with officials from FinCEN’s top five federal law enforcement customers, liaisons from two of these agencies stated that they did not fully understand what types of cases FinCEN is willing and able to support. Furthermore, in response to an open-ended survey question on FinCEN’s analytic support, officials from two other law enforcement agencies reported that they do not fully understand FinCEN’s decision-making process for accepting or rejecting requests for support.
These agencies indicated that while they understand that FinCEN has limited staff and resources to dedicate to analytic support, FinCEN has not been consistent in responding to their requests for support and does not always provide explanations why specific requests were rejected. In addition, in the internal report generated by ALD staff in August 2008, ALD officials acknowledged confusion among law enforcement customers about the types of requests FinCEN will accept, as well as law enforcement agencies’ concern that FinCEN does not sufficiently explain the reasons for declining specific requests for support. Senior officials acknowledged the report’s findings and as a first step, reorganized ALD in October 2009 in order to realign resources to better meet law enforcement’s needs. For example, FinCEN officials reported that they created a new office within ALD that is responsible for providing proactive analysis of BSA data and communicating regularly with law enforcement agents in the field. The officials stated that they believe the creation of this office will allow them to leverage analytical assets and abilities across FinCEN to better inform all of their partners within the law enforcement, intelligence, regulatory, and financial communities. ALD also identified the development and implementation of processes to improve communication with its law enforcement customers as a 2010 priority. We recommended that FinCEN complete a plan, including identifying the specific actions FinCEN will take to better assess law enforcement needs, and make the division’s operations more transparent to FinCEN’s law enforcement customers. This plan should include a mechanism for FinCEN to communicate to law enforcement agencies its decision-making process for selecting complex analytic products to pursue and why FinCEN rejects a request. 
FinCEN agreed with our recommendation and stated that in October 2009, it began an effort to address communication with law enforcement on three levels: analytical products, workflow process, and outreach. The teams assessing workflow processes and outreach efforts will make recommendations that include provisions for better assessment of law enforcement needs and more insight into FinCEN’s decision making on complex analytical products. In April 2010, FinCEN officials reported that they had taken steps to collect information about law enforcement customers’ priorities, needs, and plans. For example, FinCEN officials reported plans to create a survey to capture law enforcement agencies’ specific investigative focus and needs. Furthermore, the officials stated that personnel from the Office of Law Enforcement Support, working in consultation with law enforcement representatives, drafted a new data collection form for documenting requests for analytic support from law enforcement. FinCEN officials also reported that they have established a process for reviewing and responding to requests and informing the requester of FinCEN’s final decision. According to FinCEN officials, once requests have been reviewed, completed forms will be scanned and retained for future reference so that requesters may be informed as to why requests were accepted or denied.

Actively soliciting input on the development of complex analytic products could help FinCEN enhance their value to law enforcement agencies. While FinCEN communicates with its law enforcement customers about a variety of issues, we reported that the agency could enhance the value of its complex analytic work by more actively soliciting law enforcement’s input about ongoing or planned analytic work. In interviews with officials from FinCEN’s top five federal law enforcement customers, liaisons from all five agencies reported that FinCEN does not consistently seek their input about ongoing or planned analytic work.
Four of the liaisons stated that, as a result, they do not have regular opportunities to provide FinCEN with meaningful input about what types of products would be useful to them, potentially creating a gap between the products the agency generates and the products that its law enforcement customers need and want. Similarly, three other law enforcement liaisons noted that FinCEN does not provide them with regular opportunities to make proposals regarding the types of complex analytic products FinCEN should undertake. According to FinCEN officials, while the agency primarily relies on ad hoc communication with law enforcement agencies—such as talking with law enforcement representatives located on-site, with law enforcement representatives at conferences, or with individual agents in the field—FinCEN does not have a systematic process for soliciting input from law enforcement agencies on the development of its complex analytic work. In their August 2008 internal report, ALD officials acknowledged the concerns of its law enforcement customers regarding their lack of opportunities to provide input on FinCEN’s planned complex analytic work, and that FinCEN does not always solicit or incorporate law enforcement input in the selection of these products. As a solution, the internal report recommended that the law enforcement roundtable be used as a forum to discuss proposals for analytic products with FinCEN’s law enforcement customers. While this is a productive step, relying solely on the roundtable may not allow opportunities for some of FinCEN’s other law enforcement stakeholders to provide input because the roundtable is typically only attended by federal law enforcement customers. Furthermore, not all of FinCEN’s federal law enforcement customers are able to regularly attend these meetings. 
FinCEN does use annual surveys and feedback forms to obtain feedback from law enforcement on the usefulness of some completed products, although these surveys and forms are not designed to obtain detailed information on the full range of services and products FinCEN provides. For example, the annual surveys do not cover other analytic products such as FinCEN’s strategic analysis reports or its technical reference guides. Actively soliciting stakeholder input and providing transparency with regard to decision making are GAO-identified best practices for effectively meeting stakeholder needs. Incorporating these best practices could help FinCEN maximize the usefulness of its support. FinCEN officials emphasized that law enforcement also has a responsibility to provide constructive input on FinCEN’s services and products. While we recognize that communication between FinCEN and its law enforcement customers is a shared responsibility, actively soliciting stakeholder input will allow FinCEN to capture stakeholder interests and better incorporate law enforcement perspectives into the development of complex analytic products. As a result, we recommended that FinCEN establish a systematic process for actively soliciting input from law enforcement agencies and incorporating this input into the selection and development of its analytic products. FinCEN agreed with this recommendation and outlined efforts it plans to undertake in response to our findings. In October 2009, according to FinCEN officials, ALD established an Office of Trend and Issue Analysis (OTI) which is to focus on the development of strategic-level analysis of Bank Secrecy Act data. FinCEN officials also reported that ALD reassigned a number of its field representatives to OLE in order to better utilize their experience and to enhance communication with law enforcement customers. 
Finally, FinCEN stated that it also plans to design an institutional process for collecting the kind of information required to gain broader insight into its law enforcement partners’ priorities. In providing updates on their efforts to address our recommendations, FinCEN officials stated that they are making a concerted effort to engage their law enforcement customers at a variety of organizational levels to determine their key priorities and how FinCEN can best support their priorities and strategic goals. FinCEN has taken initial steps to more actively solicit law enforcement input on proposed regulatory actions, but lacks a mechanism for collecting sensitive information about these actions. Regulatory changes instituted by FinCEN can affect the content or structure of BSA data used in law enforcement investigations as well as law enforcement’s efforts to indict and prosecute financial crimes. However, we reported that liaisons from four of FinCEN’s top five federal law enforcement customers reported concerns that their agencies do not have sufficient opportunities to provide input when FinCEN is considering proposed regulatory changes. The internal report ALD generated in August 2008 also recognized that changes to BSA regulations have the potential to alter the kind of information that financial institutions report. The report also acknowledged federal law enforcement agencies’ concerns that FinCEN does not generally engage them in the identification and resolution of regulatory issues that might influence law enforcement operations. According to senior FinCEN officials, the agency recognizes the need to do a better job of obtaining law enforcement input on proposed regulatory changes in the future and did so in one recent case.
Specifically, in developing regulations in 2009 related to stored value cards, such as prepaid debit cards and gift cards, FinCEN held multiple meetings with representatives from its top five federal law enforcement customers specifically designed to obtain their input and provide recommendations on developing the proposed regulation. FinCEN also used the law enforcement roundtable to inform agencies about the planned regulatory changes. FinCEN’s efforts to actively solicit law enforcement input in this case are encouraging, and continuing such efforts would help ensure that law enforcement input is considered before regulatory changes are made. Once FinCEN has decided to move forward with a proposed regulatory change, it follows the process laid out in the Administrative Procedure Act (APA) for obtaining official comments on the proposal from interested stakeholders including regulators, financial institutions, and law enforcement agencies. The APA prescribes uniform standards for rulemaking, and most federal rules are promulgated using the APA-established informal rulemaking process, also known as “notice and comment” rulemaking. Generally, a notice of proposed rulemaking (NPRM) is published in the Federal Register announcing an agency’s intent to promulgate a rule to the public. However, we reported that liaisons from four of FinCEN’s top five federal law enforcement customers reported that the public record is not always the most appropriate venue for providing comments on proposed regulatory changes because their comments often contain law enforcement sensitive information. According to these officials, raising these concerns in a public forum may compromise key investigative techniques or strategies used in ongoing investigations. According to FinCEN officials, at the time of our review, they did not have a systematic process for soliciting law enforcement-sensitive comments on proposed regulatory changes in a nonpublic docket.
The importance of stakeholder input in the process of proposing regulatory changes is well established—it is the basis for the public comment period in the NPRM process. In order to improve FinCEN’s efforts to receive important information necessary to making decisions about proposed regulatory changes, we recommended that FinCEN develop a mechanism to collect law enforcement sensitive information from law enforcement agencies during the public comment period of the NPRM process. FinCEN agreed with our recommendation and stated that it would determine and implement appropriate ways to communicate to the law enforcement community its ability to receive and use law enforcement sensitive information in this context. In April 2010, FinCEN officials stated that they have developed an approach for collecting law enforcement sensitive information during the public notice and comment period of the NPRM process without making the comments publicly available. According to FinCEN officials, FinCEN will advise law enforcement, through the law enforcement liaisons, that they may provide law enforcement sensitive information at the time of publication of each NPRM and inform them that FinCEN will not post those comments or make them publicly available. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have. For questions about this statement, please contact Eileen R. Larence at (202) 512-8777 or larencee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact above, individuals making key contributions to this statement include Kirk Kiester, Assistant Director; Samantha Carter, and Linda Miller. Additionally, key contributors to our December 2009 report include Hugh Paquette, Miriam Hill, David Alexander, George Quinn, Jr., Billy Commons, Jan Montgomery, and Sally Williamson. 
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Financial investigations are used to combat money laundering and terrorist financing, crimes that can destabilize national economies and threaten global security. The Financial Crimes Enforcement Network (FinCEN), within the Department of the Treasury, supports law enforcement agencies (LEAs) in their efforts to investigate financial crimes by providing them with services and products, such as access to financial data, analysis, and case support. This statement discusses the extent to which the law enforcement community finds FinCEN's support useful in its efforts to investigate and prosecute financial crimes. This statement is based on work GAO completed and issued in December 2009. In December 2009, we reported that the majority of 25 LEAs GAO surveyed found FinCEN's support useful in their efforts to investigate and prosecute financial crimes, but FinCEN could enhance its support by better informing LEAs about its services and products and actively soliciting their input. Of the 20 LEAs that responded to a question GAO posed about which FinCEN services they found most useful, 16 LEAs cited direct access to Bank Secrecy Act data--records of financial transactions possibly indicative of money laundering that FinCEN collects--as the most valuable service FinCEN provides. Additionally, 11 federal LEAs cited a tool that allows federal LEAs to reach out, through FinCEN, to financial institutions nationwide to locate financial information related to ongoing investigations as a key service offered by FinCEN.
To further enhance the value and relevance of its analytic work to LEAs, FinCEN has sought to increase development of complex analytic products, such as reports identifying trends and patterns in money laundering. Sixteen law enforcement agencies GAO surveyed reported that they generally found these complex analytic products useful. However, we reported that three of five LEAs that are among FinCEN's primary federal customers stated that FinCEN does not provide detailed information about the various types of complex analytic products it can provide. Three of FinCEN's primary customers also stated that they would like more information about when completed products become available. In December 2009, we recommended that FinCEN clarify the types of complex analytic products it can provide to LEAs. FinCEN agreed with our recommendation and in April 2010 outlined plans to improve communication with law enforcement regarding FinCEN's services, products, and capabilities. All five LEAs also reported that FinCEN does not actively seek LEAs' input about ongoing or planned analytic products, though four of these LEAs believed that doing so could improve the quality and relevance of the products FinCEN provides to its customers. We recommended that FinCEN establish a process for soliciting input regarding the development of its analytic products. FinCEN agreed with our recommendation and in April 2010 outlined a number of steps it plans to take to better assess law enforcement needs, including ongoing efforts to solicit input from LEAs. Finally, liaisons from four of FinCEN's top five federal LEAs reported that their agencies do not have sufficient opportunities to provide input when FinCEN is considering regulatory changes because their comments often contain sensitive information that may compromise investigative techniques or strategies used in ongoing investigations. 
We recommended that FinCEN develop a mechanism to collect sensitive information regarding regulatory changes from LEAs. In April 2010, FinCEN reported that it developed an approach for collecting sensitive information without making the comments publicly available.
The 1995 PRA reaffirms the principles in the original act and gives significant new responsibilities to OIRA and executive branch agencies. For example, the act requires OIRA to “oversee the use of information resources to improve the efficiency and effectiveness of governmental operations to serve agency missions,” and it makes more explicit agencies’ responsibilities in developing proposed collections of information and submitting them to OIRA for review. Like the original statute, the 1995 act requires agencies to justify any collection of information from the public by establishing the need and intended use of the information, estimating the burden that the collection will impose on the respondents, and showing that the collection is the least burdensome way to gather the information. Agencies must receive OIRA approval for each information collection request before it is implemented. The PRA also assigns OIRA other responsibilities, including information dissemination, statistical policy and coordination, records management, and information technology. Congress has also given OIRA other statutory responsibilities related to regulatory management. For example: The Unfunded Mandates Reform Act (UMRA) requires OIRA to collect agencies’ written statements describing the costs and benefits of their rules and to forward those statements to the Congressional Budget Office. UMRA also required OIRA to establish pilot projects in at least two agencies to test regulatory approaches that reduce the burden on small governments and to submit annual reports to Congress detailing agencies’ compliance with the act. The Small Business Regulatory Enforcement Fairness Act of 1996 (SBREFA) requires OIRA to designate certain rules as “major” and therefore subject to a 60-day congressional review period.
SBREFA also amended the Regulatory Flexibility Act and required OIRA to serve on advocacy review panels involving rules that the Environmental Protection Agency (EPA) and the Occupational Safety and Health Administration (OSHA) intend to propose that the agencies believe will have a significant economic effect on a substantial number of small entities. Section 645(a) of the 1997 Treasury, Postal Services, and General Government Appropriations Act required OIRA to submit to Congress by September 30, 1997, a report providing estimates of, among other things, the total annual costs and benefits of federal regulatory programs. In the equivalent appropriations act for fiscal year 1998, Congress repeated the requirement for another such report by September 30, 1998. OMB as a whole also has statutory responsibilities that are related to OIRA’s roles in the PRA. For example, under the Government Performance and Results Act of 1993 (the Results Act), OMB is charged with overseeing and guiding agencies’ strategic and annual performance planning and reporting, and it is responsible for preparing an annual governmentwide performance plan that presents a single cohesive picture of federal performance goals. The Results Act also calls for OMB to review agencies’ performance in view of the results the agencies are achieving with the resources they are given. The governmentwide performance plan that the Results Act requires OMB to prepare should, in part, reflect the governmentwide IRM strategic plan that the PRA requires OIRA to prepare. Similarly, OIRA’s reviews of agencies’ IRM activities under the PRA are logically related to OMB’s reviews of agencies’ performance and resource use under the Results Act.
Also, the Results Act requires OMB, like other federal agencies, to prepare its own strategic and annual performance plans and, beginning no later than March 31, 2000, to report to Congress annually on its progress toward achieving the goals in its annual performance plan for the previous fiscal year. Agencies’ performance plans are to establish connections between their long-term strategic plans and the day-to-day activities of managers and staff. The annual program performance reports are to discuss the extent to which agencies are meeting annual performance goals and the actions needed to achieve or modify those goals that have not been met. Congress can use these plans and reports to determine how agencies are carrying out their statutory missions. The Clinger-Cohen Act of 1996, which amended parts of the PRA, also gave OIRA significant leadership responsibilities in supporting agencies’ efforts to improve their information technology management practices. Shortly after the passage of the act, we reported that OMB faced a number of challenges in this area, one of which was to develop recommendations for the president’s budget that reflect an agency’s actual track record in delivering mission performance for information technology funds expended. We specifically recommended that OMB, among other things, clearly show what improvements in mission performance have been achieved for information technology investments. In addition to these statutory responsibilities, two executive orders have made OIRA responsible for providing overall leadership of other executive branch regulatory activities and for reviewing executive departments’ and agencies’ proposed and final regulations before they are published in the Federal Register.
Executive Order 12291, issued in 1981 shortly after the original PRA was enacted, gave OMB the authority to review all new regulations issued by executive departments and agencies (other than independent regulatory agencies) for consistency with administration policies. In 1993, that order was revoked and replaced by Executive Order 12866, but the new order reaffirmed OMB’s responsibilities for regulatory review and leadership. The order specifically stated that OIRA is the repository of expertise concerning regulatory issues, including matters that affect more than one agency. In calendar years 1995 through 1997, OIRA staff members reviewed approximately 500 significant proposed and final rules each year from executive departments and agencies pursuant to Executive Order 12866. The order also gives OIRA other responsibilities, including convening a regulatory working group comprising representatives of major regulatory agencies. With both statutory and executive order responsibilities, OIRA plays a dual role in the management of federal regulatory, paperwork, and information policies. It must carry out the responsibilities that Congress has given it through legislation while, at the same time, serving as an advisor to and implementor of presidential policy initiatives. OMB as a whole must similarly balance its statutory responsibilities and its responsibilities as a staff office to the president. We have issued a number of reports on the PRA since it was first enacted in 1980, several of which have focused on OIRA’s responsibilities. For example, in 1983 we concluded that OIRA had made only limited progress in several IRM-related areas of the act and that the primary reason was the decision to assign OIRA primary responsibility for the administration’s regulatory reform program without additional resources. 
In that report, we recommended that the OMB Director identify in the agency’s budget program and financing schedule the resources needed to implement the PRA and assess the feasibility of assigning existing resources to address the act’s requirements. We also suggested that Congress consider requiring OMB to (1) identify the resources it needed to implement the act and report annually on those expenditures, (2) provide a separate appropriation for the PRA’s implementation, or (3) provide a separate PRA appropriation and prohibit OIRA from performing any duties other than those required in the act. In 1989, we reported that OIRA had established a formal process to review the 3,000 to 4,000 information collection requests it received each year, but those policies were not being consistently applied. We also noted that OIRA almost always approved requests from agencies with established review procedures, and we recommended that OIRA delegate primary review responsibility to senior officials in those agencies. More recently, in both 1996 and 1997, we testified on the implementation of selected features of the 1995 PRA. In both of our statements, we noted that the governmentwide burden-reduction goals contemplated in the PRA were unlikely to be met and that agencies often cited statutory constraints as the primary reason. For example, Internal Revenue Service (IRS) officials said that they would not be able to reduce their fiscal year 1995 paperwork totals by more than about 2 percent by the end of fiscal year 1998 unless major changes are made to the tax code. Because IRS has accounted for at least 75 percent of the government’s estimated burden-hour total in each year since 1989, we said it appeared unlikely that the federal government as a whole would meet the 25 percent burden-reduction goal contemplated in the act. As we noted in our June 1997 testimony, it is important to remember that some federal paperwork is necessary and can serve a useful purpose. 
Information collection is one method by which agencies carry out their missions. For example, IRS needs to collect information from taxpayers and their employers to know the amount of taxes owed. EPA and OSHA must collect information to know whether the intent of such statutes as the Clean Air Act and the Occupational Safety and Health Act is being achieved. The Results Act may require agencies to collect information that they had not previously collected in order to demonstrate their effectiveness. However, the Results Act may also help agencies eliminate certain paperwork requirements and keep the amount of paperwork as low as possible by focusing agencies’ information collection actions on only those collections needed to accomplish their missions. The objectives of our review were to assess how OIRA has implemented three of its information collection responsibilities under the PRA: We looked at how OIRA reviews and controls paperwork, including (1) reviewing and approving agencies’ information collection requests; (2) establishing and overseeing guidance for estimating information collection burden; (3) setting annual governmentwide goals for the reduction of that burden by at least 10 percent in fiscal years 1996 and 1997, 5 percent during the next 4 fiscal years, and setting annual agency goals that reduce paperwork to the “maximum practicable opportunity”; and (4) conducting pilot projects to test alternative policies and procedures to minimize information collection burden. We examined OIRA’s development and oversight of federal IRM policies, including developing and maintaining a governmentwide IRM plan and periodically reviewing selected agency IRM activities to determine their ability to improve agencies’ performance and accomplish agencies’ missions. We looked at whether OIRA is keeping Congress and congressional committees fully and currently informed about major activities under the act.
To determine what actions OIRA had taken in these areas, we analyzed OIRA’s reports to Congress and other documents since the act passed in 1995; and we interviewed several OIRA officials and staff members, including the Acting Administrator. We then compared our understanding of OIRA’s actions in these areas with the PRA’s requirements and its legislative history. We also obtained OIRA staffing information from agency officials and data from the Regulatory Information Service Center (RISC) on the number of OIRA actions related to the information collection requests that it reviewed since the 1995 act was passed, including information on the types of requests submitted and the disposition of those reviews. To put these data in a larger perspective, we also obtained information on OIRA staffing and actions back to 1981, when OIRA was created by the original PRA. We focused our review solely on OIRA’s implementation of the specific responsibilities delineated in the objectives. We did not examine the implementation of OIRA’s other PRA responsibilities, including its responsibilities in the areas of federal information technology, records management, and statistical policies. Neither did we examine agencies’ information collection responsibilities under the act; the quality of OIRA’s information collection request reviews; or OMB’s or OIRA’s actions to develop information policies (e.g., OMB Circular A-130). Although OIRA’s role as a staff office to the president makes it unique in some respects, this study evaluates OIRA’s performance of specific statutory responsibilities for which it is accountable to Congress like any other agency. We conducted our review between January and May 1998 in accordance with generally accepted government auditing standards. At the conclusion of our review, we sent a draft of this report to OIRA for comment; its comments can be found at the end of this letter. 
The 1995 PRA assigns OIRA significant responsibilities for paperwork review and control, including (1) the review and approval of agencies’ proposed collections of information, (2) the establishment and oversight of guidance for estimating information collection burden, (3) setting governmentwide and agency-specific goals for the reduction of information collection burden, and (4) conducting pilot projects to test alternative policies and procedures to minimize information collection burden. In each of these areas, OIRA officials described certain actions that they had taken or that were ongoing that they believed were consistent with the overall intent of the PRA’s provisions. However, we believe that OIRA’s actions in several of these areas fell short of the act’s specific requirements. As figure 1 shows, OIRA is currently organized into five branches. Three of those branches (Commerce and Lands, Human Resources and Housing, and Natural Resources) are primarily responsible for the office’s paperwork and regulatory review functions. Certain OIRA staff within each of these branches, known as “desk officers,” are responsible for reviewing proposed information collections and proposed rules from specific agencies. For example, one desk officer in OIRA’s Commerce and Lands branch is primarily responsible for reviewing the regulatory and information collection proposals submitted by the Department of Transportation and the Federal Trade Commission. The two remaining OIRA branches (Information Policy and Technology Management and Statistical Policy) are primarily responsible for other functions assigned by the PRA. However, some staff in those branches review proposed information collections from certain agencies, and other staff may be involved in paperwork and regulatory reviews when called upon by staff in the other three branches. As shown in figure 2, OIRA had 77 employees when it was established in 1981.
However, by 1997, OIRA had decreased in size to 48 employees—a 38-percent reduction since 1981. As previously noted, not all of OIRA’s employees are directly involved in reviewing agencies’ information collection requests. Some employees in the agency’s Information Policy and Technology Management and Statistical Policy branches do not review proposed information collections, and others are in support or managerial positions. In 1989, we reported that OIRA employed about 35 desk officers to review agencies’ information collection submissions each year. OIRA officials told us that since the PRA was passed in 1995, between 20 and 25 desk officers have been primarily responsible for reviewing proposed information collections. In 1997, OIRA had 22 desk officers reviewing submissions—about a 35-percent reduction from the level in 1989. Section 3504(c)(1) of the PRA states that OIRA shall “review and approve proposed agency collections of information.” The act also says that OIRA must complete its review of agencies’ information collection requests within 60 days of the date that they are submitted to OIRA. However, the act does not prescribe a single way of reviewing proposed information collections. Therefore, OIRA desk officers have considerable statutory discretion in determining how much time and attention to devote to different parts of the submission and in deciding whether to approve the proposed collection or dispose of it in some other way. OIRA desk officers told us that the agencies requesting OIRA approvals for proposed collections of information initiate OIRA’s review process by submitting a copy of the proposed collection, an OMB form summarizing how the proposed collection meets the PRA requirements, and a written supporting statement providing more details about the collection. They said this information is initially sent to OIRA’s docket library, where it is logged in and forwarded to the relevant branch and desk officer. 
At the same time, the submitting agency issues a notice in the Federal Register stating that OIRA’s approval is being sought, thereby providing the public with an opportunity to comment on the proposed collection. Information collection requests awaiting OIRA’s approval are also posted on the agency’s electronic bulletin board. The OIRA desk officer then reviews the information collection request and determines whether it should be approved. OIRA desk officers told us that some information collection requests require greater effort and take more time to review than others—e.g., those that are new submissions (as opposed to renewals of existing information collections); that impose heavy paperwork burdens; and that relate to an administration initiative (e.g., welfare reform). The desk officers also said that information collection requests that receive only a limited review at the agencies also require more intensive review at OIRA. For example, they said that the Department of Agriculture has only one person responsible for reviewing proposed information collections for the entire Department. As a result, they said that they have to review the Department’s information collection requests more intensively than submissions from other agencies that have devoted more staff to information collection reviews. If the request is a new information collection, the OIRA desk officers said that they first review any relevant statutes to determine whether the proposed collection is required to fulfill the purposes of the statutes and whether other less burdensome options could meet those purposes. They also said they focus on how the proposed collection meets each of the PRA requirements summarized on the accompanying form. 
The desk officers told us that they often review the information collections in the context of the agencies’ programs and missions, and they are beginning to consider whether the proposed collections are linked to strategic plans that the agencies recently submitted under the Results Act requirements. The desk officers also said that a key part of their review is an attempt to validate agencies’ burden-hour estimates. Some of the desk officers said that they do so by attempting to complete the proposed information collections as a respondent, keeping track of how long it takes to collect and provide the information. However, other desk officers said that they use other approaches to validate agencies’ burden-hour estimates. All of the desk officers whom we spoke to said they frequently pose questions to the agencies about their proposed information collections, and any memoranda or letters related to those questions are placed in OIRA’s public docket. They also said that they review agencies’ summaries of public comments regarding the proposed collections and any public comments sent directly to OIRA. However, they also said that the public frequently submits no comments to either the agencies or OIRA. At the end of OIRA’s review process the desk officers said that their initial determinations are reviewed by the branch chief and, if necessary, the Deputy Administrator. They then notify the agency proposing the information collection of the disposition of its request, and the disposition is posted to OIRA’s electronic bulletin board. According to the PRA, information collection requests may be approved for up to 3 years, at which time they must be resubmitted to OIRA for approval if the agency wishes to continue to collect the information. The desk officers said that they typically complete their reviews of proposed information collection requests within the 60 days permitted in the PRA. 
They also said that their day-to-day work reviewing agencies’ information collection requests did not substantially change as a result of the 1995 revisions to the PRA. Section 3511 of the PRA requires OIRA to establish and maintain a Government Information Locator Service (GILS) to assist agencies and the public in locating information and to promote information sharing and equitable access by the public. OIRA staff with whom we spoke said they do not use GILS to identify potentially overlapping agency information collection requests. They said that they were generally aware of potential information collection overlaps, and if unsure they would consult other desk officers or other OMB staff. RISC’s data on OIRA’s activities under the PRA are based on the number of actions the agency takes pursuant to agencies’ information collection requests. As shown in figure 3, the total number of OIRA actions has fluctuated during the past 17 years, but it has generally been between 3,000 and 5,000 actions each year. The figure also illustrates how those OIRA actions were distributed across the various types of information collection request submissions (e.g., new collections; revisions; and other types of submissions, such as extensions and reinstatements). The number of OIRA actions on new information collection requests has declined since the first several years of the act. Within the last several years there has been an increase in the number of actions in the “other” category, particularly requests for extensions of original approvals and reinstatements of elapsed information collections. Figure 4 also shows the number of OIRA actions each year between 1981 and 1997, and it shows how OIRA acted upon each of the agencies’ information collection requests. 
The figure illustrates that the majority of OIRA actions in each year were approvals, followed by corrections and other actions (e.g., disapprovals, short-term extensions of existing approvals, and agency withdrawals of requests). The number of “other” dispositions increased in the 2 years following the enactment of the PRA in 1995, due largely to an increase in the number of short-term extensions of information collections for less than the full 3 years permitted in the act. However, the number of OIRA disapprovals of proposed collections of information declined from more than 200 in 1981 and 1982 to fewer than 15 in each year since 1993. OIRA officials and staff told us that this decline in the number of disapprovals reflects the fact that agencies have learned over time what the PRA requires and also illustrates a change in the way in which OIRA and the agencies interact. They said that during the Reagan and Bush administrations, OIRA’s interactions with the agencies were more contentious; as a result, more information collection requests were disapproved, resubmitted with changes made, and then approved. However, OIRA officials said the Clinton administration has emphasized working collegially with the agencies to resolve differences, so the number of initial disapprovals has declined. Proposed information collections that, in the past, had been initially disapproved are now frequently “approved with changes.” They also pointed out that the increased number of short-term extensions reflects a measure of OIRA concern about the proposed collection. The total number of PRA actions that OIRA has taken each year has been relatively constant since the original PRA was enacted, but (as previously noted) the number of OIRA desk officers available to review proposed collections of information declined during this period. Therefore, the PRA-related workload per OIRA desk officer has increased since the 1980s. 
One OIRA desk officer told us that she typically has between 20 and 30 proposed information collections on her desk at any one time. However, she pointed out that some of these proposals are renewals of previously approved information collections that do not require substantial effort. She said that she manages the workload through an informal “triage” system, in which proposed information collections are ranked in terms of the degree of attention required. Section 3504(c)(5) of the PRA requires OIRA to “establish and oversee standards and guidelines by which agencies are to estimate the burden to comply with a proposed collection of information.” In August 1995, OIRA issued final regulations that, among other things, reflect the changes that Congress made in the act regarding how the terms “collection of information” and “burden” are defined. For example, the preamble to the regulation notes that the 1995 act redefined burden to include the total time, effort, or financial resources expended to generate, maintain, retain, disclose, or provide information to a federal agency. However, the preamble and the regulation contain only general instructions to agencies on how they should estimate the burden associated with their information collections. OIRA is in the process of developing more detailed guidance for agencies and OIRA desk officers to use in implementing the PRA. Although the guidance was still in draft when we developed this report, OIRA officials said that it has been widely used by both agencies and OIRA staff since early 1997. The guidance contains a section on burden that specifically references the OIRA responsibilities in section 3504(c)(5) of the act and describes the various types of activities that the act says constitute burden. That section of the guidance also references an appendix with suggested worksheets that are designed to help an agency calculate burden-hours. 
The appendix describes actions agencies could take to estimate (1) burden-hours per respondent, (2) aggregate burden-hours, (3) capital and other nonlabor costs per respondent, and (4) aggregate capital and other nonlabor costs. Although the guidance indicates that agencies should estimate the time it takes for respondents to undertake various elements of paperwork activity (e.g., reviewing instructions, searching data sources, and completing and reviewing the collection of information to arrive at the number of burden-hours per respondent), it does not clearly indicate how agencies are to arrive at these estimates or provide examples of how different agencies have estimated the burden associated with particular information collections. OIRA officials and staff said that there are differences both between and within departments and agencies in how they estimate the burden associated with their information collections. For example, they said that IRS estimates the burden associated with its information collections partly on the basis of the number of lines on the forms to be completed, but other agencies calculate burden in ways unrelated to the number of lines on each form. OIRA officials said they believe it is less important that agencies measure their information collections in the same way than that the measurements are consistent over time within particular agencies. Consistency over time, they said, permits OIRA to determine whether the burden associated with specific agencies’ information collections is increasing or decreasing. Measuring the number of burden-hours associated with individual information collections or for an agency as a whole is extremely difficult, as illustrated by agencies’ reestimates of their burden-hour totals and the magnitude of those adjustments.
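The worksheet arithmetic that the guidance describes can be sketched as a simple calculation. The activity names, times, and respondent counts below are hypothetical illustrations, not figures taken from OIRA's guidance:

```python
# Hypothetical burden-hour worksheet in the spirit of OIRA's draft guidance.
# All activity times and respondent counts are illustrative only.

def burden_hours_per_respondent(activity_minutes):
    """Sum the minutes for each paperwork activity; return hours per response."""
    return sum(activity_minutes.values()) / 60.0

def aggregate_burden_hours(per_respondent_hours, respondents, responses_per_year):
    """Total annual burden: hours per response x respondents x annual responses."""
    return per_respondent_hours * respondents * responses_per_year

activities = {
    "reviewing instructions": 15,
    "searching data sources": 30,
    "completing the form": 45,
    "reviewing the completed collection": 30,
}
per_resp = burden_hours_per_respondent(activities)                  # 2.0 hours
total = aggregate_burden_hours(per_resp, respondents=10_000,
                               responses_per_year=1)                # 20,000 hours
print(per_resp, total)  # 2.0 20000.0
```

A parallel worksheet for capital and other nonlabor costs would replace the per-activity minutes with per-respondent dollar amounts.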
For example, in 1989, IRS did a comprehensive reassessment of all of its existing data collections, resulting in a 3.4 billion hour increase in its burden-hour estimate. However, this change did not reflect any alteration in the actual paperwork burden felt by the public because only the measurement system used to produce this estimate was altered. A recent analysis of IRS’s current burden-hour estimate methodology concluded that the agency may be overstating businesses’ paperwork burden by nearly 400 percent. Agencies should review and, if necessary, revise their methods for estimating burden-hours. However, these large fluctuations in burden-hour estimates by IRS, an agency that constitutes 75 percent of the governmentwide total, illustrate a continuing need for clear guidance on how paperwork burden can be measured. Although a single methodology may not be feasible for all agencies or for all information collections, the PRA clearly contemplated that OIRA would play a critical role in the development of governmentwide guidance and the achievement of reliable and valid measurements of paperwork burden. Although OIRA has taken some steps in this area, it has not fully played that role. One of the PRA’s key features is the requirement in section 3505(a) of the act that OIRA, in consultation with the agency heads, set annual governmentwide goals for the reduction of information collection burdens by at least 10 percent in fiscal years 1996 and 1997 and by at least 5 percent in the succeeding 4 fiscal years. The act also requires OIRA to establish annual agency goals to (1) reduce information collection burdens imposed on the public that “represent the maximum practicable opportunity in each agency” and that are consistent with improving agencies’ review processes; and (2) improve IRM in ways that increase the productivity, efficiency, and effectiveness of federal programs.
In our June 1996 testimony on the implementation of the PRA, we said that OIRA had not set either governmentwide or agency-specific burden-reduction goals as required by the act. OIRA officials told us at the time that they planned to set the fiscal year 1996 governmentwide burden-reduction goal when they published their information collection budget (ICB) later that year, and they said that the goal would be the 10-percent reduction for fiscal year 1996 required in the act. They also said that the agency goals would reflect the end of fiscal year 1996 burden-hour estimates that the agencies provided in their ICB submissions—essentially, what the agencies expected their burden-hour totals would be by the end of the fiscal year—unless changed as a result of OIRA review. We noted in our testimony that the weighted average of the agencies’ burden-reduction projections for fiscal year 1996 was about 1 percent governmentwide. The act’s legislative history recognized, however, that “individual agency goals negotiated with OIRA may differ depending on the agency’s potential to reduce the paperwork burden such agency imposes on the public. Goals negotiated with some agencies may substantially exceed the Government-wide goal, while those negotiated with other agencies may be substantially less.” In August 1996, OIRA formally set agency-specific burden-reduction goals for fiscal year 1996 by publishing the ICB in its Information Resources Management Plan of the Federal Government. The agencies estimated that in the aggregate, their burden-hour totals at the end of fiscal year 1996 (less than 2 months later) would be less than 1 percent below their totals at the end of fiscal year 1995. However, in a subsequent ICB the agencies estimated that the fiscal year 1996 reductions were about 2.6 percent—still far short of the 10 percent governmentwide burden-reduction goal contemplated in the act for that year.
OIRA officials told us at the time that OIRA had satisfied the PRA’s requirement that it set governmentwide burden-reduction goals by repeating the act’s requirements in the ICB. In January 1997, OMB issued Bulletin 97-03, which instructed executive departments and agencies to prepare and implement ICBs and information streamlining plans that would include “goals and timetables to achieve, by the end of 1998, a cumulative burden reduction of 25 percent from their 1995 year-end level, consistent with the governmentwide burden-reduction goals in the Paperwork Reduction Act of 1995.” OIRA officials said that they decided to set a 3-year goal instead of the year-to-year goals required in the act because many approved collections are approved for 3 years, and it would take that long for associated paperwork reductions to be implemented. Although the January 1997 bulletin indicated that each agency’s burden-reduction goal should be consistent with the 25 percent governmentwide goal envisioned in the PRA by the end of fiscal year 1998, OIRA officials again told us during this review that the act does not require that the agencies’ goals total to 25 percent by that date. In our June 1997 testimony, we noted that OIRA had not published the ICB for fiscal year 1997 and, therefore, had not formally established agencies’ burden-reduction goals for that year. We also noted that all three of the regulatory agencies that we examined in that review (EPA, OSHA, and IRS) said that the statutory framework underlying their regulations and/or continued actions by Congress requiring the agencies to produce regulations were major impediments to eliminating paperwork burden. For example, IRS said that it could not reach a 25 percent burden-reduction goal of eliminating more than 1 billion burden-hours of paperwork under its current statutory framework and still carry out its mission. 
Of the three agencies, only OSHA indicated that it would achieve the 25 percent burden-reduction goal by the end of fiscal year 1998. In September 1997, OIRA set agency-specific burden-reduction goals for fiscal year 1997 by publishing the ICB in its Reports to Congress Under the Paperwork Reduction Act of 1995. The agencies’ aggregate burden-hour estimate for the end of fiscal year 1997 (less than 1 month later) was less than 2 percent below the total for fiscal year 1996. In combination with the reductions in the previous fiscal year, the agencies estimated that their total reductions by the end of fiscal year 1997 from the fiscal year 1995 baseline would be about 4.4 percent. Therefore, in order to meet the 25-percent reduction by the end of fiscal year 1998 that was contemplated in the PRA and indicated in OMB’s January 1997 bulletin, federal agencies would have to reduce their paperwork burden by more than 20 percent during fiscal year 1998. This scenario is unlikely because, as previously noted, the agency that accounts for 75 percent of the governmentwide total (IRS) has indicated that it can reduce its burden by only about 2 percent by the end of fiscal year 1998. OIRA officials told us during this review that the ICB establishing burden-reduction goals for fiscal year 1998 will not be published until later this year. Section 3505(a)(2) of the PRA requires OIRA to conduct pilot projects with selected agencies and nonfederal entities on a voluntary basis to test alternative policies, practices, regulations, and procedures to fulfill the purposes of the act, particularly with regard to minimizing the federal information collection burden. OIRA officials said that they have not formally established any pilot projects specifically for this purpose. However, they consider the three pilot projects used to satisfy UMRA’s pilot project requirement to also satisfy the PRA’s pilot requirement. 
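The size of the remaining fiscal year 1998 reduction follows from simple arithmetic. The sketch below uses the agencies' estimated 4.4-percent cumulative reduction through fiscal year 1997 against the 25-percent cumulative goal (figures are normalized to the fiscal year 1995 baseline):

```python
# Required fiscal year 1998 reduction to reach the 25-percent cumulative goal,
# given an estimated 4.4-percent cumulative reduction through fiscal year 1997.
baseline = 1.0                       # fiscal year 1995 burden-hour total (normalized)
end_fy97 = baseline * (1 - 0.044)    # 4.4 percent below the 1995 baseline
goal_fy98 = baseline * (1 - 0.25)    # 25 percent below the 1995 baseline

required_fy98_cut = (end_fy97 - goal_fy98) / end_fy97
print(f"{required_fy98_cut:.1%}")    # about 21.5 percent, i.e., more than 20 percent
```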
Section 207 of UMRA requires OMB to establish pilot projects in at least two agencies to test innovative, flexible regulatory approaches that reduce reporting and compliance burdens on small governments. However, as we noted in our February 1998 UMRA report, the pilots that OIRA identified as satisfying the UMRA requirements were not started because of UMRA. In fact, at least two of the pilots appear to have been initiated as a result of recommendations from the National Performance Review in September 1993—before either UMRA or the PRA was enacted. Furthermore, the UMRA pilots are confined to only one segment of the nonfederal population (small governments) that is required to provide information to or for federal agencies. OIRA officials noted that other projects were ongoing in certain agencies that could accomplish the underlying purpose of the PRA pilots, including (1) the Simplified Tax and Wage Reporting System, a joint project of IRS and the Department of Labor that would permit companies to electronically file all federal and state tax information at once; and (2) the International Trade Data System, an interagency effort led by the Department of the Treasury to design and build shared systems for gathering, distributing, and storing foreign trade data. Past funding for these two projects has supported research and small prototypes. The President’s budget for fiscal year 1999 asks for funds to begin full-scale development of these systems.
The 1995 PRA defined information resources as “information and related resources, such as personnel, equipment, funds, and information technology.” The act also defined information resources management as “the process of managing information resources to accomplish agency missions and to improve agency performance, including through the reduction of information collection burden on the public.” These new definitions emphasize the link between IRM and program outcomes and make agencies’ use of information resources consistent with the goals of the then recently enacted Results Act. The 1995 PRA refocused OIRA’s role on integrating information resources management with program management and concentrating on program outcomes as the standard for overseeing the efficiency and effectiveness of IRM. The 1995 act also stressed the linkage between IRM and the reduction of paperwork burden on the public. Using the 1995 PRA’s definition of IRM and its emphasis on the use of information resources to achieve and measure progress toward outcomes, in this portion of our review we focused on two of OIRA’s specific IRM-related PRA responsibilities: (1) its responsibility to develop and maintain a governmentwide IRM strategic plan and (2) its responsibility to periodically review selected agency IRM activities to ascertain the efficiency and effectiveness of such activities to improve agency performance and the accomplishment of agency missions. We concluded that although OIRA has undertaken a number of IRM-related activities, the agency has not fully satisfied its PRA responsibilities. 
Section 3505(a)(3) of the PRA requires OMB, in consultation with other agencies, to “develop and maintain a Governmentwide strategic plan for information resources management.” The act states that the plan should include (1) a description of the objectives and means by which the federal government shall apply information resources to improve agency and program performance; (2) plans for reducing information burdens on the public, enhancing public access to and dissemination of information, and meeting the information technology needs of the federal government; and (3) a description of agencies’ progress in applying IRM to improve their performance and the accomplishment of their missions. OIRA officials told us that their August 1996 report Information Resources Management Plan of the Federal Government satisfied the requirement for an IRM strategic plan for fiscal year 1996—the first year after the 1995 PRA was enacted. The OIRA report was similar in content to other documents that OIRA had published for several years before the enactment of the PRA and contained four principal parts: (1) a discussion of federal obligations for information technology resources (i.e., computer and telecommunications hardware, software, and services); (2) the ICB for fiscal year 1995; (3) a brief discussion of federal information dissemination activities; and (4) a brief discussion of agencies’ compliance with the information policy provisions of OMB Circular Number A-130. OIRA officials also told us that their September 1997 publication Reports to Congress Under the Paperwork Reduction Act of 1995 satisfied the PRA’s requirement for an IRM strategic plan for fiscal year 1997. Similar in format to OIRA’s August 1996 report, the September 1997 report contained sections on federal information technology obligations, the ICB for fiscal year 1996, federal information dissemination activities, and agencies’ compliance with OMB Circular A-130. 
OIRA officials also told us that the CIO Council’s January 1998 Strategic Plan met the PRA requirement that it develop a governmentwide IRM strategic plan. The CIO Council that developed the report was chaired by the OMB Acting Deputy Director for Management, and the foreword to the strategic plan notes the requirement in section 3505(a)(3) of the act that OIRA develop a governmentwide IRM strategic plan. The CIO Council’s strategic plan contained sections on (1) defining an interoperable federal information architecture, (2) ensuring security practices that protect government services, (3) leading the federal year 2000 conversion effort, (4) establishing sound capital planning and investment practices, (5) improving the information technology skills of the federal workforce, and (6) building relationships and outreach programs. The plan states that its primary purpose is “to articulate the Council’s vision and strategic priorities for managing Federal resources over the long-term and to define its near-term commitments in beginning implementation.” Although a governmentwide IRM strategic plan can be structured in many ways (e.g., presenting highlights from different agencies or focusing on crosscutting issues), none of the reports that OIRA cited appear to have met all of the PRA requirements for such a plan. For example, although these reports contained a few examples of how agencies are using information technology, none of the reports clearly discussed the objectives and means by which the federal government would use all types of information resources to improve agency and program performance. Although both of the OIRA reports contained examples of how agencies had reduced information collection requirements, neither report described agencies’ progress in applying IRM to improve their performance or mission accomplishment.
As we noted in our testimony last October, we believe that the strategic goals agreed to by the CIO Council (and later included in its strategic plan) are the right set of issues to pursue regarding information technology management. However, we also noted that the CIO Council lacked a “visible yardstick” to provide an incentive for progress in meeting information management goals and demonstrating positive impact on the agencies’ bottom line performance. Also, the CIO Council’s strategic plan focused primarily on information technology issues, which the PRA indicates is only one part of information resources or IRM. In June 1997, we reported on five regulatory agencies’ efforts to focus on results and the factors that they believed assisted or impeded these efforts. Although officials from all five agencies said that they found it difficult to establish outcome-oriented program performance measures because of problems they experienced in collecting necessary data, several of the agencies had developed measures that we considered at least somewhat results oriented. For example, the Federal Aviation Administration measured progress toward its strategic goal of “system safety” by collecting such information as the number of fatalities per million passenger miles and the number of accidents and runway incursions that occurred each year. IRS assessed accomplishment of its strategic objective of improving customer service by collecting information on the rate at which taxpayer issues were resolved during the first contact with IRS. Also, OSHA has collected information on the number of accidents, injuries, and deaths within 80,000 workplaces in order to better target its enforcement activities. These kinds of information and performance measures are examples of how information resources can be used to direct, assess, and, ultimately, improve agencies’ performance. 
A governmentwide IRM strategic plan, however it is constructed, can highlight these kinds of efforts and encourage agencies to make greater use of information resources to accomplish their missions. Section 3513(a) of the PRA requires OIRA, in consultation with other agencies, to “periodically review selected agency information resources management activities to ascertain the efficiency and effectiveness of such activities to improve agency performance and the accomplishment of agency missions.” Agencies’ general IRM responsibilities are delineated in section 3506(b) of the act, which requires them to, among other things, (1) develop a strategic IRM plan that describes how IRM activities help accomplish agency missions; and (2) develop and maintain an ongoing process to “establish goals for improving IRM’s contribution to program productivity, efficiency, and effectiveness, methods for measuring progress towards those goals, and clear roles and responsibilities for achieving those goals.” OIRA officials and desk officers identified a number of activities that they believe constitute a review of agencies’ IRM activities. The desk officers said that they conduct those reviews as part of their analyses of agencies’ individual information collection requests and proposed regulations. Working with specific agencies over time, the desk officers said they develop an understanding of the agencies’ IRM activities that becomes part of the policy context in which they assess those requests and rules. OIRA officials also told us that their reviews of agencies’ activities under the Clinger-Cohen Act also satisfy the PRA requirement that OIRA review agencies’ IRM actions. 
They said that one of the most important activities required by Clinger-Cohen is the selection of agencies’ CIOs, and OIRA participates in that process to help ensure that the CIOs have access to agency heads, are qualified for the positions, and have written job descriptions that are consistent with the statutory requirements. Finally, OIRA officials said that they review agencies’ IRM activities as part of the budget development and execution process within OMB. They noted that the Clinger-Cohen Act requires OIRA to report annually on the “net program performance benefits achieved as a result of major capital investments made by executive agencies in information systems and how the benefits relate to the accomplishment of the goals of the executive agencies.” They said that the President’s budget for fiscal year 1999 satisfies this reporting requirement by linking agencies’ capital investments to agencies’ goals and activities under the Results Act. Overall, OIRA officials said that they view the agencies’ IRM responsibilities as including all of the agency responsibilities in section 3506 of the act, including not only the general IRM requirements in section 3506(b) but also the requirements in the other sections relating to collections of information and control of paperwork, information dissemination, statistical policy and coordination, records management, and privacy and security. They also said that the PRA requirement that OIRA review agencies’ IRM activities is part of the “daily life” of the agency and OMB. However, they said that they do not view this section of the act as requiring OIRA or OMB to undertake any special action or review.
Although OIRA officials said that they view agencies’ IRM responsibilities as including all of the requirements in section 3506 of the PRA, section 3506(b) specifically delineates what agencies must do “[w]ith respect to general information resources management.” Therefore, OIRA should, at a minimum, review agencies’ implementation of their responsibilities under section 3506(b) of the PRA. Also, although OIRA officials said that they review agencies’ IRM activities through a variety of vehicles, it is not clear how all of the vehicles that they mentioned relate to the two agency IRM requirements that we examined in section 3506(b). For example, OIRA’s participation in the selection of agencies’ CIOs and its review of agencies’ information system investments in the budget process do not constitute a review of agencies’ IRM strategic plans or the agencies’ goals for improving IRM’s contribution to program productivity, efficiency, and effectiveness. Also, OIRA does not require agencies’ individual information collection requests to present the agencies’ IRM strategic plans or IRM goals. Therefore, unless the agencies include that information on their own in the supplementary information, OIRA’s reviews of agencies’ information collection requests cannot satisfy its responsibilities to review agencies’ IRM activities. However, the desk officers indicated that they are beginning to consider whether the proposed collections are linked to agencies’ strategic plans under the Results Act. If so, these individual information collection requests can be viewed in the larger context of program effectiveness and agency mission accomplishment that the PRA envisioned. As noted previously, OMB’s reviews of agencies’ information technology investments during the budget process can link one element of agencies’ IRM activities to the agencies’ missions and performance.
However, OMB does not explicitly require agencies to present in their budget submissions an IRM strategic plan or to establish agencywide goals for improving IRM’s contribution to program productivity, efficiency, and effectiveness. Therefore, it is not clear how OMB’s reviews of agencies’ budget submissions constitute a review of what the PRA specifically identifies as agencies’ IRM responsibilities. Section 3514(a) of the PRA states that OIRA must “keep Congress and congressional committees fully and currently informed of the major activities under [the PRA],” and it requires OIRA to “submit a report on such activities to the President of the Senate and the Speaker of the House of Representatives annually and at such other times as [OIRA] determines necessary.” The PRA says that any such report must contain a description of the extent to which agencies have reduced information collection burdens on the public and should specifically include (1) a summary of accomplishments and planned initiatives; (2) a list of all violations of the act’s requirements; (3) a list of any increases in the collection of information burden (including the authority for each such collection); and (4) a list of agencies that did not reduce information collection burdens in accordance with the goals established in section 3505(a)(1), a list of the programs and statutory responsibilities of those agencies that precluded that reduction, and recommendations to assist those agencies to reduce their information collection burdens. The PRA also specifies that OIRA’s annual report must contain a description of the extent to which agencies have improved program performance and mission accomplishment through IRM. OIRA officials told us that their August 1996 Information Resources Management Plan of the Federal Government and their September 1997 Reports to Congress Under the Paperwork Reduction Act of 1995 have served as the primary vehicles by which they have satisfied the PRA’s reporting requirement.
As noted previously, the reports contained information on federal information technology obligations, the ICBs, federal information dissemination activities, and agencies’ compliance with OMB Circular A-130. They said that the ICBs included most of the information required under section 3514(a) of the PRA. For example, they noted that the ICB for fiscal year 1997 included the burden reduction goals in the PRA, the overall and agency-specific burden reductions between fiscal years 1995 and 1996, and the estimated reductions by the end of fiscal year 1997. OIRA officials also said that they have fulfilled the reporting requirement in other documents, including the CIO Council’s Strategic Plan and various statistical reports published by OIRA’s Statistical Policy Branch. Finally, they noted that they have testified at numerous hearings on the PRA and have responded to individual requests for information about PRA implementation from Members and committees of Congress. In our June 1996 testimony on the implementation of the PRA, we said that we did not believe that OIRA had kept Congress fully and currently informed about why it had not established any of the burden-reduction goals required in section 3505 of the act. We also noted that OIRA had not informed Congress that the 10 percent governmentwide burden-reduction goal envisioned in the act for fiscal year 1996 would not be met. We believed that both of these issues were “major activities” under the act and that OIRA should have informed Congress of those activities. As previously noted, OIRA established agency-specific burden-reduction goals through the publication of its fiscal year 1996 and fiscal year 1997 ICBs in its August 1996 and September 1997 reports to Congress. Although the ICBs in these reports presented the changes in burden-hour estimates from year to year, neither of the reports clearly stated that the governmentwide burden-reduction goals contemplated in the act were unlikely to be met. 
Neither did those reports indicate OIRA’s view that the sum of the individual reduction goals, each set at the maximum practicable level for its agency, need not equal the governmentwide goal. We believe these are also major activities under the PRA about which OIRA should have kept Congress and congressional committees fully and currently informed. OIRA’s August 1996 and September 1997 reports contained some of the specific elements that the PRA requires in OIRA’s annual reports (e.g., burden-reduction accomplishments and initiatives and violations of the act). Similarly, the other reports and actions that OIRA mentioned contained discussions of other PRA-related activities. However, other elements that the PRA requires in those reports were missing. For example, the reports did not list the authority for each information collection whose burden increased. Also, for agencies that did not meet their burden-reduction goals, the reports did not list the programs and statutory responsibilities that prevented the agencies from achieving the goals or recommendations to assist those agencies to reduce burden. None of the reports that OIRA officials mentioned contained information on how agencies had improved program performance and the accomplishment of agency missions through IRM—clearly a major focus of the 1995 PRA. Neither did those reports discuss what OIRA had done to carry out all of its major activities required by the act. For example, OIRA has not clearly and succinctly described its reviews of agencies’ IRM activities, perhaps in part because OIRA does not view this requirement as necessitating any separate activity. In February 1998, OMB submitted its performance plan for fiscal year 1999 to Congress pursuant to the Results Act.
In that plan, OMB said that one of its performance goals was to “[w]ork with agencies to reduce paperwork burdens.” OMB noted the PRA requirement that OIRA set a governmentwide goal of reducing information collection burdens by at least 5 percent in fiscal year 1999 and said it works with agencies to set goals to reduce burdens to the “maximum extent practicable.” OMB also noted that it submits an annual report to Congress describing these goals and agency progress toward meeting them. However, OMB did not indicate in the performance plan that the governmentwide goal was unlikely to be met or that it believes that the sum of the agency-specific goals does not have to equal the governmentwide goal. Also, OMB’s performance plan does not identify the specific strategies and resources that it will use to achieve this performance goal, nor does it provide performance measures that would allow Congress and the public to determine how well OMB is achieving these goals. This report examines some, but not all, of OIRA’s specific responsibilities under the 1995 PRA. Although OIRA officials noted a variety of actions that the agency had taken regarding those responsibilities, we do not believe that OIRA has fully satisfied the act’s requirements in any of the three areas we examined: (1) reviewing and controlling paperwork, (2) developing and overseeing federal IRM policies, and (3) keeping Congress and congressional committees fully and currently informed about major activities under the act. For example, in the area of paperwork review and control, the PRA requires OIRA to set both governmentwide and agency-specific burden-reduction goals. OMB’s January 1997 bulletin said that agencies should prepare and implement ICBs and streamlining plans that would achieve a 25-percent reduction by the end of fiscal year 1998. However, the agencies’ goals are actually established in the ICBs.
OIRA’s practice of establishing agency-specific burden-reduction goals in those ICBs at the level that it and the agencies expect the agencies’ paperwork burden will be by the end of the fiscal year will not motivate the agencies to reduce their information collection requirements. A goal should represent a desired condition, not simply the condition that the participating parties expect will occur. Also, OIRA’s pattern in the past 2 years of publishing agency goals for the fiscal year within the last 2 months of the fiscal year makes the goals of limited value in the management of the agencies’ paperwork reduction efforts. This year, OIRA again will not publish agency-specific goals until late in the fiscal year. Finally, although OMB’s January 1997 bulletin said that each agency’s burden-reduction goal should be consistent with the governmentwide 25-percent burden-reduction goal envisioned in the PRA, OIRA officials told us during this review that the agency and governmentwide goals are not necessarily linked. This position is illogical and appears inconsistent with the PRA’s legislative history. OIRA also has not fully satisfied either of the IRM-related responsibilities that we examined—developing a governmentwide IRM plan and periodically reviewing selected agency IRM activities. Although OIRA’s August 1996 and September 1997 reports on the PRA and the CIO Council’s strategic plan contain some of the elements that the PRA requires in an IRM strategic plan, none of these documents describes, in a clear and comprehensive manner, (1) the objectives and means by which the federal government should use information resources to improve agency and program performance or (2) agencies’ progress in applying IRM to improve their performance—two of the three basic elements that the act says an IRM strategic plan should have.
Also, although OIRA desk officers and officials mentioned a number of actions that they had taken to review agencies’ IRM activities, none of those actions appeared to focus on the two specific IRM responsibilities that the PRA explicitly assigns to the agencies—the development of an IRM strategic plan and the development of a process to establish goals for improving IRM’s contribution to productivity, efficiency, and effectiveness. Finally, as we have previously testified, OIRA has not kept Congress and congressional committees fully and currently informed about certain major activities under the PRA. The establishment of burden-reduction goals was one of the key elements in the 1995 PRA, and OIRA has included information in its annual reports to Congress about the status of agencies’ burden-reduction efforts. However, OIRA has never directly informed Congress in its reports or elsewhere that the goals envisioned in the PRA are unlikely to be met, or that the agencies believe that the goals cannot be met given current statutory requirements. Neither has OIRA informed Congress that it believes that the total of the agency-specific goals does not have to equal the governmentwide goals. We believe that these are major activities under the act about which OIRA should have kept Congress and congressional committees fully and currently informed. Had OIRA informed Congress that the goals in the PRA were unlikely to be met given agencies’ statutory obligations, Congress could have used that information to determine whether it wanted to change the goals or to change the statutory requirements to allow the agencies to meet the PRA’s goals. Also, OIRA has not informed Congress that it has not developed an IRM strategic plan, or even that it believes its August 1996 and September 1997 reports to Congress and the CIO Council’s report represent a strategic plan.
Finally, OIRA’s reports to Congress have not included several of the specific elements that the PRA requires OIRA to include in those reports. OIRA’s lack of action in some of these areas may be a function of its resource and staffing limitations. The office has fewer than two dozen staff who review between 3,000 and 5,000 PRA information collection requests each year, analyze the substance of about 500 significant rules each year under Executive Order 12866, and perform other duties pursuant to other statutes and executive orders. As a result, it may be difficult for OIRA officials and staff to carry out all of the specific tasks that the PRA requires of it or to adopt a strategic view of information collection and information management. However, as we said in our 1983 report on the PRA, if resource limitations are the problem, OMB officials need to notify Congress of those limits in its budget submission. It has not done so. Through its oversight role, Congress can help ensure that OIRA carries out its statutory obligations under the PRA and plays the leadership role that the drafters of the PRA believed would be critical to the act’s success. Congress can exercise that oversight in any number of ways, including congressional hearings that focus directly on how well OIRA has carried out its responsibilities under the act. Another alternative is through the appointment and confirmation process in which the Senate has an opportunity to explore what prospective OMB and OIRA nominees plan to do to ensure stronger leadership and better compliance with the PRA’s requirements. Congress could also use its review of the annual performance plans and reports that OMB is required to submit under the Results Act as a means of overseeing how OIRA is carrying out its PRA responsibilities. However, for Congress to use OMB’s plans and reports in this manner, the documents must directly address OIRA’s PRA responsibilities.
OMB’s performance plan would have to identify goals that relate to OIRA’s PRA responsibilities, identify the specific strategies and resources that it will use to achieve these performance goals, and develop measures that would inform Congress and the public about how well OMB is achieving these goals. Those performance goals would also have to be specifically linked to program activities in OMB’s budget requests. If OIRA’s staffing and resource limitations prevent it from accomplishing its responsibilities, or if OMB believes that OIRA’s PRA responsibilities need modification, OMB can highlight those limitations and propose any statutory changes that it believes are necessary in its performance plan and its annual report.

The Director of OMB should ensure that its annual performance plans and annual program reports to Congress pursuant to the Results Act identify specific strategies, resources, and performance measures that it will use to address OIRA’s specific PRA responsibilities. If the Director believes that OMB needs additional resources to carry out its PRA-related responsibilities, or that certain responsibilities or goals should be eliminated or revised, the Director should highlight those limitations and any proposed changes in the agency’s plans and reports. To improve the implementation of the PRA, Congress may want to use its oversight authority to help ensure that OIRA executes its responsibilities under the act. Specifically, Congress may want to focus part of its review of OMB’s annual performance plans and reports pursuant to the Results Act on OIRA’s statutory PRA obligations.

We provided a draft of this report to OMB for review and comment. On June 11, 1998, we met with the Acting Administrator of OIRA to discuss the report; and on June 17, 1998, he provided a written summary of OMB’s comments.
In that summary, the Acting Administrator said that because the report discusses only a few of OIRA’s responsibilities under the PRA, it does not accurately or fully portray the complexity and scope of these responsibilities. He also said that because the report does not include other responsibilities on which OIRA has taken action under the PRA, it does not accurately represent the extent to which OIRA has fulfilled most of these responsibilities. As a result, he said the report does not provide a complete, balanced, or accurate picture of how OIRA is carrying out its PRA responsibilities. For example, the Acting Administrator said the report suggests that OIRA has never directly informed Congress that the burden-reduction goals stated in the PRA are unlikely to be met, or that the agencies believe that the goals cannot be met given current statutory requirements. However, he noted that OMB has, for each year since the 1980 PRA, published an Information Collection Budget that sets forth the previous year’s baseline, the current year’s accomplishment, and the future year’s targeted goal for paperwork burdens. Moreover, he said OIRA has, for the past 3 years, informed Congress through formal and informal contacts that the general paperwork burden-reduction goals are unlikely to be met and that they could not be met given current statutory requirements. In addition, the Acting Administrator said the report suggests that agencies will not be motivated to meet the statutory governmentwide 10 and 5 percent annual burden-reduction goals because OIRA sets the agency-specific burden-reduction goals in the annual Information Collection Budget at the level that OMB and the agencies expect the agencies’ paperwork burden to be by the end of the upcoming fiscal year. He said that this conclusion does not take into account the fact that the PRA itself establishes the procedure under which OMB and the agencies establish their annual paperwork burden-reduction goals.
Specifically, he said, the PRA directs OMB, in consultation with each agency, to set an annual agency goal to reduce information collection burdens that “represent the maximum practicable opportunity in each agency” that is “consistent with improving agency management of the process for review of collections of information” established by the agency’s Chief Information Officer. He said this means that each year each agency is to seek to attain the “maximum practicable” paperwork burden reduction consistent with the agency’s statutory and program missions and the information management strategy of the Chief Information Officer. The aggregate of each agency’s annual goals that is the “maximum practicable” in light of each agency’s programmatic and statutory responsibilities may not equal, and as a general matter has not equaled, the governmentwide goal. The Acting Administrator also said this conclusion does not take into account the fact that agency information collections are largely driven by the need to carry out program and statutory missions. If an information collection that an agency submits for OMB review meets the practical utility, burden, and other PRA criteria for approval, he said that OMB does not have authority to disapprove it just because the approval would cause the agency to exceed the agency’s paperwork burden-reduction goal stated in the Information Collection Budget. The Acting Administrator said another example involves OMB’s implementation of its IRM responsibilities. Under the 1980 and 1995 PRAs, IRM is the broad umbrella under which all of OMB’s PRA responsibilities are carried out. However, he said that the report’s conclusions appear to be based on a narrow reading of a particular section of the PRA, rather than on a broad reading of the PRA itself.
He also said that OMB’s annual performance plans and reports already discuss OIRA’s PRA responsibilities and describe the targets by which OIRA’s attainment of those responsibilities will be met, and he said that OIRA has adequate resources to meet its many responsibilities. Finally, he suggested a number of technical and clarifying changes in the report. In relation to the Acting Administrator’s first point regarding the scope of our review, we clearly stated in several places in the draft report, including the title, that the report discusses only selected OIRA responsibilities under the PRA. The “Objectives, Scope, and Methodology” section of the report says “[w]e focused our review solely on OIRA’s implementation of the specific responsibilities delineated in the objectives. We did not examine the implementation of OIRA’s other PRA responsibilities, including its responsibilities in the areas of federal information technology, records management, and statistical policies.” The first sentence of the “Conclusions” section states that “[t]his report examines some, but not all, of OIRA’s specific responsibilities under the 1995 PRA.” Also, in the “Background” section we noted that OIRA has many other statutory and executive order responsibilities related to regulatory management, and we specifically delineated OIRA’s responsibilities under UMRA, SBREFA, and other statutes. Therefore, we believe that the report makes clear its scope limitations, and it also provides the context needed to understand the complexity and breadth of the PRA responsibilities on which we focused. We have issued other reports and testimonies related to OIRA’s PRA responsibilities that were outside of the scope of this report and that criticized OIRA’s performance in those areas.
Therefore, even if this report had been expanded to address these other responsibilities, there is no assurance that the report would have, as the Acting Administrator suggests, reached a different conclusion regarding the extent to which OIRA had fulfilled those responsibilities. For example, in September 1996 we reported that although OMB had taken some steps to improve information security, its oversight efforts were uneven and OMB “generally did not proactively attempt to identify and promote resolution of fundamental security program weaknesses that are likely to be at the root of these problems.” We have also previously reported concerns about OMB’s capacity to coordinate the budgets and statistical activities of the agencies in the federal statistical system. Furthermore, we believe that the PRA responsibilities on which we focused (e.g., establishment of burden-reduction goals and development of IRM strategic plans) are central to the successful implementation of the act. Within those areas, we believe that the report presents a complete, balanced, and accurate picture of OIRA’s actions and, in several cases, its lack of action. The Acting Administrator indicated that OMB’s Information Collection Budgets have kept Congress and congressional committees informed regarding progress toward the burden reduction goals in the PRA, and he said that OIRA has informed Congress through “formal and informal contacts” that the goals are unlikely to be met because of current statutory requirements. We called the Acting Administrator to determine what “formal and informal contacts” he was referring to in his comments, and he said that OIRA officials had told both majority and minority congressional staff that the PRA’s burden-reduction goals were unlikely to be met. However, he said that OIRA had never communicated that conclusion to Congress or congressional committees in any testimonies, letters, or other written documents. 
Also, although the ICBs in OIRA’s annual reports contain information on governmentwide progress toward the burden-reduction goals envisioned in the PRA, those documents do not clearly state that the goals are unlikely to be met or that existing statutory requirements are the reason. The Acting Administrator indicated that the PRA requires OIRA to set agency burden-reduction goals at the levels that the agencies believed represented their “maximum practicable opportunity,” and that these goals may not total to the governmentwide goal. We continue to believe that Congress, when it enacted the PRA, envisioned a relationship between the governmentwide goals and the agency-specific goals. If OIRA believes that agencies’ statutory and program missions make achievement of these interrelated goals unattainable, or that the PRA’s requirements regarding governmentwide and agency-specific goals are inconsistent, OIRA should notify Congress of its conclusions. To date, OIRA has not done so. We also continue to believe that agencies will not be motivated to improve their performance in reducing paperwork burden by OIRA’s practice of setting agency-specific goals after 10 months of the fiscal year have passed at a level that the agencies expect to reach within the next 2 months. The Acting Administrator’s statement regarding OMB’s inability to disapprove an agency’s proposed information collection simply because it may cause the agency to exceed its burden-reduction goal does not address our intended point. The PRA requires OIRA to set both governmentwide and agency-specific burden-reduction goals. The establishment of those goals does not, in any way, inhibit OIRA’s ability to review and, if necessary, disapprove an agency’s proposed collection of information. Similarly, OIRA’s reviews of agencies’ proposed information collections do not inhibit its ability to establish burden-reduction goals.
The act does not require agencies to meet the burden-reduction goals, only that OIRA and the agencies establish them. In another portion of his response, the Acting Administrator said that “[u]nder the 1980 and 1995 PRAs, IRM is the broad umbrella under which all of OMB’s PRA responsibilities are carried out.” We agree that the 1995 PRA (but not the 1980 act) envisions IRM as a central focus of OIRA’s (and the agencies’) responsibilities under the act. Conceptually, all of OIRA’s responsibilities under the act can be viewed as IRM-related. However, we believe that the requirements that OIRA (1) develop and maintain a governmentwide IRM strategic plan; and (2) oversee, among other things, agencies’ development of their own IRM strategic plans are central elements of OIRA’s IRM responsibilities under the PRA. The act states that both the governmentwide and agency-specific IRM plans are supposed to describe how information resources help accomplish agencies’ missions. It is only within the context of these mission-related plans that the relevance and accomplishment of OIRA’s other conceptually related IRM responsibilities can be assessed. Therefore, we focused on OIRA’s actions regarding these plans in this portion of our review. The Acting Administrator did not, in his response, dispute our conclusion that OIRA had not satisfied all of its responsibilities in this area. The Acting Administrator also said that OMB’s annual performance plans and reports under the Results Act already discuss OIRA’s PRA responsibilities and describe the targets by which OIRA’s attainment of those responsibilities will be met.
As we point out in the report, OMB’s February 1998 performance plan under the Results Act does not identify the specific strategies and resources that it will use to achieve the performance goal to “work with agencies to reduce paperwork burdens.” Also, the plan does not provide performance measures that would allow Congress and the public to determine how well OMB is achieving these goals. Finally, OMB’s program performance reports under the Results Act are not due until March 31, 2000. Therefore, we did not change our recommendation. In addition, we accepted some, but not all, of the Acting Administrator’s technical and clarifying changes to the draft report. For example, at his suggestion, we noted that the Clinger-Cohen Act amended parts of the PRA. We also clarified the scope of some of the headings in the report.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Ranking Minority Member of the Senate Committee on Governmental Affairs’ Subcommittee on Oversight, Restructuring and the District of Columbia; other interested committees; and the Director of OMB. We will also make copies available to others upon request. Major contributors to this report were Curtis Copeland, Assistant Director; and Elizabeth Powell, Evaluator-in-Charge. Please contact me at (202) 512-8676 if you have any questions.

L. Nye Stevens
Director, Federal Management and Workforce Issues

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O.
Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
The September 11, 2001, terrorist attacks had a devastating effect on the U.S. financial markets with significant loss of life, extensive physical damage, and considerable disruption to the financial district in New York. Damage from the collapse of the World Trade Center buildings caused dust and debris to blanket a wide area of lower Manhattan, led to severe access restrictions to portions of lower Manhattan for days, and destroyed substantial portions of the telecommunications and power infrastructure that served the area. Telecommunications service in lower Manhattan was lost for many customers when debris from the collapse of one of the World Trade Center buildings struck a major Verizon central switching office that served approximately 34,000 businesses and residences. The human impact was especially devastating because about 70 percent of the civilians killed in the attacks worked in the financial services industry, and physical access to the area was severely curtailed through September 13, 2001. Although most stock exchanges and clearing organizations escaped direct damage, the facilities and personnel of several key broker-dealers and other market participants were destroyed or displaced. Market participants and regulators acknowledged that the reopening of the stock and options markets could have been further delayed if any of the exchanges or clearing organizations had sustained serious damage. The stock and options exchanges remained closed as firms that were displaced by the attacks attempted to reconstruct their operations and reestablish telecommunications with their key customers and other market participants. In the face of enormous obstacles, market participants, infrastructure providers, and the regulators made heroic efforts to restore operations in the markets. Broker-dealers that had their operations disrupted or displaced relocated their operations to either backup facilities or other alternative facilities.
These facilities had to be outfitted to accommodate normal trading operations and to have sufficient telecommunications to connect with key customers, clearing and settlement organizations, and the exchanges and market centers. Some firms did not have existing backup facilities for their trading operations and had to create these facilities in the days following the crisis. For example, one broker-dealer leased a Manhattan hotel to reconstruct its operations. Firms were not only challenged with reconstructing connections to their key counterparties but, in some cases, they also had the additional challenge of connecting with the backup sites of counterparties that were also displaced by the attacks. The infrastructure providers also engaged in extraordinary efforts to restore operations. For example, telecommunications providers ran cables above ground rather than underground to speed up the restoration of service. By Friday, September 14, 2001, exchange officials had concluded that only 60 percent of normal market trading liquidity had been restored and that it would not be prudent to trade in such an environment. In addition, because so many telecommunications circuits had been reestablished, market participants believed that it would be beneficial to test these circuits prior to reopening the markets. Officials were concerned that without such testing, the markets could have experienced operational problems and possibly have had to close again, which would have further shaken investor confidence. The stock and options markets reopened successfully on Monday, September 17, 2001, and achieved record trading volumes. Although the government securities markets reopened within 2 days, activity within those markets was severely curtailed, as there were serious clearance and settlement difficulties resulting from disruptions at some of the key participants and at one of the two banks that clear and settle government securities.
Some banks had important operations in the vicinity of the attacks, but the impact of the attacks on the banking and payment systems was much less severe. Regulators also played a key role in restoring market operations. For example, the Federal Reserve provided over $323 billion in funding to banks between September 11 and September 14, 2001, to prevent organizations from defaulting on their obligations and creating a widespread solvency crisis. SEC also granted regulatory relief to market participants by extending reporting deadlines and relaxing the rules that restrict corporations from repurchasing their shares. The Department of the Treasury also helped to address settlement difficulties in the government securities markets by conducting a special issuance of 10-year Treasury notes. Although financial market participants, regulators, and infrastructure providers made heroic efforts to restore the functioning of the markets as quickly as they did, the attacks and our review of 15 key financial market organizations—including 7 critical ones—revealed that financial market participants needed to improve their business continuity planning capabilities and take other actions to better prepare themselves for potential disasters. At the time of the attacks, some market participants lacked backup facilities for key aspects of their operations, such as trading, while others had backup facilities that were too close to their primary facilities and were thus either inaccessible or affected by the same infrastructure problems in the lower Manhattan area. Some organizations had backup sites that were too small or lacked critical equipment and software. In the midst of the crisis, some organizations also discovered that the arrangements they had made for backup telecommunications service were inadequate.
In some cases, firms found that telecommunications lines they had acquired from different providers had been routed through the same paths or switches and were similarly disabled by the attacks. The 15 stock exchanges, ECNs, clearing organizations, and payment systems we reviewed had implemented various physical and information security measures and business continuity capabilities both before and since the attacks. At the time of our work—February to June 2002—these organizations had taken such steps as installing physical barriers around their facilities to mitigate the effects of attacks from vehicle-borne explosives and using passwords and firewalls to restrict access to their networks and prevent disruptions from electronic attacks. In addition, all 15 of the organizations had developed business continuity plans with procedures for restoring operations following a disaster, and some organizations had established backup facilities located hundreds of miles from their primary operations. Although these organizations had taken steps to reduce the likelihood that their operations would be disrupted by physical or electronic attacks and had also developed plans to recover from such events, we found that some organizations continued to have limitations that increased the risk of their operations being impaired by future disasters. This issue is particularly challenging for both market participants and regulators, because addressing security concerns and business continuity capabilities requires organizations to assess their overall risk profile and base business decisions on the trade-offs they are willing to accept in conducting their operations. For example, one organization may prefer to invest in excellent physical security, while another may choose to invest less in physical security and more in developing resilient business continuity plans and capabilities.
Our review indicated that most of the 15 organizations faced greater risk of operational disruption because their business continuity plans did not adequately address how they would recover if large portions of their critical staff were incapacitated. Most of the 15 organizations were also at greater risk of operational disruption from wide-scale disasters, either because they lacked backup facilities or because these facilities were located within a few miles of their primary sites. Few of the organizations had tested their physical security measures, and only about half were testing their information security measures and business continuity plans. Securities and banking regulators have made efforts to examine operations risk measures in place at the financial market participants they oversee. SEC has conducted reviews of exchanges, clearing organizations, and ECNs that have generally addressed aspects of these organizations’ physical and information security and business continuity capabilities. However, reviews of broker-dealers by SEC and the exchanges generally did not address these areas, although SEC staff said that such risks would be the subject of future reviews. Banking regulators also reported that they review such issues in the examinations they conduct at banks. Regulators also have begun efforts to improve the resiliency of clearing and settlement functions for the financial markets. In August 2002, the Federal Reserve, the Office of the Comptroller of the Currency, and SEC jointly issued the Draft Interagency White Paper on Sound Practices to Strengthen the Resilience of the U.S. Financial System. This paper sought industry comment on sound business practices to better ensure that clearance and settlement organizations would be able to resume operations promptly after a wide-scale regional disaster.
The regulators indicated that the sound practices would apply to a limited number of organizations that perform important clearing functions, as well as to between 15 and 20 banks and broker-dealers that also perform clearing functions with sizable market volumes. The regulators that developed the white paper appropriately focused on clearing functions to help ensure that settlement failures do not lead to a broader financial crisis. However, the paper did not similarly address restoring critical trading activities in the various financial markets. The regulators that developed the paper believed that clearing functions posed a greater potential for disruption because they are concentrated in a single entity for most markets or in a very few entities for others. In theory, multiple stock exchanges and other organizations that conduct trading activities could substitute for each other in the event of a crisis. Nevertheless, trading on the markets for corporate securities, government securities, and money market instruments is also vitally important to the economy, and the United States deserves similar assurance that trading activities would be able to resume when appropriate—smoothly and without excessive delay. The U.S. economy has demonstrated that it can withstand short periods during which markets are not trading. After some events, having markets closed for a limited time could be appropriate to allow emergency and medical relief activities, permit operations to recover, and reduce market overreaction. However, long delays in reopening the markets could be harmful to the economy. Without trading, investors lack the ability to accurately value their securities and cannot adjust their holdings. The September 11 attacks demonstrated that the ability of markets to recover could depend on the extent to which market participants have made sound investments in business continuity capabilities.
Without clearly identifying strategies for recovery, determining the sound practices needed to implement these strategies, and identifying the organizations that could conduct trading under these strategies, the risk is increased that markets may not be able to resume trading in a fair and orderly fashion and without excessive delays. Goals and strategies for resuming trading activities could be based on likely disaster scenarios and could identify the organizations that would be able to conduct trading in the event that other organizations could not recover within a reasonable time. Goals and strategies, along with guidance on business continuity planning practices and more effective oversight, would (1) provide market participants with the information they need to make better decisions about improving their operations, (2) help regulators develop sound criteria for oversight, and (3) assure investors that trading on U.S. markets could resume smoothly and in a timely manner. SEC has begun developing a strategy for resuming stock trading for some exchanges, but the plan is not yet complete. For example, SEC has asked the New York Stock Exchange (NYSE) and NASDAQ to take steps to ensure that their information systems can conduct transactions in the securities that the other organization normally trades. However, under this strategy NYSE does not plan to trade all NASDAQ securities, and neither exchange has fully tested its own or its members’ abilities to trade the other exchange’s securities. Given the increased threats demonstrated by the September 11 attacks and the need to assure that key financial market organizations are following sound practices, securities and banking regulators’ oversight programs are important mechanisms for assuring that U.S. financial markets are resilient. SEC oversees the key clearing organizations and exchanges through its Automation Review Policy (ARP) program.
The ARP program—which also may be used to oversee adherence to the white paper’s sound practices—currently faces several limitations. SEC did not implement the ARP program by rule but instead expected exchanges and clearing organizations to comply voluntarily with various information technology and operations practices. Under a voluntary program, however, SEC lacks leverage to assure that market participants implement important recommended improvements. While the program has prompted numerous improvements in market participants’ operations, we have previously reported that some organizations did not establish backup facilities or improve their systems’ capacity even after SEC’s ARP staff had identified these weaknesses. Moreover, ARP staff continue to find significant operational weaknesses at the organizations they oversee. An ARP program that draws its authority from an issued rule could provide SEC additional assurance that exchanges and clearing organizations adhere to important ARP recommendations and any new guidance developed jointly with other regulators. To preserve the flexibility that SEC staff consider a strength of the current ARP program, the rule would not have to mandate specific actions but could instead require that the exchanges and clearing organizations engage in activities consistent with the ARP policy statements. This would provide SEC staff with the ability to adjust their expectations for the organizations subject to ARP as technology and industry best practices evolve, and provide clear regulatory authority to require actions as necessary. SEC already requires ECNs to comply with ARP guidance, and extending the rule to the exchanges and clearing organizations would place them on similar legal footing. In a report issued in January 2003, the SEC Inspector General noted our concern over the voluntary nature of the program.
Limited resources and challenges in retaining experienced ARP staff also have affected SEC’s ability to effectively oversee an increasing number of organizations and more technically complex market operations. ARP staff must oversee various industrywide initiatives, such as Year 2000 readiness and decimal pricing, and the program has expanded to cover 32 organizations with more complex technology and communications networks. However, SEC has had difficulty retaining qualified staff, and market participants have raised concerns about the experience and expertise of ARP staff. The SEC Inspector General also found that ARP staff could benefit from increased training on the operations and systems of the entities overseen by the ARP program. At current staff levels, SEC staff report being able to conduct examinations of only about 7 of the 32 organizations subject to the ARP program each year. In addition, the intervals between examinations were sometimes long. For example, the intervals between the most recent examinations for seven critical organizations averaged 39 months.

The September 11, 2001, terrorist attacks exposed the vulnerability of U.S. financial markets to wide-scale disasters. Because the markets are vital to the nation's economy, GAO's testimony discusses (1) how the financial markets were directly affected by the attacks and how market participants and infrastructure providers worked to restore trading; (2) the steps taken by 15 important financial market organizations to address physical security, electronic security, and business continuity planning since the attacks; and (3) the steps the financial regulators have taken to ensure that the markets are better prepared for future disasters. The September 11, 2001, terrorist attacks severely disrupted U.S. financial markets as the result of the loss of life, damage to buildings, loss of telecommunications and power, and restrictions on access to the affected area.
However, financial market participants were able to recover relatively quickly from the terrorist attacks because of market participants' and infrastructure providers' heroic efforts and because the securities exchanges and clearing organizations largely escaped direct damage. The attacks revealed limitations in the business continuity capabilities of some key financial market participants that would need to be addressed to improve the ability of U.S. markets to withstand such events in the future. GAO's review of 15 stock exchanges, clearing organizations, electronic communication networks, and payments system providers between February and June 2002 showed that all were taking steps to implement physical and electronic security measures and had developed business continuity plans. However, some organizations still had limitations in one or more of these areas that increased the risk that their operations could be disrupted by future disasters. Although the financial regulators have begun efforts to improve the resiliency of clearance and settlement functions within the financial markets, they have not fully developed goals, strategies, or sound practices to improve the resiliency of trading activities. In addition, the Securities and Exchange Commission's (SEC) technology and operations risk oversight, which is increasingly important, has been hampered by program, staff, and resource issues. GAO's report made recommendations designed to better prepare the markets to deal with future disasters and to enhance SEC's technology and operations risk oversight capabilities.
Substantial questions remain on the efficacy and potential environmental impacts of proposed geoengineering approaches, in part because geoengineering research and field experiments to date have been limited. According to the experts we spoke with, research related to proposed SRM geoengineering approaches is sparse. According to recent studies, much of the research into SRM approaches to date has been limited to modeling studies to assess the effects of either injecting sulfur aerosols into the stratosphere or brightening clouds to reduce incoming solar radiation at the earth’s surface and produce a cooling effect. For example, one study found that combining a reduction of incoming radiation with high levels of atmospheric carbon dioxide could have substantial impacts on regional precipitation—potentially leading to reductions that could create drought in some areas. Based on our literature review and interviews with experts to date, only one study has been published for a field experiment related to SRM technologies—a Russian experiment that injected aerosols into the middle troposphere. According to the International Energy Agency (IEA), commercial applications of technology exist for injecting and monitoring the long-term storage of carbon dioxide in geologic formations. The IEA stated that the oldest of these started as a private-sector project in 1996 and now continues under funding from the European Commission. However, these projects are primarily associated with public and private initiatives to study, develop, and promote carbon capture and storage technologies as a greenhouse gas emissions reduction strategy, rather than at the large scale that would be required to significantly alter the climate through geoengineering. Similarly, some ocean fertilization experiments using iron have been conducted as part of existing marine research studies or small-scale commercial operations.
One expert familiar with these experiments noted that, while they improved scientific understanding of the role of iron in regulating ocean ecosystems and carbon dynamics, they were not specifically designed to determine the implications of ocean fertilization with iron as a geoengineering approach for large-scale removal of carbon dioxide from the atmosphere. Due to the limited amount of geoengineering research conducted to date, the experts we interviewed stated that a sustained program of additional research would be needed to address the significant uncertainties regarding the effectiveness and potential impacts of geoengineering approaches. Additionally, these experts noted that for certain approaches where transboundary impacts would be likely during field experiments, international cooperation for research would be necessary. Specifically, recent studies highlight the limitations of current models to accurately predict the environmental impact of SRM technologies at a regional scale—which would be necessary to accurately gauge potential impacts that might interfere with agricultural production for certain regions. Furthermore, studies indicate that, even for the most tested methods applicable to geoengineering, such as geological sequestration and ocean fertilization with iron, uncertainties remain surrounding the potential cost, effectiveness, and impacts of pursuing these approaches at a scale sufficient to reduce the amount of carbon in the atmosphere. Due to the potential for disparities in environmental outcomes from using these technologies—similar to the expected regional variation in climate change impacts—experts that we spoke with said that the political, ethical, legal, and economic issues surrounding the potential impacts of geoengineering technologies warranted close examination. These experts generally agreed that the policy implications for SRM and CDR approaches were very different. 
For example, certain SRM approaches, such as atmospheric aerosol injection, are generally perceived as being less costly to implement and would act more quickly to reduce temperatures than CDR approaches. However, these approaches are also associated with a greater risk of environmental impacts that cross national boundaries— which would have political, ethical, legal, and economic ramifications. Furthermore, according to several of these experts, the policy implications of SRM approaches are complicated by the fact that there are likely to be both positive and negative outcomes for nations or regions, and that one nation, group, or individual could conceivably take unilateral action to deploy one of these technologies. Experts emphasized that it is important to begin studying how the United States and the international community might address the ramifications of unilateral deployment of an SRM approach that would result in gains for some nations and losses for others. In contrast, with the exception of ocean fertilization, two of the experts we interviewed stated that most CDR approaches, such as air capture, would have limited impacts across national boundaries and could, therefore, mostly involve discussions with domestic stakeholders about societal, economic, and political impacts similar to those of existing climate change mitigation strategies. However, the Royal Society study noted that large-scale deployment of CDR approaches such as widespread afforestation—planting of forests on lands that historically have not been forested—or methods requiring substantial mineral extraction—including land- or ocean-based enhanced weathering—may have unintended and significant impacts within and beyond national borders. Our observations to date indicate that federal agencies such as DOE, the National Science Foundation (NSF), the U.S.
Department of Agriculture (USDA), and others have funded some research and small-scale technology testing relevant to proposed geoengineering approaches on an ad hoc basis. Some examples are as follows: For SRM approaches, DOE, through its Sandia National Laboratories, has sponsored a study investigating the potential unintended consequences and economic impacts of sulfur aerosol injection. Additionally, DOE has contributed a small amount of funding for modeling studies related to cloud-brightening and stratospheric aerosol SRM approaches at its Pacific Northwest National Laboratory—an effort that is primarily funded by the University of Calgary. For CDR approaches, DOE has sponsored research in both land-based and ocean-based carbon storage, including small-scale demonstration projects of geological sequestration as part of its Regional Carbon Sequestration Partnerships. In conjunction with other partners, DOE also provided funding for a study on carbon dioxide air capture technologies. NSF has funded projects relevant to both SRM and CDR approaches. For SRM approaches, NSF has sponsored some modeling studies for stratospheric aerosol injection and for a space-based SRM approach. NSF has also funded research investigating the ethical issues related to SRM approaches. For CDR approaches, NSF is supporting projects related to carbon storage in geological formations, saline aquifers, and biomass. Relevant to CDR approaches, USDA has supported research that examined land-based carbon storage approaches, such as biochar—a way to draw carbon from the atmosphere and sequester it in charcoal created from biomass—through its Agricultural Research Service, and carbon sequestration in soil and biomass as part of its Economic Research Service.
National Aeronautics and Space Administration (NASA) funded a research study investigating the practicality of using a solar shield in space to deflect sunlight and reduce global temperatures as part of its former independent Institute for Advanced Concepts program. Additionally, scientists at NASA’s Ames Research Center, independent of headquarters, held a conference on SRM approaches in 2006, in conjunction with the Carnegie Institution of Washington. EPA has also sponsored research related to the economic implications of SRM geoengineering approaches through its National Center for Environmental Economics. In addition to these efforts, federal officials noted that a large fraction of the existing federal research and observations on basic climate change and earth science could be relevant to improving understanding about proposed geoengineering approaches and their potential impacts. For instance, according to federal officials, ongoing research conducted by USGCRP agencies related to understanding atmospheric circulation and aerosol/cloud interactions could help improve understanding about the potential effectiveness and impacts of proposed SRM approaches. Similarly, these officials said that basic research conducted by USGCRP agencies into oceanic chemistry could help address uncertainty about the potential effectiveness and impacts of CDR approaches, such as ocean fertilization. Staff from federal offices coordinating the U.S. response to climate change—CEQ, OSTP, and USGCRP—stated that they do not currently have a geoengineering strategy or position. Additionally, a USGCRP official stated that, while the USGCRP could establish an interagency working group to coordinate a federal effort in geoengineering research, such a group is not currently necessary because of the small amount of federal funding specifically directed toward these activities. 
In the event that the federal government decides to fund a coordinated geoengineering research strategy, our review of relevant studies and interviews with experts to date identified some key factors for policymakers to consider when designing a federal strategy for geoengineering research. For example, the Royal Society study noted that when there is a likelihood of transboundary impacts—as with the SRM approaches discussed, as well as one CDR approach, ocean fertilization—transparency and international cooperation are key factors for pursuing geoengineering research. This point was reiterated by several experts at a recent panel discussion at the American Association for the Advancement of Science annual meeting. However, a couple of experts we interviewed noted that federal research on geoengineering approaches without likely transboundary impacts could be conducted independently of other countries, as is the case with the majority of currently proposed CDR approaches, such as air capture. Additionally, due to the variety of geoengineering approaches, several of the experts we interviewed recommended that federal geoengineering research should be an interdisciplinary effort across multiple agencies, led by a multiagency coordinating body, such as OSTP or USGCRP. Recent GAO work offers insights on key considerations for assessing risk and managing technology-based research programs. For example, we have reported on the advantages of using a formal risk-management approach and applying an anticipatory perspective when making decisions under substantial uncertainty.
Specifically, we reported that outlining the various alternative policy responses and the risks and uncertainties associated with pursuing each alternative is particularly important when prospective interventions require long lead times, high-stakes outcomes would likely result, and a delayed intervention would make impacts difficult to contain or reverse—conditions that could be considered relevant to the risks associated with climate change impacts. Furthermore, our review of DOE’s FutureGen project—a program that partners with the electric power industry to design, build, and operate the world’s first coal-fired, zero-emissions power plant—found that a comprehensive assessment of the costs, benefits, and risks of each technological option is an important factor when developing a strategic plan for technology-based research. Existing federal laws and international agreements were not enacted or negotiated with the purpose or intent to cover geoengineering activities, but according to legal experts and federal officials, several existing federal laws and international agreements could apply to geoengineering research and deployment, depending upon the type, location, and sponsor of the activity. Domestically, however, interviews with agency officials to date and our past work indicate that most federal agencies have not yet assessed their statutory authority to regulate geoengineering activities, and those that have done so have identified regulatory gaps. Examples include the following: EPA has authority under the Safe Drinking Water Act to regulate underground injections of various substances and is using this authority to develop a rule that would govern the underground injection of carbon dioxide for geological sequestration, which could be relevant to future CDR approaches. EPA issued a proposed rule on geological sequestration in July 2008. EPA officials told us that the final rule is currently scheduled to be issued in the fall of 2010.
However, as EPA officials noted, the rulemaking was not intended to resolve many questions concerning how other environmental statutes may apply to injected carbon dioxide, including the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) and the Resource Conservation and Recovery Act of 1976 (RCRA), which apply to hazardous substances and wastes, respectively. The White House recently established an interagency task force on carbon capture and storage to propose a plan to overcome the barriers to widespread deployment of these technologies. The plan will address, among other issues, legal barriers to deployment and identify areas where additional statutory authority may be necessary. Under the Marine Protection, Research and Sanctuaries Act of 1972, as amended, certain persons are generally prohibited from dumping material, including material for ocean fertilization, into the ocean without a permit from EPA. Although EPA officials told us that the law’s ocean dumping permitting process is sufficient to regulate certain ocean fertilization activities, including research projects, they noted that the law is limited to disposition of materials for fertilization by vessels or aircraft registered in the United States, vessels or aircraft departing from the United States, or federal agencies, or to disposition of materials for fertilization conducted in U.S. territorial waters, which extend 12 miles from the shoreline or coastal baseline. Consequently, a domestic company could conduct ocean fertilization outside of EPA’s regulatory jurisdiction and control if, for example, the company’s fertilization activities took place outside U.S. territorial waters from a foreign-registered ship that embarked from a foreign port. Additionally, agency officials and legal experts noted that other laws, such as the National Environmental Policy Act of 1969 (NEPA), could also apply to certain geoengineering activities.
For example, NEPA requires federal agencies to evaluate the likely environmental effects of certain major federal actions by using an environmental assessment or, if the project likely would significantly affect the environment, a more detailed environmental impact statement. A geoengineering activity could well constitute a major federal action requiring a NEPA analysis. Although some geoengineering approaches, such as geological sequestration of carbon dioxide in underground formations, would not involve international agreements because the activities and their effects would be confined to U.S. territory, other SRM and CDR approaches would. Legal experts we spoke with identified a number of existing international agreements that could apply to geoengineering activities, but none of these agreements directly addresses geoengineering. Our initial work indicates that the parties to two international agreements have taken action to address geoengineering activities, but it is still uncertain whether and how other existing international agreements that legal experts have identified as potentially relevant could apply to geoengineering. In our work to date, legal experts have identified a number of existing international agreements, such as the 1985 Vienna Convention for the Protection of the Ozone Layer and the 1967 Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, that could be relevant for injection of sulfate aerosols into the stratosphere and placement in outer space of material to reflect sunlight, respectively. However, these agreements were not drafted with the purpose or intent of applying to geoengineering activities, and the parties to those treaties have not determined whether or how the agreements should apply to relevant geoengineering activities.
Moreover, even once the parties make such determinations, the agreements may have limited applicability because international agreements generally are legally binding only on countries that are parties to them. For example, the 1996 Protocol to the Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter (also known as the London Protocol) generally prohibits the dumping of wastes or other matter into the ocean except for the wastes and matter listed in the London Protocol and for which a party to the agreement has issued a dumping permit that meets the Protocol’s permitting requirements. In 2006, the parties to the London Protocol agreed to amend the Protocol to include, in certain circumstances, geological sequestration of carbon dioxide in sub-seabed geological formations on the list of wastes and other matter that may be dumped. However, only the 37 countries that are parties to the London Protocol and that have not objected to the amendment would be legally bound by it. In two instances, the parties to international agreements have issued decisions, but not amended the agreements, regarding the agreements’ application to ocean fertilization, including research projects. Generally, these decisions by the parties are not considered to be legally binding, although they would aid in interpreting the international agreements. Specifically, the two instances are as follows: Over the course of the last 2 years, parties to the Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter and the London Protocol to the Convention have decided that the scope of these agreements includes ocean fertilization activities for legitimate scientific research. Accordingly, they have asked the treaties’ existing scientific bodies to develop an assessment framework for countries to use in evaluating whether research proposals are legitimate scientific research and, therefore, permissible under the agreements.
In addition, the parties have agreed that ocean fertilization activities other than legitimate scientific research are contrary to the aims of the agreements and should not be allowed. Meanwhile, the parties are considering a potentially legally binding resolution or amendment to the London Protocol concerning ocean fertilization. In 2009, the parties to the Convention on Biological Diversity issued a decision requesting that parties to the Convention ensure that ocean fertilization activities, except for certain small-scale scientific research within coastal waters, do not take place until there is an adequate scientific basis on which to justify such activities and a global, transparent, and effective control and regulatory mechanism is in place. The decision also urged the same of governments that are not party to the agreement. The legal experts we have interviewed to date suggested that governance of geoengineering research should be separated from governance of deployment because scientists and policymakers lack critical information about geoengineering that would inform governance of deployment. These experts all agreed that some type of regulation of geoengineering field experiments is necessary, but their views on the structure of such regulation differed. For example, some suggested a comprehensive international governance regime for all geoengineering research with transboundary impacts, under the auspices of the United Nations Framework Convention on Climate Change or another entity, while others suggested that existing international agreements, such as the London Convention and Protocol, could be adapted and used to address the geoengineering approaches that fall within their purview. The scientific and policy experts we spoke with largely echoed the themes and issues that the legal experts raised. 
Interviews with scientific experts to date suggest that governance issues related to geoengineering research with the potential for transboundary impacts should be addressed in a transparent, international manner in consultation with the scientific community. Some scientific and policy experts noted that the approach adopted by parties to the London Protocol engaged the scientific community in developing guidelines for assessing legitimate scientific research proposals that are not contrary to the treaties’ aims, rather than prohibiting the scientific research necessary to determine the efficacy and impacts of ocean fertilization. Regarding geoengineering deployment, some scientific and policy experts drew a parallel to the difficulty of achieving international consensus on carbon mitigation strategies, where there are definite “winners and losers” in terms of economic and environmental benefits. Establishing a governance regime over deployment of certain geoengineering approaches may be equally challenging because of questions about whether deployment is warranted, how to determine an appropriate new environmental equilibrium, and how to compensate for adverse impacts, among other issues. Mr. Chairman, this concludes my prepared statement. We look forward to helping this committee and Congress as a whole better understand this important issue. I would be pleased to respond to any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact Frank Rusco, Director, Natural Resources and Environment at (202) 512-3841, or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Contributors to this testimony include: John Stephenson, Director; Tim Minelli, Assistant Director; Ana Ivelisse Aviles; Charles Bausell Jr.; Frederick Childers; Judith Droitcour; Lorraine Ettaro; Brian Friedman; Cindy Gilbert; Gloria Hernandezsaunders; Eric Larson; Eli Lewine; Madhav Panwar; Timothy Persons; Jeanette Soares; Joe Thompson; and Lisa Van Arsdale. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Key scientific assessments have underscored the urgency of reducing emissions of carbon dioxide to help mitigate potentially negative effects of climate change; however, many countries with significant greenhouse gas emissions, including the United States, China, and India, have not committed to binding limits on emissions to date, and carbon dioxide levels continue to rise. Recently, some policymakers have raised questions about geoengineering--large-scale deliberate interventions in the earth's climate system to diminish climate change or its potential impacts--and its role in a broader strategy of mitigating and adapting to climate change. Most geoengineering proposals fall into two approaches: solar radiation management (SRM), which would offset temperature increases by reflecting a small percentage of the sun's light back into space, and carbon dioxide removal (CDR), which would address the root cause of climate change by removing carbon dioxide from the atmosphere. 
Today's testimony focuses on GAO's preliminary observations on (1) the state of the science regarding geoengineering approaches and their effects, (2) federal involvement in geoengineering activities, and (3) the views of experts and federal officials about the extent to which federal laws and international agreements apply to geoengineering. To address these issues, GAO reviewed scientific literature and interviewed federal officials and scientific and legal experts. Substantial uncertainties remain about the efficacy and potential environmental impacts of proposed geoengineering approaches because geoengineering research and field experiments to date have been limited. GAO's review of relevant studies and interviews with experts to date found that relatively few modeling studies for SRM approaches have been published, and only limited small-scale testing--primarily of carbon storage activities relevant to CDR approaches--has been performed. Consequently, the experts GAO spoke with stated that a sustained effort of coordinated and cooperative research would be needed to determine whether proposed geoengineering approaches would be effective at a scale necessary to reduce temperatures and to attempt to anticipate and respond to potential unintended consequences--including the political, ethical, and economic issues surrounding the use of certain approaches. Specifically, just as the effects of climate change in general are expected to vary by region, so would the effects of certain large-scale geoengineering efforts, potentially creating relative winners and losers and thus sowing the seeds of future conflict. Federal agencies have funded some research and small demonstration projects of certain technologies related to proposed geoengineering approaches, but these efforts have been limited, fragmented, and not coordinated as part of a federal geoengineering strategy. 
Officials from interagency bodies coordinating the federal response to climate change stated that their offices (1) have not developed a coordinated research strategy, (2) do not have a position on geoengineering, and (3) do not believe it is necessary to coordinate efforts due to the limited federal investment to date. In the event that the federal government decides to expand geoengineering research, GAO's interviews with experts suggest that transparency and international cooperation are key factors for any geoengineering research that poses a risk of environmental impacts beyond our borders. Further, GAO's past work indicates that a comprehensive assessment of costs and benefits that includes all relevant risks and uncertainties is a key component in strategic planning for technology-based research. According to legal experts and federal agency officials, some existing federal laws and international agreements could apply to geoengineering research and deployment. However, some federal agencies have not yet assessed their authority to regulate geoengineering, and those that have done so have identified regulatory gaps. Although legal experts have identified some relevant international agreements and parties to two agreements have taken actions to address geoengineering, it is not certain whether and how other agreements would apply. Most scientific and legal experts GAO spoke with distinguished the governance of research from governance of deployment and noted that governance of geoengineering research with transboundary impacts, such as SRM approaches, should be addressed at the international level in a transparent manner and in consultation with the scientific community. However, the experts' views on the details of governance varied. |
Long-term care includes many types of services needed when a person has a physical or mental disability. Individuals needing long-term care may have difficulty performing some activities of daily living (ADL) without assistance, such as bathing, dressing, toileting, eating, and moving from one location to another. They may have mental impairments, such as Alzheimer’s disease, that necessitate supervision to avoid harm to themselves or others or require assistance with tasks such as taking medications. Although a chronic physical or mental disability may occur at any age, the older an individual becomes, the more likely a disabling condition will develop or worsen. Nearly one-seventh of the nation’s current elderly population—an estimated 5.2 million—have a limitation in either ADLs; instrumental activities of daily living (IADL) such as preparing food, doing housekeeping, and handling finances; or both. More than one-third of these people have limitations in two or more ADLs. Long-term care encompasses a wide array of care settings and services, not only institutional care provided by nursing homes for individuals with more extensive care needs but also home and community-based care. Nearly 80 percent of the elderly requiring assistance with ADLs or IADLs live at home or in community-based settings, while more than 20 percent live in nursing homes or other institutions. The majority of long-term care is provided by unpaid family caregivers to elderly individuals living either in their own homes or with their families. However, a growing minority of the elderly receives paid assistance from various sources. For example, state Medicaid programs have increased significantly the number of beneficiaries receiving in-home or community services. In addition, alternatives to nursing home care, such as assisted-living arrangements, are developing that have long-term care services available. Long-term care needs are an especially significant concern for women. 
Women represent 7 of 10 unpaid caregivers, three-quarters of nursing home residents 65 years and older, and two-thirds of home health care users. Given their longer life expectancies and the fact that married women usually outlive their spouses, many women face a greater risk of needing long-term care by a paid caregiver. The baby boom generation, about 76 million people born between 1946 and 1964, will contribute significantly to the growth in the number of elderly individuals who need long-term care and in the amount of resources required to pay for it. The oldest baby boomers are now in their fifties. In 2011, the first of the baby boomers born in 1946 will turn 65 years old and become eligible for Medicare. The Medicaid program, which pays for many health care services for low-income elderly, including nursing home care, will also begin to be affected. Baby boomers are likely to have a disproportionate effect on the demand for long-term care because more are expected to live to advanced ages, when need is most prevalent. The first baby boomers reach age 85 in 2030. In 2000, individuals aged 65 or older made up 12.7 percent of our nation’s total population. By 2020, that percentage will increase by nearly one-third to 16.5 percent, or one in six Americans, and will represent nearly 20 million more seniors than there are today. By 2040, the number of seniors aged 85 years and older will more than triple to 14 million (see fig. 1). Some researchers suggest that, as people live longer, the period at the end of life when long-term care may be needed may have increased. Others contend that better treatment and prevention could decrease the time period at the end of life when long-term care is needed. Baby boomers may also have a disproportionate effect on the demand for paid services. Many baby boomers will have fewer options besides paid long-term care providers because a smaller proportion of this generation may have a spouse or adult children to provide unpaid caregiving. 
This likelihood stems from the geographic dispersion of families and the large percentage of women who work outside the home, which may reduce the number of unpaid caregivers available to elderly baby boomers. In 1999, spending for nursing home and home health care was about $134 billion. Individuals needing care and their families paid for almost 25 percent of these expenditures out-of-pocket, public programs (predominantly Medicaid and Medicare) funded 61 percent, private insurance (including long-term care insurance as well as services paid by traditional health insurance) accounted for about 10 percent, and other private sources paid the remaining 5 percent (see fig. 2). These amounts, however, do not include the many hidden costs of long-term care. For example, they do not include wages lost when an unpaid family caregiver takes time off from work to provide assistance. An estimated 60 percent of the disabled elderly living in communities rely exclusively on their families and other unpaid sources for their care. Medicaid, a joint federal-state health financing program for low-income individuals, continues to be the largest public funding source for long-term care. Within broad federal guidelines, states design and administer Medicaid programs that include coverage for certain mandatory services, such as skilled nursing facility care, and other optional coverage, including home and community-based services. Long-term care services under Medicaid are not limited to adults—about 1 million children with special needs also receive long-term care services from Medicaid. Although most Medicaid long-term care expenditures are for nursing home care, in the last two decades the proportion of expenditures for home and community-based care has increased. By fiscal year 1998, the number of Medicaid recipients receiving home health or home and community-based services was similar to the number of Medicaid recipients receiving nursing facility services. 
How much service Medicaid provides varies among states, and Medicaid financing can be vulnerable to shifts in state revenues. State Medicaid programs have, by default, become the major form of insurance for long-term care. About two-thirds of nursing home residents in 1998 relied on Medicaid to help pay for their care, but Medicaid provides insurance only after individuals have become nearly impoverished by “spending down” their assets. Medicaid eligibility for many elderly persons results from having become poor as the result of depleting assets to pay for nursing home care, which costs an average of $55,000 per year; in general, individuals must have less than $2,000 in countable assets to become eligible for Medicaid coverage. An overall increase in wealth among the elderly means that a smaller proportion of elderly individuals will initially qualify for Medicaid—and others will need to become impoverished before they qualify. According to a 2000 MetLife Mature Market Institute survey, nursing home costs vary widely by geographic region, from nearly $33,000 per year in Hibbing, Minnesota, to more than $100,000 per year in the Borough of Manhattan in New York City. Home and community-based waivers allow states to control the number and costs of eligible individuals served under Medicaid in home and community-based settings. All states now have home and community-based waivers, and more than 200 waiver programs served more than 450,000 individuals nationwide in fiscal year 1998. Medicaid expenditures for home and community-based waivers increased an average of 29 percent per year from 1988 to 1999, reaching over $10 billion in 1999. The extent of services provided varies considerably among the states. Medicaid per capita expenditures for home care in 1999 ranged from a low of about $8 in Mississippi to a high of nearly $230 in New York. Medicaid is a significant share of state budgets—comprising 20 percent on average. Dependence on state budgets makes Medicaid financing vulnerable to states’ fiscal health. 
States generally must maintain balanced budgets without deficits, and their revenues often decline in periods of low or negative economic growth. A recent fiscal survey of states showed that about one-half of states are expecting declines in revenue growth for 2001 to 2002, and a few states are reducing current-year appropriations and making other adjustments to maintain balanced budgets. At the same time, one-half of the states estimate that Medicaid spending will exceed their current projections. With declining revenue and increasing Medicaid expenditures, maintaining balanced budgets may require states to constrain Medicaid expenditures, including the large share represented by long-term care services. While Medicare primarily covers acute care, in the early 1990s it also became a de facto payer for some long-term care services. However, as spending for both skilled nursing facility services and home health care became the fastest growing components of Medicare, the Congress in the Balanced Budget Act of 1997 (BBA) introduced new payment systems for nursing facilities and home health providers to control this spending. Medicare is not intended to cover long-term care costs and therefore limits its nursing home coverage to short-term, post-acute stays of up to 100 days per spell of illness following hospitalization. Medicare nursing home spending increased from $1.7 billion in 1990 to $10.4 billion in 1998 and declined to $9.6 billion in 1999. Since 1989, Medicare has also become a significant funding source for home care, financing $8.7 billion in care in 1999—or more than one-fourth of the home care purchased for the elderly. Court decisions and legislative changes in coverage essentially transformed the Medicare home health benefit from one focused on patients needing acute, short-term care after hospitalization to one that primarily served chronic, long-term care patients. By 1994, only about one-fourth of home health visits covered by Medicare occurred within 60 days following a hospitalization. 
As a result, Medicare, on a de facto basis, financed an increasing amount of long-term care through its home health care benefit. Both the number of beneficiaries receiving home health care and the number of visits per user more than doubled from 1989 to 1996. From 1990 to 1997, the average annual growth rate for Medicare home health care spending was 25.2 percent—more than 3 times the growth rate for Medicare spending as a whole. This increase in the use of these services cannot be explained by any increase in the incidence of illness among Medicare beneficiaries. In response to concerns about the growth in spending for Medicare services, including skilled nursing facility and home health services, the BBA included provisions to slow Medicare spending growth. The BBA required prospective payment systems (PPS) to be implemented for Medicare services provided through home health care agencies and skilled nursing facilities, replacing retrospective, cost-based reimbursement systems that did not provide adequate incentives to control costs. The skilled nursing facility PPS began to be implemented in July 1998 and will be completely phased in this year. The home health PPS is intended to better match payments with patient needs, and its payment rates assume more home health visits per user than those currently being provided. (See Medicare Home Health Care: Prospective Payment System Could Reverse Recent Declines in Spending, GAO/HEHS-00-176, Sept. 8, 2000.) As a result, the PPS can support a large expansion of services. However, PPS incentives are intended to reward efficiency and control use of services. Because criteria for what constitutes appropriate home health care do not exist, it may be difficult for Medicare to ensure that patients receive all necessary services. How home health agencies respond to the PPS and its incentives could have major implications for the amount of future Medicare funding for home health care, the services provided, and whether Medicare remains a significant payer of long-term care. 
Many baby boomers will have more financial resources in retirement than their parents and may therefore be better able to absorb some long-term care costs. However, long-term care will represent a catastrophic cost for a relatively small portion of families. Private insurance can provide protection for such catastrophes because it spreads the risk among larger numbers of persons. Private long-term care insurance has been viewed as a means of both reducing potential catastrophic financial losses for the elderly and relieving some of the financing burden now shouldered by public long-term care programs. Some observers also believe private long-term care insurance could give individuals a greater choice of services to satisfy their long-term care needs. However, less than 10 percent of elderly individuals and even fewer near-elderly individuals (those aged 55 to 64) have purchased long-term care insurance. The National Association of Insurance Commissioners’ (NAIC) most recent data show that approximately 4.1 million persons were insured through long-term care policies in 1998, compared with 1.7 million persons in 1992. By comparison, about two-thirds of the elderly—about 23 million individuals—have private Medicare supplemental (Medigap) insurance policies for non-Medicare-covered expenses such as copayments, deductibles, and prescription drugs. (See Medicare: Refinements Should Continue to Improve Appropriateness of Provider Payments, GAO/T-HEHS-00-160, July 19, 2000.) The accuracy of these policy numbers is dependent upon the accuracy of the information filed by the insurers themselves with the NAIC. Many baby boomers continue to believe they will never need such coverage. A recent survey of the elderly and near elderly found that only about 40 percent believed that they or their families would be responsible for paying for their long-term care. 
Some mistakenly believed that public programs, including Medicaid and Medicare, or their own health care insurance would provide comprehensive coverage for the services they need. This low perceived need for protection decreases demand for long-term care insurance. People also may be concerned about whether they can afford such insurance now or in the future when their premiums may increase and their retirement incomes may have decreased. Some employers offer employees a voluntary group policy option for long-term care insurance, but this market remains small and includes predominantly large employers. Usually employers do not pay for any of the costs of these policies, but group policies have lower administrative costs than individually purchased policies, which can result in lower premiums. One study estimated that 6 to 9 percent of eligible employees took advantage of employer-provided group long-term care insurance where it was available. Last year, the Congress passed legislation to offer unsubsidized, optional group long-term care insurance to federal employees and retirees beginning by fiscal year 2003. This initiative will likely establish the largest group offering of long-term care insurance and could significantly expand this market. A qualified long-term care insurance plan is defined as a contract that covers only long-term care services; does not pay for services covered under Medicare; is guaranteed to be renewable; does not provide for a cash surrender value or other money that can be paid, assigned, pledged as collateral for a loan, or borrowed; applies all refunds of premiums and all policyholder dividends or similar amounts as a reduction in future premiums or to increase future benefits; and meets certain consumer protection standards. Also, payments received from a qualified plan are considered medical expenses and are excluded from gross income for determining income taxes. 
Per diem policies that pay on the basis of disability rather than reimbursing for services used are subject to a cap of $180 per day per person in 1998. Out-of-pocket expenses for long-term care are allowed as itemized deductions along with other medical expenses if they exceed 7.5 percent of adjusted gross income. States have widely adopted these NAIC long-term care insurance standards. Three of these standards require policies to (1) not make prior hospitalization a condition for coverage, (2) have an outline of the coverage the policy provides, and (3) be guaranteed to be renewable and noncancelable except for nonpayment of premiums. In addition, all but one state adheres to the NAIC definition of long-term care insurance (policies that provide coverage for at least 12 months for necessary services provided in settings other than acute-care hospital units), and all but two states adhere to the preexisting conditions standard. Overall, HIAA identified 14 NAIC provisions specified for long-term care policies to be tax-qualified under HIPAA that had been adopted by at least 35 states as of July 1998. Many elderly and near-elderly individuals question the affordability and the value of long-term care insurance relative to the premiums charged. Long-term care insurance costs vary depending on the policyholder’s age at the time of purchase, optional benefits and terms selected, and the insurer. Premiums for a 65-year-old are typically about $1,000 per year and can be much higher for more generous coverage or for older buyers. The affordability of long-term care insurance determines to a great extent its market and is a key factor in individuals’ decisions to purchase and retain a long-term care insurance policy. Although assessing whether individuals can afford a policy is subjective, some studies estimate that long-term care insurance is affordable for only 10 to 20 percent of the elderly. Affordability is even more of an issue for married couples, who must each purchase individual coverage. 
While some insurers offer discounts to married couples when both purchase long-term care coverage, elderly couples are still likely to pay at least several thousand dollars annually for long-term care coverage. Those who consider and decide against purchasing long-term care insurance say they are skeptical about whether private policies will give adequate coverage. Those who do find long-term care insurance affordable when purchased may later decide it is not if their financial circumstances change or the premiums increase. An industry group estimates that 55 to 65 percent of all long-term care insurance policies sold as of June 1998 remain in force. Premiums also rise steeply with the purchaser’s age: a policy bought at an older age costs considerably more than if it had been purchased when the buyer was 65, and about 6 to 10 times more than if the policy had been purchased at age 50. Unfamiliarity with the concept and uncertainty of the value of long-term care insurance may deter some people from purchasing a policy. A relatively low premium at age 45 may nonetheless seem high for a risk that may not be realized for 40 years. Concerns about the cost of premiums relative to the value of policies may be a factor deterring purchases, especially when premiums for a similar policy for the same individual can vary widely. For example, a 65-year-old in Wisconsin can pay $857 to $2,061 per year for a long-term care insurance policy depending on the carrier, even if the terms are similar. Consumers deserve complete and accurate information about any insurance product that they purchase, and sales of long-term care policies are not likely to increase significantly unless consumers are given adequate and understandable information to assess them. If long-term care insurance is to help address the baby boom generation’s future long-term care needs, individuals must understand what they are buying and what future changes, if any, they may face in their policy’s coverage or premiums. 
While NAIC’s model standards have helped address prior deficiencies in the terms of long-term care policies, it is unclear whether these have been sufficient to assure consumers that long-term care products are reliable and that the terms of the products are easily understood and will be fulfilled. Recently, NAIC further amended its models in response to concerns about dramatic premium increases that some long-term care policyholders experienced. The Wisconsin premiums cited above are annual premiums for individual basic long-term care insurance policies marketed in Wisconsin as of October 1999, with a $100 per-day nursing home benefit, $50 per-day home health benefit, lifetime benefits, a 90- or 100-day elimination period, and no optional benefits. In 1993, we reported on a number of problems in the long-term care insurance market, including those related to disclosure standards, inflation protection options, clear and uniform definitions of services, eligibility criteria, grievance procedures, nonforfeiture of benefits, options for upgrading coverage, and sales commission structures that potentially created incentives for marketing abuses. (See Health Care Reform: Supplemental and Long-Term Care Insurance, GAO/T-HRD-94-58, Nov. 9, 1993.) Some policyholders have experienced premium increases of more than 700 percent—even though they believed that their premiums would not increase as long as they held their policies. In states that adopt the new NAIC model amendments, insurers will have to provide written information to prospective purchasers explaining that a policy’s premium may increase in the future, why premium increases may occur, what options a policyholder has in the event of an increase, and the 10-year rate history for their policies. In states that adopt the model, consumers will also have to specifically acknowledge that they understand their policy’s premiums may increase, and insurers must explain any contingent benefit available to policyholders who let their policies lapse because of a substantial rate increase. 
Additionally, NAIC adopted amendments to better ensure that long-term care insurers price their policy premiums to be sufficient over the lifetime of the policy, so as to minimize the need for future premium increases. As a further consumer protection, these amendments require insurers to reimburse policyholders when any rate increase is found to be unnecessary and allow state insurance commissioners to ban an insurer from the long-term care market if the insurer has a pattern of offering initial policy purchasers inadequate premium rates. For the new NAIC model provisions to become effective, states must choose to adopt them as part of their statutes or regulations. An NAIC official reported that some states have begun considering legislation or regulations reflecting the revised NAIC models but that states will vary in whether and how quickly they adopt particular portions. The aging of the baby boomers will greatly increase the nation’s elderly population in the next 3 decades and thus increase the population who need long-term care services. The need for these services will become more critical after 2030, when this population reaches age 85 and older, the age group with the greatest need for long-term care. Recent legislation authorizing a new federal employees’ long-term care insurance offering and proposals that would establish new tax subsidies for the purchase of private long-term care insurance aim to increase the role private insurance plays in financing long-term care. If private insurance is to play a larger role in financing future generations’ long-term care needs, consumers will also need greater confidence in long-term care insurance and access to affordable, reliable products. Chairman Grassley and Ranking Member Baucus, this concludes my prepared statement. I will be happy to answer any questions that you or Members of the Committee have at this time. 
For more information regarding this testimony, please contact Kathryn G. Allen at (202) 512-7118 or John Dicken at (202) 512-7043. Opal Winebrenner and Carolyn Yocom also made key contributions to this statement.

The confluence of the aging baby boom generation, longer life expectancies, and evolving options for providing and financing long-term care services will require substantial public and private investment in long-term care and the development of sufficient capacity to serve this growing population. Spending for long-term care was about $134 billion in 1999. Medicaid and Medicare paid for nearly 58 percent of these services, contributing about $59 billion and $18 billion, respectively. Private long-term care insurance was viewed as a possible way to reduce catastrophic financial risk for the elderly needing long-term care and to relieve some of the financing burden now shouldered by public long-term care programs. Yet private insurance represents only about 10 percent of long-term care spending. Questions remain about the affordability of policies and the value of the coverage relative to the premiums charged. Although many states have adopted standards for long-term care policies, it is uncertain whether these standards have bolstered consumer confidence in the reliability of long-term care insurance. If long-term care insurance is to have a more significant role in addressing the baby boom generation's upcoming chronic health care needs, consumers must view the policies being offered as reliable, affordable products with benefits and limitations that are easy to understand.
For the period 1991 through 1996, 49 countries were assessed about $688 million. The United States’ share was about $174 million, or about 25 percent of the total. Appendix I shows the assessments, payments, and outstanding balances for all 49 countries as of January 31, 1997. For the 3-year period 1997 through 1999, the Parties to the Protocol approved $466 million in new assessments. The United States’ share of the new assessment is about $39 million for each of these 3 years. For 1997, the list of contributors has been reduced to 34 countries primarily because countries that had not ratified the 1990 amendments (which established the Fund) were deleted from the list of contributors. There is substantial variability among the countries in paying their assessments. As of January 31, 1997, 22 countries had fully paid their assessments for 1991 through 1996, 13 countries (including the United States) had paid most of their assessments, 3 had paid less than half, and 11 had not paid anything. The countries that had not paid any part of their assessments were primarily those of the former Soviet Union. These countries, because of their economic position, are referred to as “economies in transition.” While not eligible for funding support from the Multilateral Fund, they are eligible to receive assistance for their efforts to phase out ozone-depleting substances from the Global Environment Facility. Cumulatively, as of January 31, 1997, of the $688 million assessed for the period 1991 through 1996, about $550 million, or about 80 percent, had been paid. Countries can pay their assessed contributions with cash or promissory notes, or by providing bilateral assistance to recipient countries. In 1993, the Parties to the Montreal Protocol agreed that promissory notes would be acceptable as payment of a country’s contribution to the Fund. 
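As a quick arithmetic check, the contribution figures above can be restated in a short sketch; all dollar amounts come from the text, and the helper function is illustrative:

```python
# Quick check of the reported contribution figures.
def share(part, whole):
    """Return part as a percentage of whole."""
    return 100 * part / whole

total_assessed = 688_000_000  # 1991-1996 assessments for all 49 countries
us_assessed = 174_000_000     # U.S. share of those assessments
total_paid = 550_000_000      # amount paid as of January 31, 1997

print(round(share(us_assessed, total_assessed)), "percent")  # U.S. share, ~25
print(round(share(total_paid, total_assessed)), "percent")   # portion paid, ~80
```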
Since that time, five countries have used promissory notes to pay at least part of their assessments, deferring the actual outlay of cash. In essence, these contributors benefit from the time value of money between the date a note is provided to the Fund and the date it is cashed. While the Fund can cash promissory notes at any time to meet its needs, the general practice is to cash the notes in six equal payments over a 3-year period. We estimate that based on current U.S. Treasury borrowing rates and the Fund’s general practice for cashing the notes over time, the U.S. government could save between $2 million and $3 million on each of its annual contributions to the Multilateral Fund by using the promissory notes. Although the U.S. government makes its payments to the Fund in cash, the Department of the Treasury currently administers over 10 international accounts using letters of credit, which also defer payments, similar to promissory notes. Since the establishment of the Multilateral Fund in 1991 through May 1997, the Fund’s Executive Committee has approved a total of 1,810 projects in more than 100 countries and allocated about $570 million to fund these projects. The geographical distribution of projects supported by the Multilateral Fund shows that the Asia and Pacific region has both the largest number of approved projects, 826, and the greatest share of approved funding—over $330 million or almost 60 percent of the total funding approved. China has been the Fund’s largest recipient with almost $150 million, or 26 percent, of all approved funding. The dominant share of projects and approved funding represented by the Asia and Pacific region is explained by the region’s rapidly expanding economies and population and by its current consumption and enormous potential for the use of ozone-depleting substances. 
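The potential savings can be sketched as a simple present-value calculation. The roughly $39 million annual U.S. assessment and the practice of cashing notes in six equal payments over 3 years come from the text; the 4.5 percent Treasury borrowing rate is an illustrative assumption, not a figure from the analysis:

```python
# Present-value sketch of paying with a promissory note rather than cash.
contribution = 39_000_000     # annual U.S. assessment (from the text)
rate = 0.045                  # assumed annual Treasury borrowing rate
payments = 6                  # six equal payments spread over 3 years

payment = contribution / payments
# Discount each semiannual payment back to the date the note is provided.
present_value = sum(payment / (1 + rate) ** (0.5 * k)
                    for k in range(1, payments + 1))
savings = contribution - present_value
print(f"Estimated interest savings: ${savings:,.0f}")
```

Under this assumed rate the savings fall between $2 million and $3 million, consistent with the estimate in the text; a higher rate or a longer cashing schedule would push the figure up.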
Six of the 10 top recipients of aid from the Multilateral Fund are countries in the Asia and Pacific region, which together have been allocated nearly 50 percent of the total approved funding. The Latin American and Caribbean region ranks next with 473 projects having total approved funding of almost $130 million or nearly 23 percent of all funding approved to date, followed by the Africa region with 320 projects and approved funding of about $67 million. Europe has the smallest number of projects—56—with approved funding of about $21 million. This reflects the fact that relatively few European countries are eligible for assistance. The Fund also supports a category of projects, known as global projects, that transcend regional boundaries. As of May 31, 1997, the Fund had approved 135 global projects with a total allocation of almost $22 million. The table below shows funding for the top 10 recipient countries; appendix II provides a breakdown of approved funding by regions and by types of projects. There are seven broad purposes or categories for which the projects have been funded: (1) country program preparation, (2) institutional strengthening, (3) technical assistance, (4) training, (5) demonstration projects, (6) project preparation, and (7) investment projects. Preparation of a country program is generally the starting point for a country that is seeking the Fund’s assistance in converting to non-ozone-depleting technologies. A country program sets out a country’s strategy for phasing out ozone-depleting substances. It provides basic information on the use of ozone-depleting substances, the institutional framework for controlling them, relevant industry and government involvement, an action plan with time frames and budgets, and a list of specific projects requiring financial support from the Multilateral Fund. To date more than $7 million has been approved for the preparation of 108 country programs. 
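The regional funding shares above can be double-checked with a minimal sketch. The dollar figures are the rounded amounts from the text, so the computed shares land slightly under the narrative's "almost 60 percent":

```python
# Regional shares of the roughly $570 million approved to date.
total_approved = 570_000_000
regions = {
    "Asia and Pacific": 330_000_000,
    "Latin America and Caribbean": 130_000_000,
    "Africa": 67_000_000,
    "Europe": 21_000_000,
}
for name, funding in regions.items():
    print(f"{name}: {round(100 * funding / total_approved)} percent")
```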
Institutional strengthening projects build a country’s capacity to phase out ozone-depleting substances. The establishment of a national ozone unit within the country’s national government is frequently a key element of this activity with the goal of satisfying the basic need for institutional, legal, and regulatory capacity to support the implementation of national phaseout plans. As of the most recent meeting of the Fund’s Executive Committee (May 1997), a total of 97 institutional strengthening projects had been approved in 81 recipient countries with a total approved funding of slightly more than $15 million. Technical assistance, training, and demonstration projects constitute vehicles for transferring state-of-the-art technologies to recipient countries to help them meet their phaseout obligations under the Montreal Protocol. As of May 31, 1997, 394 demonstration, technical assistance, and training projects had been approved by the Fund’s Executive Committee, with a combined approved funding level of over $60 million. Project preparation, which involves developing projects for conversion from ozone-depleting to ozone-benign technologies, is an important prerequisite for investment projects. As of May 31, 1997, the Multilateral Fund had approved a total of 383 project preparation activities with a total approved funding level of over $30 million. Project preparation activities typically result in the development of a group of investment project proposals. Investment projects are the largest category of projects and the most important from the standpoint of protecting the stratospheric ozone layer. 
These projects, which account for slightly over 80 percent of total approved funding, assist business entities in recipient countries in converting domestic and commercial refrigeration, manufacturing, firefighting, and other economic sectors from processes that use ozone-depleting substances to technologies and products that are not ozone-depleting or are at least significantly less so. A typical investment project in the refrigeration sector, for example, may involve eliminating CFCs in the manufacture of domestic refrigerators and freezers. It may also include conversion to CFC-free technology in the manufacture (or “blowing”) of the polyurethane foam used in insulating the refrigerators and freezers. To date, 813 investment projects, with funding allocations of about $458 million, have been approved in 55 countries in all major regions of the world. When fully implemented, projects approved to date are expected to phase out the annual use of almost 84,000 metric tons of ozone-depleting potential. This is about 40 percent of the estimated ODP-weighted consumption of ozone-depleting substances in Article 5 countries. Appendix IV provides a breakdown of ODP metric tons to be phased out by type of project and by implementing agency as of May 31, 1997. As of December 31, 1996, however, only 20,487 ODP metric tons had actually been phased out. This difference is attributable to two factors. First, because of time lags between project approvals and the start of project implementation, the number of projects actually in progress or completed at a particular point in time is significantly smaller than the total number of projects approved by the Multilateral Fund’s Executive Committee. Second, projects that are under way, particularly investment projects, often take longer to complete than originally projected. As of December 31, 1996, only 688, or 45 percent, of the 1,537 projects approved since 1991 had been completed.
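The two phaseout figures above also imply a total consumption level that the text does not state directly; the sketch below makes that inference explicit (the reported figures are from the text, the implied total is an inference):

```python
# Figures from the text (ODP = ozone-depleting potential, metric tons).
investment_funding = 458_000_000     # approved for 813 investment projects
total_approved_funding = 570_000_000
approved_phaseout = 84_000           # annual ODP tons once projects are done
share_of_consumption = 0.40          # phaseout as share of Article 5 consumption

# Investment projects' share of all approved funding (~80 percent).
print(round(100 * investment_funding / total_approved_funding))

# Implied ODP-weighted consumption in Article 5 countries; an inference
# from the two reported figures, not a number stated in the text.
print(round(approved_phaseout / share_of_consumption))
```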
Of the approved funding of $485 million for these projects, only $197 million (41 percent) had been disbursed, leaving an undisbursed balance of about $288 million for completion of those projects. Planned spending commitments by the four implementing agencies in 1997 total about $128 million, meaning that less than half of the undisbursed funds approved for projects through 1996 is expected to be disbursed by the end of 1997. Some of the reasons cited for delays in project implementation and completion include the following:

- Recipients attempted to renegotiate projects after Executive Committee approval.
- The business entity needed more time to secure financing from counterparts.
- The grant recipients decided to change project specifications.
- The business entity chose to delay conversion until competitors’ projects were approved by the Executive Committee.
- The business entity wanted government regulations passed before allowing implementation to proceed.
- The bidding process resulted in higher costs than budgeted for the project.

Delays in the start of projects and slower than anticipated progress once they have begun have concerned the Multilateral Fund’s Executive Committee. In May 1997, the Executive Committee required the implementing agencies to submit reports, by the next meeting of the Executive Committee, for projects (1) where no disbursement has occurred for 18 months after project approval and (2) that remained uncompleted 12 months after the prescribed completion date. Information from these reports will be used to develop guidelines to ensure that the project preparation process includes measures to prevent delays in implementation or completion in the future. The Executive Committee also decided that projects that have had their funding requests significantly reduced during the review process could not proceed until the intended recipients confirm that they have additional funding available to allow for prompt project implementation.
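The completion and disbursement arithmetic above can be verified with a short sketch (all figures are from the text):

```python
# Completion and disbursement status as of December 31, 1996.
projects_approved, projects_completed = 1_537, 688
approved_funds, disbursed = 485_000_000, 197_000_000

undisbursed = approved_funds - disbursed
print(round(100 * projects_completed / projects_approved))  # ~45 percent complete
print(round(100 * disbursed / approved_funds))              # ~41 percent disbursed
print(f"${undisbursed:,}")                                  # undisbursed balance
```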
Summary data on project type, approved funding, and project status are detailed in appendix III. The Multilateral Fund has a number of mechanisms in place that are designed to ensure that funds are properly accounted for and that the amount of funds allocated to specific projects is reviewed and verified. When it was established in 1991, the Fund accepted the accounting and auditing mechanisms of the implementing agencies and relied primarily on the implementing agencies’ long-established institutional procedures. According to a 1995 report by COWIconsult, an international consultant, the implementing agencies have elaborate procedures, long experience in accounting for financial resources used in developing countries, and well-established auditing mechanisms. The report stated that the study team found no evidence that the agencies’ procedures were less elaborate, implementation less careful, or auditing less thorough for activities financed by the Multilateral Fund. UNEP serves as the Fund’s treasurer. As a part of its agreement with the Executive Committee, UNEP is responsible for obtaining and distributing contributions, entering into agreements with the implementing agencies, and submitting the Fund’s accounts to the Executive Committee for each calendar year. UNEP receives certified and/or audited reports from the implementing agencies, which provide aggregate expenditure or disbursement figures. These figures are reported annually to the Executive Committee. Because the Multilateral Fund is considered to be an integral part of UNEP’s and the United Nations’ accounts, its audits are the sole responsibility of the Internal and External Audit of the United Nations. 
The report of the United Nations Board of Auditors for the 2-year period ending December 31, 1995, reported findings and made recommendations related to UNEP’s program and financial management, procurement, and other areas, but the overall results revealed no material weaknesses or errors considered material to the accuracy, completeness, or validity of the financial statements as a whole. The auditors rendered an opinion that the financial statements presented fairly UNEP’s financial position and the results of its operations for that financial period; the statements were prepared in accordance with the stated accounting policies; and transactions were in accordance with the financial regulations and legislative authority. In addition, the implementing agencies are required to provide the Fund’s Executive Committee with an annual progress report on the implementation of approved work programs and activities related to country programs and projects. These reports include information on project approvals and disbursements; updates on project completions; global and regional project highlights; performance indicators; status of agreements and project preparation, by country; and administrative issues (operational, policy, financial, and others). Finally, each recipient country is required to report annually to the Executive Committee on the progress of the implementation of its country program. With regard to individual projects, the Multilateral Fund has developed a multilevel review process: The implementing agencies review project proposals to ensure that they meet eligibility criteria and arrange for external technical and cost reviews of investment projects before submitting them to the Fund Secretariat. The Secretariat determines whether the proposals meet eligibility and policy requirements and checks the proposed costs against data on past costs and suppliers’ estimates. 
It may also consult with outside experts on technical and cost issues before making a recommendation to the Executive Committee. The Executive Committee’s Project Review Subcommittee examines the recommended proposals and reports to the full committee. The Executive Committee’s individual members consider the Subcommittee’s report and may also assess the projects independently, sometimes requesting a fresh round of external technical and cost reviews before the Committee makes its final funding decisions. The project review process frequently results in significant alterations to projects, cost reductions, and the outright rejection of some projects. These alterations may be agreed to by the Secretariat and the implementing agency before a proposal is submitted to the Executive Committee, or they may occur during the meetings of the Project Review Subcommittee or the Executive Committee itself. Most often, the dialogue on these issues occurs between the implementing agencies and the Fund Secretariat. The 1995 COWIconsult report concluded that the project review process introduces a strong element of discipline into the project development and approval procedure. COWIconsult reviewed a sample of 23 projects submitted to the Fund Secretariat and found that the Secretariat’s views appeared to carry great weight with the Executive Committee in that the review process resulted in cost reductions in 13 of the 23 projects, with an overall average reduction of 20 percent. Moreover, in 6 of the 23 projects reviewed there was a very significant difference between the amount of support originally requested and the final request reviewed by the Secretariat and the Executive Committee. Overall, the study concluded that the review process results in significant but not excessive reductions in the approved costs of projects supported by the Fund.
We also reviewed a sample of projects to determine the current effect of the review process on the cost of projects, and as a result, on the cost-effectiveness of the Fund’s expenditures for them. We selected a sample of 10 projects approved by the Executive Committee in 1996 that comprised 7 investment projects, 2 technical assistance projects, and 1 institutional strengthening project. Each of the four implementing agencies was represented by two projects in the sample, which also included two bilateral projects. For seven of the projects, we found reductions ranging from 9 to almost 70 percent resulting from the review by the Secretariat staff, whose recommendations were generally endorsed by the Executive Committee. The overall average reduction for these seven projects amounted to 48 percent. However, this average is heavily influenced by major reductions on two very large investment projects made between the original submissions and the final approvals. Even the one bilateral investment project we reviewed had experienced a 23-percent reduction as a result of the review process. The three remaining projects were approved by the Executive Committee as originally submitted. An important consequence of the reduction in approved funding for these projects was that their cost-effectiveness, expressed in terms of dollar cost per kilogram of ozone-depleting substances eliminated, improved significantly. In the cases of the two projects with the largest percentage reductions in cost, the cost-effectiveness ratios went from $8.91/kg to $2.80/kg and from $6.95/kg to $4.12/kg, respectively. A third project had a similarly impressive improvement in cost-effectiveness, going from $6.42/kg to $3.97/kg. In addition to the control and review mechanisms and practices already in place, the Executive Committee has realized that it could strengthen its oversight by requiring project completion reports and developing a project monitoring and evaluation system. 
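The reported cost-effectiveness ratios imply the following percentage reductions in cost per kilogram; the before-and-after ratios are taken from the text, and the helper function is illustrative:

```python
# Improvement in cost-effectiveness (dollars per kilogram of
# ozone-depleting substances eliminated) for the sampled projects.
def cost_reduction(before, after):
    """Percent reduction in cost per kilogram, holding tonnage constant."""
    return 100 * (1 - after / before)

print(round(cost_reduction(8.91, 2.80)))  # largest reduction, ~69 percent
print(round(cost_reduction(6.95, 4.12)))  # second project, ~41 percent
print(round(cost_reduction(6.42, 3.97)))  # third project, ~38 percent
```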
The Multilateral Fund is currently developing a uniform format for project completion reports that is expected to be submitted to the Executive Committee for its review before the end of 1997. In addition, a Subcommittee on Monitoring, Evaluation, and Finance was recently established to address the need for a monitoring and evaluation system. The Subcommittee developed a monitoring and evaluation program to be implemented over the next year. When fully implemented, the system may help the Executive Committee enhance the Fund’s effectiveness by drawing on lessons learned from completed projects. In addition to funds allocated for projects of various types, the Multilateral Fund pays the implementing agencies for administrative support costs associated with project implementation. These costs include, among other things, staff, office space, office equipment, and supplies; accounting, audit, and procurement services; management backup; and travel needed to properly oversee project implementation. In the case of UNDP, UNEP, and UNIDO, payment for administrative support has, by agreement with the Fund, been fixed at a flat 13-percent of the amount approved by the Executive Committee for projects’ implementation. In the case of the World Bank, up until mid-1995, reimbursement for administrative support was based on actual expenditures reported by the Bank. But, from that time forward, the Bank also has been compensated for administrative support on the 13-percent flat fee basis. This level of administrative support costs is generally consistent with prevailing administrative cost allowances within the United Nations system. Nevertheless, in 1994 some members of the Executive Committee began to question the continued appropriateness of a uniform 13-percent fee paid to the implementing agencies. 
They expressed the view that with the initial start-up phase of operations completed and with experience gained in implementing a wide assortment of projects and activities, the continued payment of administrative support at this level could result in unnecessarily high costs to the Multilateral Fund. At its March 1994 meeting, the Executive Committee requested that the Secretariat perform an analysis of each implementing agency’s administrative costs. The Secretariat contracted with a consultant to perform this study; the consultant reported in September 1994 that the administrative cost levels were not excessive. In fact, the consultant’s report concluded that the flat 13-percent administrative support fee had been insufficient to cover all of the costs that the implementing agencies might legitimately have charged the Fund and that, as a result, the agencies were, in effect, subsidizing part of the cost of projects. However, the report recognized that over time the Fund’s administrative costs could be expected to decline as a percentage of overall project costs as a result of getting past the high cost of start-up, greater experience and resulting increased efficiency, economies of scale, and other factors. In November 1996, the Parties to the Montreal Protocol directed the Executive Committee to work toward the goal, over the next 3 years, of reducing agency support costs to an average of below 10 percent to make more funds available for other activities. In February 1997, the Executive Committee decided that an independent consultant should be recruited to work with the Secretariat and implementing agencies to identify options and approaches for reducing the overall level of administrative costs, focusing on revising the current uniform, fee-based system. The Chief Officer of the Fund’s Secretariat informed us that a consultant has recently been selected to carry out this work and is expected to submit a report in September 1997.
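The effect of the targeted fee reduction can be illustrated with a minimal sketch; the 13 percent current fee and the below-10-percent goal are from the text, while the $10 million project funding amount is a hypothetical figure chosen for illustration:

```python
# Administrative support fee paid to an implementing agency on top of
# approved project funding; the project amount below is hypothetical.
project_funding = 10_000_000
current_fee = 0.13 * project_funding   # current flat 13-percent fee
target_fee = 0.10 * project_funding    # the Parties' below-10-percent goal

freed_for_other_activities = current_fee - target_fee
print(f"${freed_for_other_activities:,.0f}")  # funds freed per project
```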
He said the Secretariat and the Executive Committee will be working over the next 3 years, in consultation with the implementing agencies, to reduce administrative support costs to an overall average of less than 10 percent. Because of the potential for considerable interest savings, we recommend that the Administrator of the Environmental Protection Agency and the Secretary of State implement an alternative payment method, such as promissory notes or letters of credit, for the U.S. contribution to the Multilateral Fund and seek the assistance of the Department of the Treasury in implementing this recommendation. In commenting on a draft of this testimony, the Environmental Protection Agency and the Department of State agreed in concept with our recommendation and are exploring options for using an alternative payment method. Mr. Chairman, this concludes my prepared remarks. At this point, I would be glad to respond to any questions you or Members of the Subcommittee may have. (Note: Country program preparation, institutional strengthening, and project preparation projects do not directly contribute to the phaseout of ozone-depleting substances.)
GAO discussed its work on the Montreal Protocol Multilateral Fund, focusing on: (1) principal contributors to the Fund; (2) principal recipients of disbursements made from the Fund; (3) the purposes for which disbursements were made; (4) what has been accomplished with these disbursements; and (5) the controls and accountability mechanisms in place to ensure proper use of money disbursed from the Fund. GAO noted that: (1) the United States is the largest contributor to the Multilateral Fund, accounting for about 25 percent of the contributions; (2) for 1997 through 1999, the United States is expected to contribute about $39 million per year; (3) GAO estimates that the United States could avoid interest expenses of between $2 million and $3 million associated with its annual contributions by using an alternative payment method; (4) from its establishment in 1991 through May 1997, the Multilateral Fund has allocated about $570 million for projects in more than 100 Article 5 countries; (5) China has been the largest recipient, accounting for almost $150 million or 26 percent of the total; (6) there are seven broad purposes for which projects have been funded, but over 80 percent of the funds have been for investment projects, which help businesses to convert their operations from the use of ozone-depleting substances and to cease the production of goods containing them; (7) projects approved to date are projected to phase out the annual use of about 84,000 ozone-depleting potential-weighted metric tons of ozone-depleting substances, or about 40 percent of the estimated consumption of ozone-depleting substances, in Article 5 countries; (8) the Multilateral Fund has a number of mechanisms in place that are designed to ensure that funds are properly accounted for and
that the amounts of funds allocated to specific projects are reviewed and verified; (9) the Multilateral Fund currently pays a 13-percent administrative fee to the implementing agencies for their costs associated with project implementation; and (10) however, efforts are under way to evaluate the appropriateness of the fees, with the goal of reducing the support costs to about 10 percent over the next 3 years.
FDA is responsible for regulating the marketing of medical devices to provide reasonable assurance of their safety and effectiveness for human use. As part of its regulatory responsibility, FDA reviews applications from manufacturers that wish to market their medical devices in the United States. Prior to marketing new devices, manufacturers must apply for FDA marketing approval through either the premarket notification (also referred to as 510(k)) process, or the premarket approval (PMA) process, a more rigorous regulatory review. New devices are subject to PMA, unless they are substantially equivalent to an already marketed device, in which case they need to comply only with the premarket notification requirements. Applications for premarket notification are generally reviewed more quickly than applications for PMA and do not usually require clinical data. Medical devices are regulated using a three-part classification system and are subject to different levels of control based upon their classifications as class I, II, or III devices. Class I devices are generally those with the lowest risk for use by humans and require the least regulatory oversight. These devices are subject to general controls, which include standards for good manufacturing practices, and requirements related to manufacturer registration, maintenance of records, and reporting. Examples of class I devices are patient examination gloves, canes, and crutches. Class II devices are generally of higher risk and are also subject to general controls; however, FDA can establish special controls for these devices, such as development and dissemination of guidance documents, mandatory performance standards, and postmarket surveillance. Examples of class II devices are blood glucose test systems and infusion pumps. Class III devices typically pose the greatest risk and thus have the highest level of regulation. 
This classification includes most devices that support or sustain human life, are of substantial importance in preventing impairment of human health, or present a potential unreasonable risk of illness or injury. Because general and special controls may not be sufficient to ensure safety and effectiveness, these devices, with limited exceptions, must obtain PMA. To obtain PMA, the manufacturer must provide FDA with sufficient valid scientific evidence providing reasonable assurance that the device is safe and effective for its intended use. Once approved, changes to the device affecting safety or effectiveness require the submission and approval of a supplement to its PMA. Examples of class III devices include automatic external defibrillators and implantable infusion pumps used to administer medication. Some class III devices are provided as part of a hospital visit; Medicare pays for these devices through the hospital inpatient or outpatient prospective payment systems. Five categories of class III devices, however, can be provided in physicians’ offices or prescribed by physicians for use in the home; Medicare pays for these devices through the DME fee schedule. In 2004, Medicare payments for class III devices under the DME fee schedule were $53.2 million, which represented less than 1 percent of total DME payments. The Medicare DME fee schedule payment rate for a device is based on either the manufacturer’s retail price or historic reasonable Medicare charges, which CMS considers equivalent measures. MMA provided for a 0 percent annual update for most Medicare DME fee schedule payment rates from 2004 through 2008. However, under MMA, class III devices were excluded from the 0 percent update and received payment updates equal to the annual percentage increase in the CPI-U in 2004, 2005, and 2006. 
For these devices, MMA provides, in 2007, for a payment update to be determined by the Secretary of Health and Human Services and, in 2008, for a payment update equal to the annual percentage increase in the CPI-U. We found that, with limited exceptions, manufacturers of class III devices have higher premarketing costs than do manufacturers of class II devices. Manufacturers of class III devices pay higher FDA user fees for review of their devices, because of the more complex FDA review required prior to marketing, than do manufacturers of class II devices. According to FDA data, compared to class II manufacturers, class III manufacturers have a longer period before approval during the FDA application process, which lengthens the time before they can market their devices and begin receiving revenue. FDA requires that manufacturers submit clinical data for class III devices, but only occasionally requires the same for class II devices. In addition, class III manufacturers stated that they incur higher premarketing costs for other research and development than do manufacturers of class II devices. However, class II manufacturers also stated that they incur substantial premarketing costs related to other research and development. Because we did not evaluate proprietary data on other premarketing research and development costs, we could not determine whether a difference in other premarketing research and development costs exists between class III and class II manufacturers. Manufacturers of class III devices pay higher FDA user fees for review of their devices, because of the more complex FDA review required prior to marketing, than do manufacturers of class II devices. Specifically, manufacturers of class III devices subject to this review pay the FDA user fee for PMA, which in 2005 was $239,237 for each PMA.
Most PMA supplements, which must be filed when a manufacturer makes a change to a class III device that affects its safety or effectiveness, also require payment of a fee, which ranged from $6,546 to $239,237. Manufacturers of class II devices pay the FDA user fee for each premarket notification, which in 2005 was $3,502. When a manufacturer makes a change to a class II device, a new premarket notification application must be filed; there is no supplement process for these devices. Manufacturers of class III devices have a longer period before approval during the FDA application process, which they stated delays the marketing of their devices and the receipt of revenue. According to ODE’s 2004 Annual Report, in 2004, the average time for PMA review was 503 days while the average time for premarket notification review was 100 days. These average times include the total time a PMA or premarket notification was under review by FDA and the time the manufacturer used in responding to any FDA requests for additional information. FDA requires that class III manufacturers submit clinical data, for which manufacturers incur costs. FDA only occasionally requires the submission of clinical data for class II devices. Specifically, FDA requires manufacturers of class III devices to submit clinical data as part of the PMA process to provide reasonable assurance that the devices are safe and effective for their intended uses. During its review of a device’s PMA application, FDA may require that the manufacturer provide additional information, which may require submission of additional clinical data. Manufacturers of class III devices stated that to collect clinical data, they conducted costly animal studies, human preclinical studies, and human clinical trials. Manufacturers of class II devices must satisfy premarket notification requirements; that is, they must submit documentation that a device is substantially equivalent to a legally marketed device. 
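The fee and review-time differences described above can be summarized with simple arithmetic. A minimal sketch, using only the 2005 user fees and 2004 average review times cited in this report (the variable names and dictionary structure are illustrative):

```python
# User fees (2005) and average FDA review times (2004) cited in this report.
fees_2005 = {"PMA (class III)": 239_237, "510(k) (class II)": 3_502}
review_days_2004 = {"PMA": 503, "510(k)": 100}

# The PMA user fee is roughly 68 times the premarket notification fee.
fee_ratio = fees_2005["PMA (class III)"] / fees_2005["510(k) (class II)"]

# PMA review averaged 403 more days than premarket notification review.
extra_days = review_days_2004["PMA"] - review_days_2004["510(k)"]

print(round(fee_ratio), extra_days)  # 68 403
```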
An FDA official stated that manufacturers of class II devices may be required to provide clinical data. They may be required to provide these data, for example, to demonstrate that modifications they have made to a device would not significantly affect its safety or effectiveness, or if a device is to be marketed for a new or different indication. According to FDA, 10 to 15 percent of premarket notification applications include clinical data. Manufacturers of class III devices we spoke with stated that in addition to collecting clinical data, they incur higher premarketing costs related to other research and development, such as labor costs and manufacturing supplies related to designing a device, than do manufacturers of other classes of devices. They stated that class III devices are highly innovative, complex products that require costly premarketing research and development to produce. One class III manufacturer we spoke with stated that approximately 10 percent of its revenue between 2002 and 2005 was invested in premarketing research and development. Another class III manufacturer stated that approximately 4 percent of its operating budget is spent on premarketing research and development. However, manufacturers of class II devices we spoke with also stated that they incur substantial premarketing costs related to research and development. Specifically, we spoke with a manufacturer of an insulin pump and two manufacturers of continuous positive airway pressure devices, each of which stated it incurs substantial research and development costs. One class II manufacturer stated that 10 to 15 percent of a device’s total cost was attributable to research and development. Another class II manufacturer stated that approximately 7 to 10 percent of its revenue is spent on research and development. 
Because we did not evaluate proprietary data for other premarketing research and development costs, we were unable to determine whether a difference in other premarketing research and development costs exists between class III and class II manufacturers. The CMS rate-setting methodology for Medicare’s DME fee schedule accounts for the premarketing costs of class II and class III devices in a consistent manner. The fee schedule payment rate for an item of DME, regardless of device classification, is based on either historic Medicare charges or the manufacturer’s retail price, which CMS has determined are equivalent measures. Manufacturers of both class II and class III devices we spoke with stated that when setting their retail prices, they take into account all premarketing costs necessary to bring the device to market. CMS has two DME fee schedule rate-setting methodologies: one method is for items that belong to a payment category covered by Medicare at the time the DME fee schedule was implemented in 1989, and one method is for items added to the DME fee schedule after 1989 that are not covered by an existing payment category. Regardless of its classification as a class I, II, or III device, the payment rate for an item of DME covered by Medicare when the DME fee schedule was implemented in 1989 is based on its average reasonable Medicare charge from July 1, 1986, through June 30, 1987, for some items, and July 1, 1986, through December 31, 1986, for other items (both referred to as the base year). Historically, these payment rates have been updated by a uniform, statutorily set, percentage, which is usually based on the annual percentage increase in the CPI-U. Generally, for items added to the fee schedule after 1989 that are not covered by an existing payment category, CMS does not have historic Medicare charges upon which to base the payment rate. 
CMS has determined that in these cases, the manufacturer’s retail price is a sufficient substitute to calculate the fee schedule payment amount, and CMS considers the payment amount that results from this methodology to be equivalent to historic reasonable Medicare charges. To determine the payment rate, CMS obtains the manufacturer’s retail price for the new item and uses a formula based on the cumulative annual percentage increase in the CPI-U to deflate the price to what it would have been in the base year. Using a formula based on the statutory DME fee schedule payment updates since the base year, CMS then inflates the base year price to the year in which the item was added to the fee schedule. In succeeding years, the item is updated by the applicable DME fee schedule update. The cumulative updates applied to DME are lower than the corresponding CPI-U increases because, in certain years, the statutory update was less than the CPI-U increase. Therefore, the payment rate of a device is generally lower than its retail price. Manufacturers of class III devices we spoke with, whose devices accounted for over 96 percent of class III DME payments in 2004, stated that when setting their retail prices, they take into account the premarketing costs of complying with federal agencies’ requirements, including the costs of collecting clinical data, and the costs of research and development. Manufacturers of class II devices similarly stated that they take into account the premarketing costs of complying with federal agencies’ requirements and of research and development, including any clinical data they may be required to collect. From 2004 through 2006, MMA provided for a payment update to the DME fee schedule for class III devices equal to the annual percentage increase in the CPI-U. 
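The deflate-then-inflate calculation described above can be sketched as follows. This is a simplified illustration of the method, not CMS's actual computation: the retail price, CPI-U increases, and statutory updates below are assumed figures, chosen only to show why a 0 percent statutory update year leaves the payment rate below the retail price.

```python
# Illustrative sketch of CMS's method for items added to the DME fee
# schedule after 1989: deflate the retail price to the base year using
# cumulative CPI-U increases, then inflate it using the statutory updates.
def fee_schedule_rate(retail_price, cpi_u_increases, statutory_updates):
    price = retail_price
    # Deflate back to the base year, one year at a time.
    for cpi in reversed(cpi_u_increases):
        price /= (1 + cpi)
    # Inflate forward using the statutory DME fee schedule updates, which
    # in some years were below the CPI-U increase.
    for update in statutory_updates:
        price *= (1 + update)
    return round(price, 2)

# Hypothetical example: a $1,000 device, three years after the base year,
# with assumed 3 percent CPI-U increases and one 0 percent statutory update.
cpi = [0.03, 0.03, 0.03]
updates = [0.03, 0.00, 0.03]
rate = fee_schedule_rate(1000.00, cpi, updates)
print(rate)  # 970.87 -- below the $1,000 retail price
```

Because one assumed statutory update is 0 percent while the corresponding CPI-U increase is 3 percent, the resulting rate is below the retail price, matching the report's observation that cumulative DME updates lag cumulative CPI-U increases.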
In addition, for these devices, for 2007, MMA provided for a payment update to be determined by the Secretary of Health and Human Services, and for 2008, a payment update equal to the annual percentage increase in the CPI-U. From 2004 through 2008, for class II devices, however, MMA provided for a 0 percent payment update. Manufacturers of class III devices, with limited exceptions, have higher premarketing costs than manufacturers of class II devices, specifically, higher costs related to FDA user fees and submission of clinical data. However, class III and class II manufacturers we spoke with stated they take these premarketing costs, as well as premarketing research and development costs, into account when setting their retail prices. Because the initial payment rates for all classes of devices on the Medicare DME fee schedule are based on these retail prices or an equivalent measure, they account for the costs of class III and similar class II devices in a consistent manner. Distinct updates for two different classes of devices are unwarranted. The Congress should consider establishing a uniform payment update to the DME fee schedule for 2008 for class II and class III devices. We recommend that the Secretary of Health and Human Services establish a uniform payment update to the DME fee schedule for 2007 for class II and class III devices. We received written comments on a draft of this report from HHS (see app. II). We also received oral comments from six external reviewers representing industry organizations. The external reviewers were the Advanced Medical Technology Association (AdvaMed), which represents manufacturers of medical devices, and representatives from five class III device manufacturers—the four manufacturers of osteogenesis stimulators and one manufacturer of both implantable infusion pumps and automatic external defibrillators. 
In commenting on a draft of this report, HHS agreed with our recommendation to establish a uniform payment update to the DME fee schedule for 2007 for class II and class III devices. The agency did not comment on whether the Congress should consider establishing a uniform payment update to the DME fee schedule for 2008 for these devices. HHS agreed with our finding that the costs of class II and class III DME have been factored into the fee schedule amounts for these devices, noting that CMS is committed to effectively and efficiently implementing DME payment rules. It stated that our report did a thorough job of reviewing Medicare payment rules associated with the costs of furnishing class III devices. HHS also provided technical comments, which we incorporated where appropriate. Industry representatives who reviewed a draft of this report did not agree or disagree with our matter for congressional consideration or our recommendation for executive action. They did, however, express concern that we did not recommend a specific update percentage for class III devices. Our report recommends a uniform payment update to the DME fee schedule for class II and class III devices; we believe that this recommendation satisfies the requirement in MMA to make recommendations on the appropriate update percentage for class III devices. Two manufacturers of class III devices commented on the class II device manufacturers we interviewed. One manufacturer stated that it would have been more appropriate to interview manufacturers of class II devices that are not similar to class III devices in terms of complexity. The other manufacturer expressed concern that we did not speak with more class II manufacturers. The four osteogenesis stimulator manufacturers expressed concern that we did not examine costs they incur after they market a device. 
Specifically, several stated that they incur labor costs for services provided to beneficiaries and physicians, research and development costs related to FDA-required surveillance on osteogenesis stimulators’ safety, and research and development costs to improve or find new uses for a device. In addition, one manufacturer stated that it conducts costly research and development for some products that never come to market. Concerning comments about the class II manufacturers we interviewed, as noted in the draft report, our conclusion that manufacturers of class III devices have higher premarketing costs than do manufacturers of class II devices is based on FDA requirements and FDA data that apply to class III and class II manufacturers and not on information obtained from class III and class II manufacturers. According to FDA data, manufacturers of class III devices pay higher FDA user fees and have a longer period of time before approval during the FDA application process. FDA also requires that all class III manufacturers submit clinical data, for which manufacturers incur costs, and only occasionally requires the submission of clinical data for class II devices. Regarding manufacturers’ concerns that we did not examine all of their device-related costs, we included these costs in our analysis, where appropriate. With respect to labor costs for services provided to beneficiaries and physicians, to the extent that suppliers do perform these services, the costs are known prior to marketing the device and can be taken into account when setting their retail price. Two class III manufacturers we spoke with volunteered that they take these labor costs into account when setting retail prices prior to the device going to market.
Regarding research and development costs for FDA-required surveillance, both class III and class II devices may be subject to surveillance on a case-by-case basis; prior to marketing, FDA notifies manufacturers that a device will be subject to postmarket surveillance. Also prior to marketing the device, manufacturers must submit, for FDA approval, a plan to conduct the required surveillance. As noted in the draft report, both class III and class II device manufacturers stated that, when setting their retail prices, they take into account the premarketing costs of complying with federal agencies’ requirements. With respect to research and development costs to improve or find new uses for a device after it is marketed, these are costs incurred to modify an existing device or develop a new device. Costs incurred for a future device are premarketing costs related to that device and not costs related to marketing the existing device. Finally, we did not examine research and development costs for products that do not come to market because these costs do not directly relate to items on the Medicare DME fee schedule; therefore, it would be inappropriate to consider them when reporting on the appropriate update percentage to items on the fee schedule. Industry representatives raised several issues that went beyond the scope of our report. These issues included the appropriateness of the DME rate-setting methodology, payment incentives that may lead providers to use one site of service over another, and incentives for manufacturers to bring new devices to the market. Reviewers also made technical comments, which we incorporated where appropriate. We are sending copies of this report to the Secretary of Health and Human Services, the Administrators of CMS and FDA, and appropriate congressional committees. We will also make copies available to others on request. In addition, the report is available at no charge on GAO’s Web site at http://www.gao.gov.
If you or your staffs have any questions, please contact me at (202) 512-7119 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To address our objectives, we interviewed officials from the Centers for Medicare & Medicaid Services (CMS); the Food and Drug Administration (FDA); two of the four durable medical equipment (DME) regional carriers, the contractors responsible for processing DME claims; and the Statistical Analysis DME Regional Carrier, the contractor that provides data analysis support to CMS. To examine the premarketing costs of devices, we obtained the fees that FDA charges for device review, known as user fees, which are published on the FDA Web site. We also reviewed the FDA device approval process and data on device review times from FDA’s Office of Device Evaluation’s 2004 Annual Report. We interviewed the four manufacturers of osteogenesis stimulators and one manufacturer of both implantable infusion pumps and automatic external defibrillators, all class III medical devices, about the types of costs they incur in producing the devices, including FDA fees for device review and the costs of research and development, both for any clinical data the manufacturer is required to submit and for other research and development costs, such as labor costs related to designing a device. These class III manufacturers’ devices accounted for over 96 percent of class III Medicare DME payments in 2004. We also spoke with a manufacturer of insulin pumps and two manufacturers of continuous positive airway pressure devices, class II devices on the DME fee schedule that CMS identified as similar to the class III devices on the schedule in terms of complexity.
We did not evaluate proprietary data to determine whether a difference in other premarketing research and development costs exists between the two types of manufacturers. To determine how the DME fee schedule accounts for premarketing costs, we interviewed CMS officials and reviewed CMS documents on the DME fee schedule rate-setting methodology. We interviewed representatives from the Advanced Medical Technology Association; the American Academy of Orthopedic Surgeons; the American Society of Interventional Pain Physicians; and two private insurance companies. We conducted our work from December 2004 through February 2006 in accordance with generally accepted government auditing standards. In addition to the contact named above, Nancy A. Edwards, Assistant Director; Joanna L. Hiatt; and Andrea E. Richardson made key contributions to this report.

Medicare fee schedule payments for durable medical equipment (DME) that the Food and Drug Administration (FDA) regulates as class III devices, those that pose the greatest potential risk, increased by 215 percent from 2001 through 2004. From 2004 through 2006, and for 2008, the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) provided for a payment update for class III DME equal to the increase in the consumer price index for all urban consumers (CPI-U). For 2007, MMA requires the Secretary of Health and Human Services to determine the payment update. MMA also requires that other DME receive a 0 percent update from 2004 through 2008. MMA directed GAO to report on an appropriate payment update for 2007 and 2008 for class III DME. In this report, GAO (1) examined whether class III devices have unique premarketing costs and (2) determined how the fee schedule rate-setting methodology accounts for the premarketing costs of such devices.
GAO found that manufacturers of class III devices, with limited exceptions, have higher premarketing costs than do manufacturers of class II devices that are similar to class III devices. Premarketing costs consist of FDA user fees and research and development costs, both for any clinical data the manufacturer is required to submit and for other research and development costs. Manufacturers of class III devices pay higher FDA user fees, because of the more complex FDA review required prior to marketing, than do manufacturers of class II devices. Specifically, the user fee for class III devices subject to this review in 2005 was $239,237, while the fee for class II devices in 2005 was $3,502. The FDA application and approval process takes longer for class III manufacturers, which lengthens the time it takes before they can market their devices and begin receiving revenue. FDA requires that manufacturers submit clinical data for class III devices, but only occasionally requires the same for class II devices. In interviews with GAO, class III manufacturers stated that they incur higher premarketing costs for other research and development, such as labor costs related to designing a device, compared to manufacturers of class II devices. Class II manufacturers also told GAO that they incur substantial costs related to other research and development. GAO did not evaluate proprietary data to determine whether a difference in other premarketing research and development costs exists between the two types of manufacturers. GAO found that the Medicare DME fee schedule rate-setting methodology accounts for the respective premarketing costs of class II and class III devices in a consistent manner. Regardless of device classification, the Medicare DME fee schedule payment rate for a device is based on either the manufacturer's retail price or historic reasonable Medicare charges, which the Centers for Medicare & Medicaid Services considers equivalent measures. 
In interviews with GAO, manufacturers of class III devices stated that when setting their retail prices, they take into account the premarketing costs of complying with federal regulatory requirements, including the costs of required clinical data collection and other research and development. These manufacturers accounted for over 96 percent of class III DME payments in 2004. Manufacturers of class II devices also stated that they take into account these costs when setting retail prices.
Although the Medicare and Medicaid EHR programs are generally similar, there are some differences related to the types of providers that are permitted to participate, the duration and amount of incentive payments and penalties, and information providers must submit to satisfy the programs’ requirements. The types of providers eligible to participate in the Medicare and Medicaid EHR programs—referred to as permissible providers—differ. See figure 1 below. Beginning in 2011, the first year of the Medicare and Medicaid EHR programs, the programs have provided incentive payments to eligible providers that met program requirements. Beginning in 2015, the Medicare EHR program is generally required to begin applying a penalty for hospitals and professionals that do not meet the Medicare EHR program requirements. Figure 2 below provides information on the years that incentive payments are available and that penalties, if applicable, will be assessed for professionals and hospitals under the Medicare and Medicaid EHR programs. The amount of incentive payment varies depending on the type of provider (professionals or hospitals) and the program in which the provider participates (Medicare EHR program or Medicaid EHR program). For example, in the Medicare EHR program, professionals cannot earn more than $18,000 in incentive payments in their first year, and, over a 5-year period, payments cannot exceed a total of $44,000. In contrast, in the Medicaid EHR program, professionals cannot earn more than $21,250 in incentive payments in the first year and $8,500 during each of 5 subsequent years for a total of $63,750. (See app. II for more information on the amounts of incentive payments available under both programs and how the amounts are calculated.) 
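The Medicaid maximum cited above can be checked with simple arithmetic. This sketch uses only the payment amounts stated in the text:

```python
# Medicaid EHR program maximums for professionals, as stated in the report.
medicaid_first_year = 21_250   # first-year maximum
medicaid_subsequent = 8_500    # maximum in each of 5 subsequent years
medicaid_total = medicaid_first_year + 5 * medicaid_subsequent
print(medicaid_total)  # 63750, the stated $63,750 total

# The Medicare EHR program caps for professionals are stated directly
# rather than derived: $18,000 in the first year, $44,000 over 5 years.
medicare_first_year_cap = 18_000
medicare_total_cap = 44_000
```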
To receive incentive payments from either the Medicare or Medicaid EHR programs, providers must meet eligibility and reporting requirements. To do so, providers report certain information to CMS, the states, or to both—a process referred to as “attestation”—by entering certain information into CMS’s or the states’ EHR program web-based attestation tools. Providers that, based on information submitted to CMS and the states, meet the requirements receive incentive payments. Some of the eligibility and reporting requirements for the Medicare EHR program differ from those in the Medicaid EHR program. To receive Medicare EHR incentive payments in 2011, professionals had to meet three eligibility and three reporting requirements, while hospitals had to meet two eligibility and two reporting requirements. (See table 1.) One noteworthy reporting requirement for 2011 was that providers were required to demonstrate meaningful use of certified EHR technology by collecting and reporting information to CMS on various measures established by CMS. Specifically, in 2011, professionals had to report on a total of 20 meaningful use measures, and hospitals had to report on a total of 19 meaningful use measures. This information had to be collected over 90 consecutive days during 2011. Professionals. Of the 20 meaningful use measures for professionals, 15 are mandatory. Of those 15 mandatory measures, 6 measures allow professionals to claim exemptions—that is, they may report to CMS that those measures are not relevant to their patient populations or clinical practices. One of the mandatory meaningful use measures—“report clinical quality measures to CMS”—requires professionals to report on at least 6 clinical quality measures identified by CMS. Professionals have the flexibility to choose the remaining 5 meaningful use measures from a menu of 10 measures. Hospitals. Of the 19 meaningful use measures hospitals must report, 14 are mandatory.
Of those 14 mandatory measures, 3 measures allow hospitals to claim exemptions. Similar to professionals, to satisfy the mandatory meaningful use measure “report clinical quality measures to CMS,” hospitals must report on 15 clinical quality measures identified by CMS. Hospitals have the flexibility to choose the remaining 5 meaningful use measures from a menu of 10 measures. See appendix III for a listing of the meaningful use measures and clinical quality measures for 2011. In order to meet the definition of meaningful use, eligible professionals and hospitals must report on measures specified by CMS. An exclusion for a nonapplicable measure is permitted if the provider meets certain requirements specified in the regulation (42 C.F.R. § 495.6). In this report we use the term “exemption” to refer to the exclusion of a nonapplicable measure. To receive Medicaid EHR incentive payments during 2011, professionals had to meet seven eligibility requirements, hospitals had to meet six eligibility requirements, and both hospitals and professionals had to meet one reporting requirement. (See table 2.) Compared to the Medicare EHR program, the Medicaid EHR program requirements had two noteworthy differences in 2011. First, providers had to meet a patient volume requirement. This requirement was established to ensure that providers that receive incentive payments from the Medicaid EHR program serve a minimum volume of Medicaid patients, or, for certain professionals, a minimum volume of needy patients. Specifically, professionals must have a Medicaid patient volume of at least 30 percent unless they are pediatricians or practice predominantly in a federally qualified health center or rural health center; hospitals generally must have a Medicaid patient volume of at least 10 percent. Second, providers only had to adopt, implement, or upgrade to a certified EHR system in 2011 and did not have to demonstrate meaningful use during the first year they participate in the Medicaid EHR program.
However, in subsequent years, they must demonstrate meaningful use. The Office of the National Coordinator for Health Information Technology (ONC) funds Regional Extension Centers, which focus their assistance on certain types of providers: individual or group primary care practices with 10 or fewer professionals; public, rural, and critical access hospitals; community health centers and rural health clinics; collaborative networks of small practices; and other settings that predominantly serve medically underserved populations, as defined by each Regional Extension Center. ONC also provides funding for Regional Extension Centers to provide assistance to certain hospitals—critical access and rural hospitals—to ensure that centers’ services are available in those settings. ONC’s overall goal for the Regional Extension Center program is to help 100,000 professionals meet the EHR programs’ requirements for meaningful use by 2014 and to help a total of 1,777 critical access and rural hospitals meet the EHR programs’ requirements for meaningful use by 2014. In its agreement with ONC, each Regional Extension Center established its own goal for the number of providers it would assist to help the program meet its overall goal. CMS and the four states we reviewed are implementing processes to verify whether providers met the Medicare or Medicaid EHR programs’ eligibility and reporting requirements and, therefore, qualified to receive incentive payments in the programs’ first year. Although CMS is taking some steps to improve the processes CMS and states use to verify whether providers have met Medicare and Medicaid EHR program requirements, we found that CMS has additional opportunities to assess and improve these processes. For the first program year, CMS is implementing a combination of pre- and postpayment processes to verify whether providers have met all of the Medicare EHR program eligibility and reporting requirements.
In addition, the four states we reviewed have implemented or plan to implement a combination of pre- and postpayment processes to verify whether providers have met Medicaid EHR program eligibility and reporting requirements. CMS has developed and begun to implement processes to verify whether providers participating in the Medicare EHR program have met all of the program’s eligibility and reporting requirements and thereby qualify to receive incentive payments. In 2011, CMS implemented prepayment processes to verify whether providers have met all three of the Medicare EHR program’s eligibility requirements. These processes consist of automatic checks that are built into CMS’s databases to verify the information submitted by providers when they register for the program. CMS also implemented a process to verify, on a prepayment basis, whether providers have met one of the Medicare EHR program’s reporting requirements—to use a certified EHR system. Specifically, CMS built an automatic check to compare the EHR certification numbers for the systems providers reported using during attestation against a list of EHR systems that have been certified by ONC. In 2012, according to CMS officials, the agency plans to implement additional processes to verify, on a postpayment basis, whether a sample of providers has met all three of the Medicare EHR program’s reporting requirements. To conduct these verifications, CMS has developed a risk-based approach that will be used to identify a sample of about 10 percent of professionals and 5 percent of hospitals for audits. Under its planned audit strategy, the agency may request that providers selected for postpayment audits submit documentation, such as patient rosters, EHR screenshots, and reports generated by the EHR system to support data the providers reported to CMS during attestation.
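The audit sampling rates described above can be illustrated with a simple random draw. This is only a sketch: CMS's actual approach is risk-based rather than purely random, and the provider IDs, population sizes, and helper function below are hypothetical.

```python
# Illustrative sketch of drawing an audit sample at the rates described
# above (about 10 percent of professionals, 5 percent of hospitals).
import random

def draw_sample(provider_ids, rate, seed=0):
    # Sample size: the given share of the population, at least one provider.
    k = max(1, round(len(provider_ids) * rate))
    return random.Random(seed).sample(provider_ids, k)

professionals = [f"P{i:04d}" for i in range(1000)]  # hypothetical IDs
hospitals = [f"H{i:03d}" for i in range(200)]

prof_sample = draw_sample(professionals, 0.10)  # 100 professionals
hosp_sample = draw_sample(hospitals, 0.05)      # 10 hospitals
print(len(prof_sample), len(hosp_sample))
```

A fixed seed is used here only so the sketch is reproducible; a real sampling process would not fix the seed.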
If CMS determines during the audits that a provider has failed to meet any one of the reporting requirements, it plans to take steps to recoup incentive payments. CMS officials said that they decided to wait until 2012 to begin conducting audits of providers that received incentive payments in 2011, the first payment year, to ensure that the agency does not unfairly target a disproportionate number of early participants in the Medicare EHR program. For an overview of CMS’s processes to verify whether providers met the Medicare EHR program’s eligibility and reporting requirements, see table 3. In addition, according to CMS officials, the agency plans to conduct a separate audit, beginning in 2012, to verify that providers had the certified EHR systems they attested to using. For these audits, CMS anticipates sampling roughly 20 percent of professionals and 10 percent of hospitals, identified through random sampling as well as some targeted selection. Three of the states we reviewed—Iowa, Kentucky, and Pennsylvania— have implemented processes to verify whether providers have met all the Medicaid EHR program’s eligibility and reporting requirements and thereby qualify to receive incentive payments. The fourth state, Texas, has implemented processes to verify whether providers met most of the program’s eligibility and reporting requirements and is in the process of developing additional verification processes as part of its postpayment audit strategy. Because CMS allows states flexibility in determining how they verify compliance with these requirements, the states vary in terms of whether they use prepayment or postpayment verification processes. In order to verify whether providers have met the Medicaid EHR program’s eligibility requirements, all four states have primarily implemented prepayment processes, some of which are automated checks built into their databases. 
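One of the automated prepayment checks described above, comparing an attested EHR certification number against ONC's list of certified systems, amounts to a simple set lookup. The sketch below is a hypothetical illustration, not CMS or state code; the certification numbers and function name are invented.

```python
# Minimal sketch of a prepayment certification-number check, assuming the
# ONC-certified product list is available as a set of identifier strings.
# These identifiers are invented for illustration only.
CERTIFIED_EHR_IDS = {"30000001SVABC", "30000002SVXYZ"}

def certification_check(attested_id: str) -> bool:
    """Return True if the attested EHR certification number is on the certified list."""
    return attested_id.strip().upper() in CERTIFIED_EHR_IDS

print(certification_check("30000001svabc"))  # matches after normalization
print(certification_check("99999999SVBAD"))  # not on the certified list
```

A set gives constant-time membership tests, which matters when every registration is checked against the full certified product list.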
Iowa, Kentucky, and Pennsylvania also conduct postpayment audits of samples of providers to verify whether they have met requirements that were not checked on a prepayment basis. These states identify samples of providers to be audited using various risk-based approaches. Texas intends to conduct postpayment audits as well, but has not finalized its audit strategy. Three states—Iowa, Kentucky, and Pennsylvania—use a combination of pre- and postpayment processes to verify whether providers have met the eligibility requirement regarding the Medicaid patient volume threshold, which is determined by dividing a professional’s number of Medicaid patient visits by the professional’s total number of patient visits. For example, they use Medicaid claims data to verify, on a prepayment basis, the professionals’ number of Medicaid patient visits over the reporting period. Then, on a postpayment basis for a sample of professionals, the states use documentation submitted by professionals, such as patient billing reports, to verify their total number of patient visits. Most states, including these three, must rely on provider self-reported information to verify compliance with this requirement, because states typically do not collect data on some of the professionals’ patient visits, such as visits paid for by private insurance. To verify whether providers have met the Medicaid EHR program’s reporting requirement to adopt, implement, or upgrade to a certified EHR system, the four states we reviewed use prepayment processes, postpayment processes, or both. The four states we reviewed have implemented processes, on a prepayment basis, that check the EHR certification numbers reported by providers against a list of EHR systems that have been certified by ONC. Some states also take steps to verify, on a prepayment basis, compliance with this requirement by reviewing documentation, such as EHR invoices. Iowa and Pennsylvania include a similar verification process as part of their postpayment audits.
Texas has not yet determined whether it will conduct additional postpayment verifications. For an overview of the four selected states’ processes to verify whether providers met the Medicaid EHR program’s eligibility and reporting requirements, see table 4. Most providers participating in the first year of the Medicare EHR program through December 8, 2011, exercised program flexibility to exempt themselves from reporting on at least one mandatory meaningful use measure. In addition, many providers also reported at least one clinical quality measure based on few patients. During the first year of the Medicare EHR program through December 8, 2011, most participating providers exercised flexibility allowed under the program to claim an exemption from reporting at least one mandatory meaningful use measure. Specifically, 72.4 percent of professionals and 79.6 percent of hospitals claimed such an exemption. Providers may exempt themselves from reporting certain mandatory meaningful use measures—up to six measures for professionals and up to three measures for hospitals—if they report to CMS that those measures are not relevant to their patient populations or clinical practices. We found that a greater percentage of some professionals reported at least one exemption than other professionals. Specifically, we found that a greater percentage of chiropractors, dentists, optometrists, specialists, and other eligible physicians reported at least one exemption compared to generalists; and a greater percentage of professionals with 2010 Medicare Part B charges at or below the 75th percentile reported at least one exemption compared to those with charges above the 75th percentile. We also found that among specialists, the largest specialty group of participating professionals, over three-quarters claimed at least one exemption. (See table 5.) We found that a greater percentage of some hospitals reported at least one exemption than other hospitals. 
Specifically, we found that a greater percentage of critical access hospitals reported at least one exemption compared to acute care hospitals, and a greater percentage of hospitals with fewer than 200 beds reported at least one exemption compared to hospitals with 200 beds or more. We also found that among acute care hospitals, the largest type of participating hospital, slightly over three-quarters claimed at least one exemption. (See table 6.) Of the mandatory meaningful use measures for which providers may claim exemptions, we found that the majority of providers claimed an exemption from the mandatory measure “provide patients with an electronic copy of their health information.” Providers may claim an exemption from this measure if they receive no requests from patients for an electronic copy of their health information. This measure was the least frequently reported mandatory measure for both professionals (32.7 percent) and hospitals (30.3 percent). In contrast, the most frequently reported mandatory measure for which exemptions were permitted was “record smoking status for patients 13 years old or older” for both professionals (99.4 percent) and hospitals (99.5 percent). Our finding that a majority of providers claimed exemptions from reporting at least one mandatory meaningful use measure is consistent with comments made by stakeholders in response to CMS’s Rule on the Electronic Health Record Incentive Program. Specifically, those stakeholders stated that certain providers, including specialists and small hospitals, would not be able to report all mandatory meaningful use measures, since some measures would be outside the scope of their practice. While CMS currently allows providers the flexibility to claim exemptions from reporting certain mandatory meaningful use measures, in future years of the EHR programs, CMS stated that it may not allow providers the same flexibility.
It is unclear what effect, if any, such a change would have on participation levels in future program years. Our analysis of clinical quality measures found that many providers reported at least one such measure based on few patients—fewer than seven—during the first year of the Medicare EHR program through December 8, 2011. Providers were required to report these measures to satisfy one of the mandatory meaningful use measures—“report clinical quality measures to CMS.” Specifically, 41.3 percent of professionals and 86.9 percent of hospitals reported at least one clinical quality measure based on few patients. Clinical quality measures calculated using few patients may be statistically unreliable, which, according to the American Hospital Association and others, could detract from providers’ abilities to use those measures as meaningful tools for quality improvement. We found that a greater percentage of some professionals reported measures based on few patients than other professionals. Specifically, we found that a greater percentage of chiropractors, dentists, optometrists, specialists, podiatrists, and other eligible physicians reported at least one clinical quality measure that was calculated using few patients compared to generalists; a greater percentage of professionals practicing in urban locations reported at least one clinical quality measure that was calculated using few patients compared to those practicing in rural locations; and a greater percentage of professionals with 2010 Medicare Part B charges at or below the 50th percentile or above the 75th percentile reported at least one clinical quality measure that was calculated using few patients compared to those with charges above the 50th percentile, but at or below the 75th percentile. We also found that about half of specialists, the largest specialty group of participating professionals, reported at least one clinical quality measure based on few patients. (See table 7.)
We found that a greater percentage of some hospitals reported measures based on few patients than other hospitals. Specifically, we found that a greater percentage of critical access hospitals reported at least one clinical quality measure that was calculated using few patients compared to acute care hospitals, a greater percentage of government-owned and proprietary hospitals reported at least one clinical quality measure that was calculated using few patients compared to nonprofit hospitals, a greater percentage of hospitals with fewer than 200 beds reported at least one clinical quality measure that was calculated using few patients compared to hospitals with 200 beds or more, and a greater percentage of hospitals located in rural areas reported at least one clinical quality measure that was calculated using few patients compared to hospitals located in urban areas. We also found that among acute care hospitals, the largest type of participating hospital, more than 80 percent reported at least one clinical quality measure based on few patients. (See table 8.) The American Medical Association and others stated that some providers may experience challenges selecting clinical quality measures to report. CMS has acknowledged that the availability of clinical quality measures that are relevant to providers’ patient populations and clinical practices is important to inform providers’ efforts to improve quality of care and to measure potential impacts of the EHR programs. In an effort to increase the availability of such measures, officials from the Health Information Technology Policy Committee and the Health Information Technology Standards Committee, which advise ONC on the development of meaningful use reporting requirements, noted that additional clinical quality measures may be added to the EHR programs over time. This action would help to ensure that there are a sufficient number of measures that providers can report on.
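The "few patients" criterion used in this analysis, a clinical quality measure calculated from fewer than seven patients, could be flagged along these lines. The record layout, field names, and counts below are hypothetical, invented only to illustrate the tally.

```python
# Hypothetical sketch: flag providers who reported at least one clinical
# quality measure whose denominator (patients meeting inclusion criteria)
# was fewer than seven. Field names and values are invented for illustration.
FEW_PATIENT_CUTOFF = 7

providers = [
    {"id": "A", "measure_denominators": [120, 45, 3]},
    {"id": "B", "measure_denominators": [88, 200]},
    {"id": "C", "measure_denominators": [5, 6, 2]},
]

def has_few_patient_measure(denominators):
    """True if any reported measure was calculated from fewer than 7 patients."""
    return any(d < FEW_PATIENT_CUTOFF for d in denominators)

flagged = [p["id"] for p in providers
           if has_few_patient_measure(p["measure_denominators"])]
print(flagged)  # providers A and C each reported a measure based on few patients
```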
Providers identified challenges to participating in the first year of the Medicare and Medicaid EHR programs and strategies used to help providers participate. Numerous professionals and hospitals have signed agreements with Regional Extension Centers for technical assistance, which includes services to facilitate providers’ participation in the Medicare and Medicaid EHR programs. Acquiring and implementing a certified EHR system are among the first challenges providers face as they take steps to qualify for a Medicare or Medicaid EHR incentive payment. Challenges to acquiring EHR systems described by providers and officials from the American Medical Association and American Hospital Association we interviewed included the following: the cost of purchasing or upgrading to a certified EHR system; obtaining sufficient broadband access, which can affect providers’ abilities to exchange health information; and obtaining buy-in from professionals. Challenges to implementing EHR systems described by providers we interviewed included needing to train staff on how to use the EHR systems and getting professionals to use the systems. Officials we interviewed from hospitals described strategies providers used to overcome some of the challenges related to acquiring and implementing EHR systems. For example, one hospital official stated that, in order to implement a certified EHR system, hospital officials designated “super users” as a strategy to help their professionals transition to the EHR system. For instance, one hospital appointed a nurse as a “super user” who assisted others in learning how to use the EHR system. Additionally, the chief information officer of another hospital stated her organization obtained buy-in from professionals and encouraged them to use the system by presenting the EHR system as a way to improve patient safety and quality of care rather than as only an information technology project. 
Once a certified EHR system is acquired and implemented, ensuring the system is effectively used to meet the Medicare meaningful use reporting requirements can also be challenging for some providers. Specifically, providers and others we interviewed identified challenges related to capturing data needed to demonstrate meaningful use, such as lacking a workflow that allowed the needed data to be collected electronically at the right time by the right staff member. Providers we interviewed noted several strategies they used to capture data in ways that helped them demonstrate meaningful use, including the following: understanding which fields of the EHR system must be completed and collecting additional data, as necessary; revising forms, retraining staff so they knew how to complete the forms, and conducting quality assurance training to ensure that the appropriate data were being captured consistently; and analyzing workflow, including understanding which staff members are to enter information into the EHR system and when data entry must occur. One provider we interviewed elaborated on the strategy she used to change the workflow in her practice so that she could satisfy the meaningful use measure—“provide patients with clinical summaries for each office visit.” She decided that to meet this meaningful use measure she would provide the clinical summary to her patients before they left her office. To do so, she changed her workflow by spending an additional 45 minutes each morning preparing parts of her patient notes in advance of the patient visit and by scheduling additional time in between patient visits in order to complete the clinical summaries. As of December 2011, about 115,000 professionals and about 1,000 hospitals had signed agreements to receive technical assistance from one of the 62 Regional Extension Centers. This assistance includes services to facilitate providers’ participation in the Medicare and Medicaid EHR programs.
Of these professionals, 54,241 had implemented an EHR system, of which 4,072 had demonstrated meaningful use. The professionals assisted by the Regional Extension Center program work in targeted settings, such as individual primary care practices or rural health clinics. See figure 4, which illustrates the practice settings of professionals who have agreements with the Regional Extension Centers. In addition, through December 19, 2011, 1,001 rural hospitals and critical access hospitals had signed agreements with a Regional Extension Center for technical assistance. Of these hospitals, 243 had implemented an EHR system and of those, 41 had demonstrated meaningful use. For more information on each Regional Extension Center’s progress in assisting providers to demonstrate meaningful use, see appendix IV. Regional Extension Centers offer various services to providers with whom they have agreements to facilitate the providers’ participation in the EHR programs by helping them meaningfully use EHR systems. Providers trying to demonstrate meaningful use generally follow a four-step process, throughout which Regional Extension Centers may provide assistance to providers. These steps are: (1) prepare to participate in the CMS EHR programs, (2) select a certified EHR system, (3) implement the selected EHR system, and (4) demonstrate meaningful use. Examples of the services offered by the Regional Extension Centers during each of these steps are described in figure 5. During the first step, Regional Extension Center officials can help providers prepare to participate in the EHR programs by explaining those programs’ requirements and helping providers identify how their workflow and processes may change with the introduction of an EHR system. For example, officials from one Regional Extension Center told us they helped providers determine whether they would qualify for the Medicare or Medicaid EHR programs.
During the second step, the Regional Extension Centers can help providers select a certified EHR system. For example, officials from one Regional Extension Center told us they shared a vendor evaluation tool with providers, which helped providers evaluate factors such as EHR systems’ capabilities and cost. During the third step, Regional Extension Center officials can help providers implement an EHR system by, for example, suggesting best practices for securing and protecting the privacy of personal health information stored and processed by the EHR system. During the fourth step, the Regional Extension Centers provide services that help providers to meet the EHR programs’ meaningful use criteria. For example, the Regional Extension Centers may help their clients identify approaches for satisfying certain program reporting requirements by helping providers capture and exchange health data. The aim of the Medicare and Medicaid EHR programs is not just to increase EHR adoption, but to support the meaningful use of EHR technology to improve quality and reduce the cost of care. As a result, the programs have the potential to affect the millions of people who receive care through Medicare or Medicaid. Since the programs began in 2011, CMS has issued $3.1 billion in incentive payments to providers. As a new program with particular complexities—such as the number and types of measures providers must report—there are risks to program integrity, and CMS could take steps, beyond those already taken, to assess and mitigate the risk of improper payments and to improve program efficiency. It is encouraging that CMS has awarded contracts to evaluate states’ implementation of the Medicaid EHR program, including their efforts to prevent improper payments. However, CMS, while planning to assess its audit strategy for the Medicare EHR program, has not yet specified time frames for implementing this assessment. 
As CMS moves forward, it is important that the agency assess whether verifying additional reporting requirements on a prepayment basis could improve the integrity of the Medicare EHR program. Conducting prepayment verifications may be more effective in minimizing improper payments because CMS’s planned postpayment audits will be conducted for only a small sample of providers, whereas CMS’s prepayment verification processes are conducted for all providers that apply for incentive payments. In addition, prepayment verifications help to avoid the difficulties associated with the “pay and chase” aspects of recovering improper payments. We identified two opportunities for CMS to improve the efficiencies of the Medicare and Medicaid EHR programs. First, CMS identified and took action to improve the efficiency of audits under the Medicaid EHR program but did not take a similar action in the Medicare EHR program. Specifically, although CMS suggested that states collect additional information from providers at the time of attestation to improve the efficiency of the postpayment audit process, CMS has not done so for the Medicare EHR program, but acknowledged that this action would be beneficial. Doing so would improve the efficiency of the postpayment audit process for the Medicare EHR program. Second, CMS could offer states the option of having CMS collect Medicaid providers’ meaningful use attestations on their behalf rather than requiring states to collect this information on their own. CMS, by offering to collect this information from all Medicaid providers on behalf of states, as the agency currently does for some Medicaid providers, could alleviate the need for many states to create and maintain similar web-based attestation tools and could potentially yield cost savings at both the federal and state levels. 
In order to improve the efficiency and effectiveness of processes to verify whether providers meet program requirements for the Medicare and Medicaid EHR programs, we recommend that the Administrator of CMS take the following four actions: Establish time frames for expeditiously implementing an evaluation of the effectiveness of the agency’s audit strategy for the Medicare EHR program. Evaluate the extent to which the agency should conduct more verifications on a prepayment basis when determining whether providers meet Medicare EHR program’s reporting requirements. Collect the additional information from Medicare providers during attestation that CMS suggested states collect from Medicaid providers during attestation. Offer states the option of having CMS collect meaningful use attestations from Medicaid providers on their behalf. We provided a draft of this report to HHS for comment. In its written comments (reproduced in app. V), HHS concurred with three of our recommendations to CMS. Specifically, we are encouraged that HHS said that to help implement these recommendations, CMS will evaluate the effectiveness of the audit strategy for the Medicare EHR program on an ongoing basis and document results quarterly, beginning approximately 3 months after the audits begin. In addition, CMS will evaluate the feasibility of conducting additional prepayment verifications under the Medicare EHR program. Further, CMS will explore collecting additional information from Medicare providers during attestation that CMS has suggested that states collect under the Medicaid EHR program. HHS disagreed with our fourth recommendation that CMS offer to collect meaningful use attestations data from Medicaid providers on behalf of the states, citing two reasons. First, HHS does not believe there are significant barriers to states implementing attestation tools. 
It stated that the 43 states participating in the Medicaid EHR program have established a means for providers to attest to eligibility requirements and the adoption, implementation, or upgrade of their EHR. In HHS’s view, incorporating the meaningful use attestation tools into the states’ existing systems does not pose a barrier in part because HHS says CMS has taken steps to help the states design their attestation tools and has approved designs developed by vendors that the states can use. Second, HHS does not believe that implementing this recommendation would create a streamlined attestation process for Medicaid providers. It stated that Medicaid providers would have to provide certain information to CMS and other information to the states, requiring providers to submit data to multiple sites. HHS believes this change could result in confusion and payment delays. In addition, HHS believes a more compelling challenge is designing a way for providers to report clinical quality measures electronically from their EHRs to the states and CMS. HHS stated that CMS established pilots that are intended to help providers leverage existing infrastructure to electronically exchange data on clinical quality measures directly from their EHRs to CMS. Despite HHS’s objections, we continue to believe that our recommendation should be implemented. In response to HHS’s first reason, we believe that while some states have created tools to collect Medicaid attestation data, over the long run implementing our recommendation could improve the efficiency of the Medicaid EHR program and thereby minimize additional administrative costs, especially in the program’s future years. Currently, both CMS and states create and maintain meaningful use attestation tools.
The Medicaid EHR program requirements in the second year of the program and through the rest of the decade will become increasingly similar to the requirements for the Medicare EHR program, as will the information collected from providers by the states and CMS. Having both CMS and states design and maintain systems to collect much of the same information is inefficient. Further, it is expected that in future years, to demonstrate meaningful use, Medicare and Medicaid providers will be required to report additional information, and both CMS and the states will need to expend resources to update the attestation tools used to collect this information, a point we clarified in our report. By collecting meaningful use attestations on behalf of some states and U.S. insular areas, CMS could help ensure effective use of the $300 million that Congress provided for administrative costs of the Medicaid EHR program from 2009 through 2016. In response to HHS’s second reason, the report notes that under the current process for registering for the Medicaid EHR program, providers must already submit information on eligibility to both CMS and the states. Therefore, providers are familiar with submitting information to multiple sites. Furthermore, CMS currently collects meaningful use attestations for some Medicaid providers and has not reported that the transfer of this information to the states has delayed payments. We agree with CMS that designing a means to electronically transmit meaningful use information, including clinical quality measures, directly from providers’ EHRs to CMS and the states may present challenges. It is encouraging that the agency is attentive to electronic data exchange issues and is working with providers in the Medicare program to identify ways to leverage existing infrastructure to accomplish this goal.
However, it is important for CMS to consider all approaches, including collecting meaningful use data on behalf of states, to ensure the Medicare and Medicaid EHR programs are administered as efficiently as possible. As part of HHS’s written response, the department also provided other general comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services, the Administrator of CMS, the National Coordinator for Health Information Technology, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at kohnl@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix V. This appendix provides additional details regarding our analysis of (1) measures providers reported to the Centers for Medicare and Medicaid Services (CMS) to demonstrate meaningful use and (2) Regional Extension Center data. Analysis of measures providers reported to CMS to demonstrate meaningful use. We conducted several analyses of data from CMS’s National Level Repository that providers reported to CMS to demonstrate meaningful use under the Medicare electronic health records (EHR) program in 2011. We analyzed data submitted by providers from April 18, 2011, the date CMS began collecting these data, through December 8, 2011. We included all hospitals and professionals that, according to data from CMS’s National Level Repository, had successfully demonstrated meaningful use even though some of those providers had not received Medicare EHR program incentive payments from CMS as of December 8, 2011. As a result, the data we analyzed for hospitals included full-year information because they were required to report these data by November 30, 2011, to receive a Medicare EHR incentive payment for 2011. In contrast, the data we analyzed for professionals did not include full-year information because CMS permitted them to submit these data through February 29, 2012, to receive a Medicare EHR incentive payment for 2011. Specifically, we analyzed meaningful use and clinical quality measures providers reported to CMS and which we obtained from CMS’s National Level Repository to identify the following: Frequency of measures reported. We identified the frequency with which providers reported the mandatory meaningful use measures for which providers may claim exemptions. Six measures allow professionals to claim exemptions and three measures allow hospitals to claim exemptions if, according to the providers, those measures are not relevant to their patient populations or clinical practices. Extent to which providers claimed allowable exemptions from reporting certain mandatory measures. We determined the percentage of providers that claimed an exemption from reporting at least one mandatory meaningful use measure. As part of this analysis, we examined whether a greater percentage of certain types of providers reported at least one exemption compared to other types of providers. Extent to which providers had patients who could be included in the calculation of clinical quality measures. We examined the extent to which providers had few patients who could be included in the calculation of at least one clinical quality measure. Measures that capture a small number of patients may be unreliable measures of quality because relatively small changes in the number of patients who experienced the care processes or outcomes targeted by the measure can generate large shifts in the calculated percentage for the measure. CMS has recognized in other programs that including a small number of patients in the calculation of a measure is a reliability issue.
For example, on the agency’s Hospital Compare website, which publicly reports clinical quality measures by hospital, CMS indicates whether the number of patients included in a particular measure calculation was fewer than 25 and thus too small to reliably tell how well the hospital was performing. For our analysis, we identified clinical quality measures as unreliable if fewer than seven patients met inclusion criteria for the calculation. The reporting period for the first year a provider demonstrates meaningful use is any 90 consecutive days during the year; for subsequent years, the reporting period is the full year. Assuming a steady patient population, providers that had fewer than seven patients meet inclusion criteria for calculating clinical quality measures during the 90-day reporting period would have fewer than 25 patients meet these criteria during the full-year reporting period. As part of this analysis, we examined whether a greater percentage of certain types of providers reported at least one clinical quality measure based on few patients compared to other types of providers. We also analyzed other data sources to determine whether the reporting of meaningful use and clinical quality measures varied based on providers’ characteristics, such as whether critical access hospitals were more likely than acute care hospitals to claim an exemption from reporting at least one mandatory meaningful use measure. We used chi-square likelihood tests to determine whether differences in provider characteristics were statistically significant. In particular, we analyzed data from the following sources: CMS’s Online Survey, Certification, and Reporting System (downloaded May 2011); CMS’s National Plan and Provider Enumeration System Downloadable File (downloaded October 2011); the Health Resources and Services Administration’s 2009-2010 Area Resource File (released August 2010); and CMS’s 2010 Medicare Part B claims (downloaded February 2012).
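The chi-square likelihood (likelihood-ratio) test mentioned above can be sketched for a single 2x2 comparison, such as one provider group versus another by exemption status. The counts below are invented for illustration; the actual counts appear in the report's tables.

```python
# Stdlib-only sketch of a likelihood-ratio chi-square (G) test on a 2x2 table.
# Rows: two provider groups; columns: claimed an exemption or not.
# The counts are hypothetical, not GAO's data.
import math

def g_statistic(table):
    """Likelihood-ratio chi-square statistic for a 2x2 table of observed counts."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    g = 0.0
    for i in range(2):
        for j in range(2):
            observed = table[i][j]
            expected = row[i] * col[j] / n  # count expected under independence
            if observed > 0:
                g += observed * math.log(observed / expected)
    return 2 * g

#          exemption  no exemption
table = [[170, 30],   # hypothetical group A (e.g., critical access hospitals)
         [290, 110]]  # hypothetical group B (e.g., acute care hospitals)
g = g_statistic(table)
# With 1 degree of freedom, values above 3.841 are significant at the 0.05 level.
print(g > 3.841)
```

For larger tables the degrees of freedom are (rows - 1) x (columns - 1), and the critical value changes accordingly.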
Using these data, we examined the following provider characteristics: Hospital type. We obtained data on hospital type—acute care or critical access hospital—from CMS’s Online Survey, Certification, and Reporting System. Hospital ownership type. We obtained data on hospital ownership type from CMS’s Online Survey, Certification, and Reporting System. We created the ownership type of proprietary by selecting proprietary; the ownership type of nonprofit by combining voluntary nonprofit – church, voluntary nonprofit – private, and voluntary nonprofit – other; and the ownership type of government-owned by combining the four government designations (federal, state, local, and hospital district or authority). Hospital number of beds. We obtained data on the number of beds in hospitals, which includes beds that are certified for payment for Medicare and/or Medicaid, from CMS’s Online Survey, Certification, and Reporting System. Using those data, we created four categories for the number of beds: (a) 1 to 49 beds, (b) 50 to 99 beds, (c) 100 to 199 beds, and (d) 200 or more beds. Professional specialty. We obtained data on professionals’ primary specialty from CMS’s National Plan and Provider Enumeration System Downloadable File. Then, with the assistance of a crosswalk that we obtained from CMS that aggregates specialty taxonomy codes into a smaller number of specialties, we created the following seven professional specialty categories: (a) chiropractor, (b) dentist, (c) generalist, (d) optometrist, (e) podiatrist, (f) specialist, and (g) other eligible physician. Of those professionals who demonstrated meaningful use in the Medicare EHR program in 2011, we were unable to identify a primary specialty for 164 professionals (less than 0.7 percent) using the CMS downloadable file. 
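The Chi-square likelihood tests mentioned above are likelihood-ratio (G) tests on contingency tables of provider characteristics. A minimal pure-Python sketch with hypothetical counts (the counts and group labels are illustrative, not from the report; 3.841 is the standard chi-square critical value for one degree of freedom at the 0.05 level):

```python
import math

def g_test_2x2(table):
    """Likelihood-ratio (G) test statistic for a 2x2 contingency table;
    compare against the chi-square distribution with 1 degree of freedom."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    total = sum(row_totals)
    g = 0.0
    for i in range(2):
        for j in range(2):
            observed = table[i][j]
            # Expected count if the row and column variables were independent.
            expected = row_totals[i] * col_totals[j] / total
            g += observed * math.log(observed / expected)
    return 2.0 * g

# Hypothetical counts: rows are two provider types (e.g., critical access
# vs. acute care hospitals); columns are "claimed at least one exemption"
# vs. "claimed none".
g = g_test_2x2([[30, 70], [10, 90]])
significant = g > 3.841  # 0.05 critical value, 1 degree of freedom
```

Larger tables (for example, the seven specialty categories) extend the same formula over more rows, with degrees of freedom equal to (rows − 1) × (columns − 1).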
The 900 professionals that were classified as “other eligible physicians” (about 3.8 percent) include physicians for whom the information on professional specialty needed to classify them into one of the other professional specialty categories was not available in CMS’s National Plan and Provider Enumeration System; however, we determined that those professionals had specialty types that were eligible to receive incentive payments using other CMS databases. Professionals’ Medicare Part B charges. We obtained all 2010 Medicare Part B charges from CMS. For each professional (identified by National Provider Identifier), we summed the amount of Medicare Part B charges over the year. Subsequently, we created four categories by aggregating total charges by professional: (a) less than or equal to the 25th percentile, (b) greater than the 25th percentile and less than or equal to the 50th percentile, (c) greater than the 50th percentile and less than or equal to the 75th percentile, and (d) greater than the 75th percentile. Of those professionals who demonstrated meaningful use in the Medicare EHR program in 2011, information on the amount of Part B charges was missing for 359 professionals (about 1.5 percent). Provider location. We obtained zip codes for facility or practice locations for hospitals and professionals from CMS’s Online Survey, Certification, and Reporting System and CMS’s National Plan and Provider Enumeration System, respectively. Then, with the assistance of a zip code to Federal Information Processing Standard code crosswalk file we obtained from CMS, we used the Health Resources and Services Administration’s Area Resource File to identify whether providers were located in a metropolitan area—an area that has at least one urbanized area of 50,000 people. We then categorized providers located in metropolitan areas as being located in urban areas and providers that were not as being located in rural areas.
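The charge-based grouping described above (sum each professional's Part B charges for the year, then cut the totals at the 25th, 50th, and 75th percentiles) can be sketched as follows. The claim records here are hypothetical stand-ins for the 2010 Part B data:

```python
from bisect import bisect_right
from collections import defaultdict
from statistics import quantiles

# Hypothetical (NPI, charge) claim records standing in for the 2010
# Medicare Part B claims data.
claims = [("A", 100.0), ("A", 50.0), ("B", 400.0),
          ("C", 20.0), ("D", 900.0)]

# Sum charges over the year for each professional.
totals = defaultdict(float)
for npi, charge in claims:
    totals[npi] += charge

# Cut points at the 25th, 50th, and 75th percentiles of the totals.
cuts = quantiles(totals.values(), n=4)

# Category 1 = at or below the 25th percentile, ..., category 4 = above
# the 75th percentile.
category = {npi: bisect_right(cuts, total) + 1
            for npi, total in totals.items()}
```

The percentile cut points are computed over professionals' totals, not over individual claims, which is why charges are aggregated by NPI first.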
We were unable to match 20 providers’ zip codes to the Area Resource File (which is less than 0.1 percent of participating professionals). To ensure the reliability of the data we analyzed, we interviewed officials from CMS, reviewed relevant documentation, and conducted electronic testing to identify missing data and obvious errors. On the basis of these activities, we determined that the data we analyzed were sufficiently reliable for our analysis. Analysis of Regional Extension Center data. We analyzed data we obtained from the Office of the National Coordinator for Health Information Technology (ONC) in December 2011. The data, which the agency collects from Regional Extension Centers, contain information about the providers to whom the centers provided technical assistance. We determined the number of providers assisted by the Regional Extension Center program as well as the percentage of those providers overall and for each center that had (1) signed an agreement with a center, (2) implemented an EHR, and (3) demonstrated meaningful use. In addition, we determined the types of professionals who had signed an agreement for technical assistance with a center. We made some adjustments to the data we obtained for professionals based on information obtained from officials at ONC. Specifically, we limited our analysis to professionals identified by a Regional Extension Center as being priority primary care providers, which are types of professionals for which ONC reimburses centers for providing technical assistance. This excluded 7,019 professionals (about 5.7 percent) from our analysis. We also excluded from our analysis professionals whose data we determined were unreliable based on information obtained from ONC officials. Specifically, we excluded any professionals who were missing or had anomalous entries for both an individual national provider identifier and an organizational national provider identifier.
This excluded 355 professionals (about 0.3 percent) from the analysis. We also excluded another 2 professionals (less than 0.1 percent) who were identified in the data as being a type of professional that was not considered to be a priority primary care provider even though the professional was designated as such in the ONC data. We also made some adjustments to the data we obtained for hospitals based on information obtained from officials at ONC. Specifically, we limited our analysis to hospitals identified by a Regional Extension Center as being a type of hospital targeted for outreach—that is, a critical access hospital or rural hospital. This excluded four organizations (about 0.4 percent) from the analysis. To ensure the reliability of the data we analyzed, we interviewed officials from ONC, reviewed relevant documentation, and conducted electronic testing to identify obvious errors. On the basis of these activities, we determined that the data we analyzed were sufficiently reliable for our analysis.

Appendix II: How Medicare and Medicaid EHR Program Incentive Payments Are Calculated

Professionals, Medicare EHR program: The amount of incentive payment in any given year is equal to 75 percent of the professional’s Medicare Part B charges for the year, subject to an annual limit which varies by year. The amount of the incentive payment in the first year cannot exceed $18,000 and the total over a 5-year period cannot exceed $44,000.

Professionals, Medicaid EHR program: The amount of incentive payment that a professional receives in any given year is, in general, a fixed amount: $21,250 in the first year and $8,500 in up to 5 subsequent years, and the total amount over a 6-year period cannot exceed $63,750. Professionals must receive an incentive payment by calendar year 2016 in order to receive incentive payments in subsequent years.
Hospitals, Medicare EHR program: For acute care hospitals, the amount of incentive payment in any given year is generally based on the hospital’s annual discharges and Medicare share (i.e., percentage of inpatient days at the hospital in a given year attributable to Medicare patients). Incentive payments are awarded over periods of up to 4 years. To earn the maximum amount, acute care hospitals must first demonstrate meaningful use in fiscal year 2011, 2012, or 2013. For critical access hospitals, the incentive payment amount is generally based on the hospital’s Medicare share and the reasonable costs incurred for the purchase of depreciable assets necessary to administer certified EHR technology, such as computers and associated hardware and software. Critical access hospitals can earn payments for up to 4 years. To earn the maximum amount, critical access hospitals must first demonstrate meaningful use in fiscal year 2011 or 2012.

Hospitals, Medicaid EHR program: The amount of incentive payment that a hospital receives in any given year is generally based on the hospital’s annual discharges and Medicaid share. The number of years over which incentive payments are awarded (between 3 and 6 years) is at the discretion of the state.

CMS will increase the incentive payments that would otherwise apply by 10 percent each year for Medicare professionals that predominantly furnish services in geographic areas designated as health professional shortage areas, such as areas that have a shortage of primary medical care. To demonstrate meaningful use in the first year of the Medicare EHR program, professionals must report on a total of 20, and hospitals must report on a total of 19, meaningful use measures. For certain meaningful use measures, providers may report to CMS that the measures are not relevant to them; this is referred to as claiming an exemption.
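For Medicare professionals, the payment rule above reduces to a one-line calculation. This sketch uses only the first-year figures stated in the text; caps for later years differ and are not modeled here:

```python
def medicare_professional_incentive(part_b_charges: float,
                                    annual_limit: float = 18_000.0) -> float:
    """First-year Medicare EHR incentive for a professional: 75 percent of
    Medicare Part B charges for the year, capped at the annual limit."""
    return min(0.75 * part_b_charges, annual_limit)

# $20,000 in Part B charges earns $15,000 (75 percent, under the cap);
# $30,000 in charges hits the $18,000 first-year cap.
low = medicare_professional_incentive(20_000)
high = medicare_professional_incentive(30_000)
```

This structure also makes clear why professionals with low Part B charges (the lowest quartile in the earlier analysis) receive smaller incentive payments: the 75 percent factor, not the cap, binds for them.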
Furthermore, to satisfy the requirement for one of the meaningful use measures, “report clinical quality measures to CMS,” providers must report on clinical quality measures identified by CMS; the number of meaningful use measures and clinical quality measures providers must report for the first year of the Medicare EHR program is summarized below. Table 10 describes the meaningful use measures, and table 11 and table 12 describe the clinical quality measures for professionals and hospitals, respectively. Regional Extension Centers report to the Office of the National Coordinator for Health Information Technology (ONC) data that describes the progress they have made in providing technical assistance to professionals or hospitals to help those providers meaningfully use EHRs. The data the Regional Extension Centers report to ONC describe the following three milestones in the technical assistance provided: The professional or hospital signs an agreement with a Regional Extension Center to receive technical assistance. The professional or hospital implemented an EHR which has electronic prescribing and measure reporting functionality. The professional or hospital demonstrated meaningful use, consistent with the Medicare and Medicaid EHR programs’ requirements. When the program was established, ONC also required each of the 62 Regional Extension Centers to set targeted numbers of professionals and hospitals each center would assist—that is, the center’s goal for the number of providers it would help meaningfully use EHRs. ONC uses the data the Regional Extension Centers report for each of the three milestones in the technical assistance process as well as the goals each center established to evaluate the effectiveness of individual Regional Extension Centers and of the program as a whole. Tables 13 and 14 list the goals and number of professionals and hospitals, respectively, assisted towards meaningful use by each center. In addition to the contact named above, E.
Anne Laffoon, Assistant Director; Julianne Flowers; Krister Friday; Melanie Krause; Shannon Legeer; Monica Perez-Nelson; Amanda Pusey; and Stephen Ulrich made key contributions to this report. | The Health Information Technology for Economic and Clinical Health (HITECH) Act established the Medicare and Medicaid electronic health records (EHR) programs. CMS and the states administer these programs, which began in 2011, to promote the meaningful use of EHR technology through incentive payments paid to certain providers, that is, hospitals and health care professionals. Spending for the programs is estimated to total $30 billion from 2011 through 2019. Consistent with the HITECH Act, GAO (1) examined efforts by CMS and the states to verify whether providers qualify to receive EHR incentive payments and (2) examined information reported to CMS by providers to demonstrate meaningful use in the first year of the Medicare EHR program. GAO reviewed applicable statutes, regulations, and guidance; interviewed officials from CMS; interviewed officials from four states, which were judgmentally selected to obtain variation among multiple factors; and analyzed data from CMS and other sources. The Centers for Medicare and Medicaid Services (CMS), an agency within the Department of Health and Human Services (HHS), and the four states GAO reviewed are implementing processes to verify whether providers met the Medicare and Medicaid EHR programs’ requirements and, therefore, qualified to receive incentive payments in the first year of the EHR programs. To receive such payments, providers must meet both (1) eligibility requirements that specify the types of providers eligible to participate in the programs and (2) reporting requirements that specify the information providers must report to CMS or the states, including measures that demonstrate meaningful use of an EHR system and measures of clinical quality.
For the Medicare EHR program, CMS has implemented prepayment processes to verify whether providers have met all of the eligibility requirements and one of the reporting requirements. Beginning in 2012, the agency also has plans to implement a risk-based audit strategy to verify on a postpayment basis that a sample of providers met the remaining reporting requirements. For the Medicaid EHR program, the four states GAO reviewed have implemented primarily prepayment processes to verify whether providers met all eligibility requirements. To verify the reporting requirement, all four states implemented prepayment processes, postpayment processes, or both. CMS officials stated that the agency intends to evaluate how effectively its Medicare EHR program audit strategy reduces the risk of improper EHR incentive payments, though the agency has not yet established corresponding timelines for doing this work. Such an evaluation could help CMS determine whether it should revise its verification processes by, for example, implementing additional prepayment processes, which GAO has shown may reduce the risk of improper payments. In addition, CMS has opportunities to improve the efficiency of verification processes by, for example, collecting certain data on states’ behalf. CMS allows providers to exempt themselves from reporting certain measures if providers report that the measures are not relevant to their patients or practices. Measures calculated based on few patients may be statistically unreliable, which limits their usefulness as tools for quality improvement. CMS and others acknowledged that the availability of measures that are relevant to providers’ patients and practices and are statistically reliable is important to provide useful information to providers.
Among participants in the first year of the Medicare EHR program, the majority of providers chose to exempt themselves from reporting on at least one meaningful use measure and many providers reported at least one clinical quality measure based on few (less than seven) patients. GAO is making four recommendations to CMS in order to improve processes to verify whether providers met program requirements for the Medicare and Medicaid EHR programs, including opportunities for efficiencies. HHS agreed with three of GAO’s recommendations, but disagreed with the fourth recommendation that CMS offer to collect certain information on states’ behalf. GAO continues to believe that this action is an important step to yield potential cost savings.
OMB Circular A-126 sets forth executive branch policy with respect to the management and use of government aviation assets. The purpose of the circular is to minimize cost and improve the management of government aircraft. The circular provides that government aircraft must be operated only for official purposes. Under the circular, there are three kinds of official travel: Travel to meet mission requirements: Mission requirements are defined as “activities that constitute the discharge of an agency’s official responsibilities,” and the circular provides examples of these kinds of activities. For purposes of the circular, mission requirements do not include official travel to give speeches, attend conferences or meetings, or make routine site visits. Required use travel: Agencies are permitted to use government aircraft for nonmission travel where it is required use travel—which is travel that requires the use of government aircraft to meet bona fide communications needs, security requirements, or exceptional scheduling requirements of an executive agency. Other travel for the conduct of agency business: Government aircraft are also available for other travel for the conduct of agency business when no commercial airline or aircraft is reasonably available to fulfill the agency requirement or the actual cost of using a government aircraft is not more than the cost of using commercial airline or aircraft service. In addition to other requirements for federal agencies, the circular directs agencies that use government aircraft to report semiannually to GSA each use of such aircraft for nonmission travel by senior federal officials, members of the families of such officials, and any nonfederal travelers, with certain exceptions. 
The circular provides that the format of the report is to be specified by GSA, but must list all travel during the preceding 6-month period and include the following information: the name of each such traveler, the official purpose of the trip, and the destination(s), among other things. The circular provides for one exception to these reporting requirements: Agencies using the aircraft are not required to report classified trips to GSA, but must maintain information on those trips for a period of 2 years and have the data available for review as authorized. In addition, in a memorandum to the heads of executive departments and agencies and employees of the Executive Office of the President, the President specifically directed that “all use of Government aircraft by senior executive branch officials shall be documented and such documentation shall be disclosed to the public upon request unless classified.” The OMB bulletin implementing this memorandum explains that “it is imperative that we not spend hard-earned tax dollars in ways that may appear to be improper.” GSA has issued regulations applicable to federal aviation activities. The FTR implements statutory requirements and executive branch policies for travel by federal civilian employees and others authorized to travel at government expense. The FMR generally pertains to the management of federal property and includes a specific part on management of government aircraft. As shown in table 1, the FTR specifically exempts from reporting trips that are classified, but does not contain any exemption for reporting by intelligence agencies. In contrast, the FMR states that intelligence agencies are exempt from the requirement to report to GSA on government aircraft. 
According to senior GSA officials, although the exemption for intelligence agencies is contained in the FMR—which largely deals with the management of federal property—it applies to reporting requirements in the FTR for senior federal officials who travel on government aircraft. Through executive branch documents, agencies are required to provide data about senior federal official nonmission travel—except for classified trips—to GSA, and GSA has been directed to collect this specified information. Accordingly, through its regulations, GSA has directed agencies to report required information on senior federal official travel; however, its regulations allow certain trips not to be reported, in addition to classified trips. Specifically, GSA exempted intelligence agencies from reporting any information on senior federal travel on government aircraft regardless of whether it is classified or unclassified. This is inconsistent with executive branch requirements we identified. GSA has not articulated a basis—specifically, a source of authority or rationale—that would allow it to deviate from collecting what it has been directed to collect by the President and OMB. This could undermine the purposes of these requirements, which include aiding in the oversight of the use of government aircraft and helping to ensure that government aircraft are not used for nongovernmental purposes. Further, GSA officials stated that it is the agency’s practice to implement regulations that do not introduce real or potential conflicts with other authorities. According to GSA senior officials, the agency is unable to identify the specific historical analysis for inclusion of the intelligence agencies’ reporting exemption in the FMR. GSA added the exemption for intelligence agency reporting of information on government aircraft to the FMR in 2002; however, there is no explanation for the inclusion of the exemption in the regulation or implementing rule.
GSA senior officials told us that the exemption for intelligence agencies enabled intelligence agencies to comply with Executive Order 12333, which requires the heads of departments and agencies with organizations in the intelligence community or the heads of such organizations, as appropriate, to “protect intelligence and intelligence sources and methods from unauthorized disclosure with guidance from the Director of Central Intelligence.” However, GSA has not articulated how an exemption for senior federal official travel data for nonmission purposes is necessary for agencies to comply with Executive Order 12333. Identifying an adequate basis for the intelligence agency reporting exemption or removing the exemption from its regulations if an adequate basis cannot be identified could help GSA ensure its regulations for senior federal official travel comply with executive branch requirements. GSA aggregates the data reported by agencies on senior federal travel to produce publicly available reports describing the use of government aircraft by senior federal officials and how government aircraft are used to support agency missions. Specifically, these Senior Federal Travel Reports provide analysis on the number of trips taken by senior federal officials, the costs of such trips, the number of agencies reporting, and the number and costs of trips taken by cost justification. The reports also list those departments, agencies, bureaus, or services that report no use of senior federal official travel during the reporting time frame. According to the reports, they are intended to provide transparency and better management and control of senior federal official use of government aircraft and the ability to examine costs as they relate to trip use justifications. Specifically, according to the FBI, the exemption contained in FMR § 102-33.385 applies to all data on government aircraft stated in FMR § 102-33.390, which includes senior federal travel information.
The FBI also determined that the exemption applies to all of the FBI, not just the intelligence elements, and includes all flights, both mission and nonmission. However, the Senior Federal Official Travel Reports we reviewed did not indicate that additional flights may have been omitted on the basis of GSA’s exemption for intelligence agencies. GSA senior officials told us that they cannot identify which organizations, components, or offices of departments or agencies within the intelligence community do not report senior federal official travel data to GSA. These officials stated that this is because they do not distinguish between instances where an agency reports no information because the agency is invoking the exemption and instances where the agency reports no information for some other reason, such as that no flights were taken on agency aircraft. Asking agencies to identify instances where they are invoking the exemption would better position GSA to collect and report on this information. Standards for Internal Control in the Federal Government calls for agencies to establish controls, such as those provided through policies and procedures, to provide reasonable assurance that agencies and operations comply with applicable laws and regulations. These standards also call for the accurate and timely recording of transactions and events to help ensure that all transactions are completely and accurately recorded, as well as for an agency to have relevant, reliable, and timely communications. Further, GSA officials stated that it could be possible to obtain follow-up information from agencies that did not provide travel data in order to determine why agencies had not reported data.
Collecting additional information on which agencies are invoking the exemption and including such information in its reports could help ensure more complete reporting on the use of government aircraft, which could help provide GSA with reasonable assurance that its Federal Official Travel Reports are accurate and also provide the public a more comprehensive understanding of these trips. Under executive branch requirements, including the presidential memorandum to the heads of executive departments and agencies, the only exception for the reporting of this kind of travel is for classified trips. However, GSA has established an exception to these reporting requirements that is inconsistent with the executive branch requirements that gave GSA authority to collect senior federal travel data. GSA has not identified the basis—specifically, a source of authority or rationale—for this exemption as applied to senior federal official travel for nonmission purposes that would allow for it to deviate from executive branch specifications. Identifying an adequate basis for the intelligence agency reporting exemption or removing the exemption from its regulations if an adequate basis cannot be identified could help GSA ensure its regulations for senior federal official travel comply with executive branch requirements. In addition, collecting additional information on which agencies or organizations within the federal government are utilizing this exemption, and including such information in its Senior Federal Travel Reports, could help provide GSA with reasonable assurance that its published reports using these data are accurate. We recommend that the Administrator of GSA take the following two actions: To help ensure that GSA regulations comply with applicable executive branch requirements, identify an adequate basis for any exemption that allows intelligence agencies not to report to GSA unclassified data on senior federal official travel for nonmission purposes. If GSA cannot identify an adequate basis for the exemption, GSA should remove the exemption from its regulations.
To help ensure the accuracy of its Senior Federal Official Travel Reports, collect additional information from agencies on instances where travel is not being reported because of an exemption for intelligence agencies, as opposed to some other reason, and include such information in its reports where departmental data do not include trips pursuant to an agency’s exercise of a reporting exemption. We provided a draft of this report to GSA for review and comment. GSA provided written comments which are reprinted in appendix I and summarized below. In commenting on our report, GSA concurred with both of the recommendations and identified actions to address them. In response to our recommendation that GSA identify an adequate basis for the intelligence agency exemption as applied to senior federal official travel for nonmission purposes, or remove it from its regulations, GSA stated that it will remove the exemption. Specifically, GSA stated that it will remove section 102.33.390(b) in Subpart E of the FMR, “Reporting Information on Government Aircraft.” This action will remove the reporting requirement related to senior federal official travel from the FMR and such reporting will continue to be governed by the FTR. As a consequence, the exemption for intelligence agencies, which is only contained in the FMR, will no longer be applicable to unclassified data on senior federal official travel for nonmission purposes. In response to our recommendation that GSA collect and report additional information from agencies on instances where travel is not being reported because of an exemption for intelligence agencies, GSA stated that it will add indicator data elements for agencies to identify when classified data is withheld from the senior federal official travel data they submit to GSA. These actions, when fully implemented, will address both of our recommendations.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Attorney General and other interested parties. This report will also be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9627 or maurerd@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. In addition to the contact named above, Chris Currie, Assistant Director; Chris Ferencik, Analyst in Charge; Janet Temko, Senior Attorney; and Mary Catherine Hult made significant contributions to the work. | The federal government owns or leases over 1,700 aircraft to accomplish a wide variety of missions. Federal agencies are generally required to report trips taken by senior federal officials on their aircraft to GSA unless the trips are classified pursuant to executive branch requirements. In February 2013, GAO reported on DOJ senior executives' use of DOJ aviation assets for nonmission purposes for fiscal years 2007 through 2011. GAO identified several issues with respect to the implementation of a provision of GSA regulations that exempts intelligence agencies from reporting information about government aircraft to GSA and that provision's application to unclassified data on senior federal official travel for nonmission purposes. GAO was asked to review GSA's oversight of executives' use of government aircraft for nonmission purposes. This report addresses the extent to which (1) GSA's reporting exemption for intelligence agencies is consistent with executive branch requirements and (2) GSA ensures the accuracy of its reporting on the use of government aircraft by senior federal officials. 
GAO reviewed relevant executive branch requirements and GSA regulations, as well as data submitted by DOJ to GSA on trips taken by senior federal officials on DOJ aircraft and interviewed GSA officials. The exemption in General Services Administration (GSA) regulations that allows intelligence agencies not to report unclassified data on senior federal official travel for nonmission purposes is not consistent with executive branch requirements, and GSA has not provided a basis for deviating from these requirements. Specifically, executive branch documents—including Office of Management and Budget (OMB) Circular A-126, OMB Bulletin 93-11, and a 1993 presidential memorandum to the heads of all executive departments and agencies—require agencies to report to GSA, and for GSA to collect data, on senior federal official travel on government aircraft for nonmission purposes, except for trips that are classified. As a result, GSA is not collecting all specified unclassified data as directed, and GSA has not provided a basis for deviating from executive branch requirements. Identifying an adequate basis for the intelligence agency reporting exemption or removing the exemption from its regulations if a basis cannot be identified could help GSA ensure its regulations for senior federal official travel comply with executive branch requirements. GSA aggregates data on senior federal official travel to create publicly available Senior Federal Official Travel Reports that, among other things, provide transparency of senior federal officials' use of government aircraft. However, GSA does not determine which agencies' travel is not reported under the exemption for intelligence agencies.
For example, in February 2013 GAO found that the Federal Bureau of Investigation (FBI)—which is a member of the intelligence community—did not report to GSA, based on the intelligence agency exemption, information for 395 unclassified nonmission flights taken by the Attorney General, FBI Director, and other Department of Justice (DOJ) executives from fiscal years 2009 through 2011. However, GSA's Senior Federal Official Travel Reports GAO reviewed for those years provided information on flights for other DOJ components but did not indicate that additional flights may have been omitted on the basis of GSA's exemption for intelligence agencies. GSA senior officials stated that they do not collect this information because they do not distinguish between instances where an agency reports no information because it is invoking the exemption or some other reason, such as that no flights were taken on its aircraft. However, these officials also stated that it could be possible to obtain follow-up information from agencies that did not provide travel data in order to determine why agencies had not reported data. Consistent with Standards for Internal Control in the Federal Government, if GSA collected additional information from agencies on instances where nonmission travel was not reported because of the exemption for intelligence agencies, as opposed to some other reason, and included such information in its reports, it could help GSA ensure the accuracy of its Senior Federal Official Travel Reports. GAO recommends that GSA identify the basis of its reporting exemption, and collect additional information when travel is not being reported. GSA concurred and identified actions to address our recommendations.
The MMPA was enacted in 1972 to ensure that marine mammals are maintained at or restored to healthy population levels. Among other things, this act established the Marine Mammal Commission, which must continually review the condition of marine mammal stocks and recommend to the appropriate federal officials and Congress any steps it deems necessary or desirable for the protection and conservation of marine mammals. In 1994, the MMPA was amended to create a process for establishing take reduction teams to manage incidental takes––serious injury or death––in the course of commercial fishing operations. Commercial fishing in areas where marine mammals swim, feed, or breed is considered one of the main human causes of incidental take. Marine mammals can become entangled in fishing equipment such as nets or hooks, although specific threats vary by the fishing techniques used. Appendix II provides details on commercial fishing techniques that can result in incidental take, including gillnetting, longlining, trap/pot fishing, and trawling, as well as examples of the marine mammals affected. Under the 1994 amendments to the MMPA, NMFS must establish take reduction teams when two requirements are satisfied: (1) NMFS designates the stock as strategic in a final stock assessment report, and (2) the stock interacts with a commercial fishery listed as Category I or II in the current list of fisheries. According to the MMPA, if there is insufficient funding to develop and implement take reduction plans for all stocks that meet the requirements, NMFS should establish teams based on specified priorities. For the majority of stocks, NMFS determines strategic status by assessing whether human-caused mortality exceeds the maximum removal level (see fig. 1).
Human-caused mortality and serious injury (hereafter known as human-caused mortality) is estimated by adding fishery-related mortality estimates to mortality caused by other human sources, as follows:

- Fishery-related mortality and serious injury estimates (hereafter known as fishery-related mortality estimates) are generated based on data from NMFS’s fishery observer programs, whereby individuals board commercial fishing vessels and document instances of incidental take. NMFS also uses anecdotal information from scientists, fishermen, and others about additional incidental take to make these estimates.
- Mortality and serious injury caused by other human sources, such as collisions with large ships or authorized subsistence hunting of marine mammals by Alaska natives, is added to these estimates.

The maximum removal level—technically known as the potential biological removal level––is calculated for each marine mammal stock by multiplying three factors:

- The minimum population estimate (hereafter known as the population size estimate) for the specific stock of marine mammals.
- Two adjustments designed to (1) factor in the expected rate of natural growth for a stock and (2) reduce the risks associated with data uncertainties, especially for stocks listed as endangered or threatened or designated as depleted.

By altering the values of these adjustments, NMFS can make the maximum removal level more conservative––meaning that fewer incidental takes will be allowed––in cases of uncertain data, and therefore make it less likely that it will identify a stock as nonstrategic. The MMPA requires NMFS to assess the status of each stock under its jurisdiction and determine whether it is strategic or not. NMFS publishes annual stock assessment reports that include, among other things, the strategic status of each marine mammal stock and the information used to make these strategic status determinations. Information contained in the reports must be based on the best scientific information available.
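The calculation described above can be illustrated with a short sketch. This is not NMFS's implementation; the function names and example values are hypothetical, and the specific form of the two adjustments (one-half the maximum net productivity rate, and a recovery factor between 0.1 and 1.0) is drawn from the standard potential biological removal formula rather than from this report.

```python
# Illustrative sketch of the maximum removal level (potential biological
# removal) calculation and the strategic-status test described in the MMPA.
# All names and values here are hypothetical examples, not NMFS data or code.

def maximum_removal_level(n_min, r_max, recovery_factor):
    """Maximum removal level = minimum population estimate x one-half the
    maximum net productivity rate x a recovery factor (0.1-1.0). Lower
    recovery factors are used for endangered, threatened, or depleted
    stocks, making the removal level more conservative under uncertainty."""
    return n_min * (0.5 * r_max) * recovery_factor

def human_caused_mortality(fishery_related, other_sources):
    """Human-caused mortality = fishery-related mortality estimate plus
    mortality from other human sources (e.g., ship strikes, authorized
    subsistence hunting)."""
    return fishery_related + other_sources

def is_strategic(mortality, removal_level):
    """A stock is strategic if human-caused mortality exceeds its
    maximum removal level."""
    return mortality > removal_level

# Hypothetical stock: minimum population estimate of 10,000 animals,
# 4 percent maximum net productivity rate, recovery factor of 0.5.
pbr = maximum_removal_level(10_000, 0.04, 0.5)                 # 100.0 per year
mortality = human_caused_mortality(fishery_related=80, other_sources=30)
print(pbr, mortality, is_strategic(mortality, pbr))            # 100.0 110 True
```

Because the removal level scales directly with the recovery factor, halving that factor for a depleted stock halves the allowed take, which is how the adjustments make a strategic designation more likely when data are uncertain.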
NMFS’s Fishery Science Centers are responsible for publishing the stock assessment reports, and the Office of Protected Resources, along with NMFS regional offices, is responsible for using the data from the reports to decide whether to establish a take reduction team. Regional Scientific Review Groups––composed of individuals with expertise in marine mammal biology, commercial fishing technology and practices, and other areas––review all stock assessment reports prior to publication. NMFS also uses fishery-related mortality estimates and maximum removal levels in the stock assessment reports to categorize fisheries in its annual list of fisheries. Under the amended MMPA, commercial fisheries are classified as Category I if they have frequent incidental take of marine mammals and as Category II if they have occasional take. Once a stock is identified as requiring a take reduction team––because it is strategic and interacts with a Category I or II fishery––the MMPA requires NMFS to establish a team and appoint take reduction team members. The MMPA requires the take reduction team members to develop and submit a draft take reduction plan designed to reduce the incidental take of marine mammals by commercial fishing operations. If NMFS lacks sufficient funding to develop and implement a take reduction plan for all stocks that satisfy the MMPA’s requirements, the MMPA directs NMFS to give highest priority to take reduction plans for those stocks (1) for which incidental mortality and serious injury exceed the maximum removal level, (2) with a small population size, and (3) that are declining most rapidly. The MMPA requires that draft take reduction plans be developed by consensus among take reduction team members. If take reduction team members cannot reach consensus, the members must submit the range of possibilities they considered and the views of both the majority and minority to NMFS. 
These draft plans may include regulatory measures (known as take reduction regulations) such as gear modifications or geographical area closures that fisheries would be required to follow and voluntary measures such as research plans for identifying the primary causes and solutions for incidental take or education and outreach for commercial fishermen. After the take reduction team members develop and submit a draft take reduction plan to NMFS, the agency must publish a proposed plan in the Federal Register. The MMPA requires NMFS to take the team’s draft plan into consideration when it develops a proposed plan but does not require adoption of the draft plan. If the team fails to meet its deadline for submitting a draft plan to NMFS, the MMPA requires NMFS to develop and propose a plan on its own. For strategic stocks, the proposed plan must include measures NMFS expects will reduce incidental take below the maximum removal level within 6 months of the plan’s implementation. Once the proposed plan is published in the Federal Register, NMFS must solicit public comments on the plan before the agency finalizes and implements it by publishing a final plan in the Federal Register. NMFS’s development and publication of proposed and final plans are subject to several laws, including the following:

- Endangered Species Act: The act requires consultation among federal agencies including NMFS and the U.S. Fish and Wildlife Service to ensure that any take reduction plan is not likely to jeopardize the continued existence of any endangered or threatened species.
- National Environmental Policy Act: The act requires NMFS to evaluate the likely environmental effects of any take reduction plan using an environmental assessment or, if the plans will likely have significant environmental effects, a more detailed environmental impact statement.
- Regulatory Flexibility Act: The act requires NMFS to assess the economic impact of any take reduction plan on small entities.
The proposed and final take reduction plans are also subject to the requirements of the Coastal Zone Management Act, Information Quality Act, Magnuson-Stevens Act, and the Paperwork Reduction Act, among others. In addition to these laws, the proposed and final take reduction plans are subject to the requirements of four executive orders. For example, one executive order requires NMFS to submit the proposed and final take reduction plans to the Office of Management and Budget (OMB) for review if NMFS or OMB determines that the plan is a significant regulatory action. The 1994 amendments to the MMPA provide deadlines to establish take reduction teams and develop and publish proposed and final plans. Table 1 outlines these statutory requirements and deadlines. Significant limitations in available information make it difficult for NMFS to accurately determine which marine mammal stocks meet the statutory requirements for establishing take reduction teams. The MMPA states that stocks are strategic––one of two triggers for establishing a take reduction team––if their human-caused mortality exceeds maximum removal levels. However, the information NMFS uses to calculate human-caused mortality or the maximum removal level for most stocks is incomplete, outdated, or imprecise, a fact that may lead NMFS to overlook some marine mammal stocks that meet the statutory requirements for establishing take reduction teams and inappropriately identify others as meeting them. NMFS officials said that funding constraints limit their ability to gather sufficient data, although the agency has taken steps to identify its data needs. Our review of stock assessment reports from 2007 found that NMFS was missing key information to make well-informed strategic status determinations for a significant number of marine mammal stocks. 
According to the MMPA, a stock is designated strategic––one of two triggers for establishing a take reduction team—if the human-caused mortality estimate exceeds the maximum removal level. Our review of stock assessment reports from 2007 found that 39 of 113 stocks are either missing human-caused mortality estimates or maximum removal levels, making it impossible to determine strategic status in accordance with the MMPA requirements. As a result, for these 39 stocks, NMFS is determining strategic status without key information and therefore might not accurately determine whether a stock requires a take reduction team. According to NMFS officials, maximum removal level and human-caused mortality estimates are often missing because scientists have been unable to gather the necessary data to make these determinations. In the absence of human-caused mortality estimates or maximum removal levels, NMFS must make more subjective––and potentially inaccurate––strategic status determinations for some marine mammal stocks. In these cases, NMFS guidance directs scientists to use professional judgment to determine whether a stock is strategic. According to NMFS officials, scientists may use a variety of sources to make these decisions, including scientists’ field observations of the marine mammals. However, Marine Mammal Commission officials we spoke with stated that decisions based on professional judgment are less certain than those based on data about human-caused mortality and maximum removal levels and could result in some marine mammal stocks that should be identified as strategic not being identified as such. Even in cases where the stock assessment reports include human-caused mortality estimates and maximum removal levels for a stock, the human-caused mortality estimates may be inaccurate because the information on which they are based is incomplete. Human-caused mortality estimates are based in part on fishery-related mortality.
However, according to Marine Mammal Commission officials, in some cases, mortality may be occurring in fisheries where NMFS does not systematically collect mortality information. Specifically, NMFS’s observer programs––a key source of data NMFS uses to calculate fishery-related mortality estimates––gather information for only half of the total fisheries, but incidental take may also be occurring in some fisheries that are not observed, especially those that are classified as Category I or II. Observer program officials told us that funding limitations prohibit coverage of all Category I or II fisheries. In addition, our review of 2007 stock assessment reports found instances where fishery-related mortality estimates were missing important information. For example, NMFS scientists identified spinner and bottlenose dolphins in Hawaii as nonstrategic, but raised concerns about these decisions because the estimates of fishery-related mortality for the stocks were likely to be incomplete. Specifically, they stated that while the agency has observer program data showing that incidental take from a longline fishery was below the maximum removal level, it did not have observer programs in gillnet fisheries that were also likely to incidentally take the stocks, and therefore might have increased the fishery-related mortality estimate if these fisheries had been observed. Furthermore, NMFS, Marine Mammal Commission, and Scientific Review Group scientists expressed concern that strategic status decisions for some stocks may not be accurate because NMFS does not have all of the information needed to define the stocks accurately. Under the MMPA, marine mammal species are treated as stocks—populations located in a common area that interbreed when mature. However, a 2004 NMFS report found that the stock definitions for 61 percent of marine mammal stocks were potentially not accurate.
For example, a stock definition would not be accurate if NMFS defined two distinct populations of a marine mammal species incorrectly as one stock. If one of these two populations has a high level of incidental take and the other does not, the combined human-caused mortality estimate might not be high enough to result in a strategic status determination. However, if the two distinct populations were defined as two stocks, the high incidental take of one stock could result in it being considered strategic and triggering one of the requirements for take reduction team establishment. The Alaska Scientific Review Group has raised concerns that inaccurate stock definitions may be leading to incorrect strategic status designations. Specifically, in a 2007 letter to NMFS, the review group said that recent scientific information indicates that the current stock definitions might inappropriately consolidate harbor seal populations in Alaska. The review group chair said that this consolidation may lead to some harbor seal populations being incorrectly categorized as nonstrategic. Our review of a sample of stock assessment reports found that approximately 11 of the 74 stocks used outdated information––information that is 8 years old or older––to calculate the maximum removal level, thereby reducing the reliability of the strategic status determinations for these stocks. According to NMFS guidelines, information that is 8 years old or older is generally unreliable for estimating the current stock population. NMFS scientists estimate the size of a stock’s population to determine its maximum removal level. If human-caused mortality exceeds maximum removal levels, the stock is considered strategic. However, when the data are 8 years old or older, scientific research has shown that marine mammal stocks could have declined significantly since the data were collected.
This could lead NMFS to inaccurately designate a stock as nonstrategic and therefore not establish a take reduction team when one might be needed. In addition, if a stock’s population has increased significantly during the time period since the last estimate was made, NMFS may inaccurately designate the stock as strategic. Furthermore, our review found that for approximately 21 of the 74 stocks, the population size information was between 5 and 8 years old, a situation that is less of a concern than data that are 8 years old or older, but could also lead NMFS to make an inaccurate strategic stock determination. NMFS and Marine Mammal Commission scientists stated that scientists’ confidence in the accuracy of the information used to estimate population size begins to decrease even before 8 years. Also, a 2004 NMFS report to Congress stated that estimates for population size based on information 5 years old or older may not accurately represent a marine mammal stock’s current population size. Our review of a sample of stock assessment reports from 2007 frequently found that NMFS used population size or fishery-related mortality estimates that were less precise than NMFS’s guidelines recommend, decreasing the likelihood that strategic status determinations based on this information are accurate. Furthermore, we found that NMFS often could not identify the level of precision for fishery-related mortality estimates. Specifically, we found that approximately 48 of 74 stocks had population size estimates—used to determine maximum removal levels—that were less precise than NMFS guidelines recommend. According to NMFS officials, one reason for the lack of precision is that the agency did not have adequate funding to conduct thorough population surveys. When conducting a marine mammal population survey, scientists document how frequently they observe marine mammals during a set period of time and use this information to estimate total population size.
The duration of the survey and the number of scientists observing different areas within the stock’s natural habitat affect the extent to which the survey is thorough and the population estimate is precise. Scientists could not calculate the precision of fishery-related mortality estimates—used to determine human-caused mortality estimates––for approximately 48 of the 74 stocks. In addition, the estimates for approximately 24 of the remaining 26 stocks were less precise than NMFS guidance recommends. Specifically, precision cannot be calculated when the sources of mortality data are anecdotal or the fishery-related mortality estimate is zero. For these cases, NMFS does not have a systematic way of determining how precise the estimates are and therefore how much certainty it should have in their accuracy. NMFS and Marine Mammal Commission officials identified inadequate observer coverage as one of the main reasons for imprecise mortality estimates. According to National Observer Program officials, 52 percent of Category I or II fisheries have observer coverage; however, only 27 percent of Category I or II fisheries have adequate or near-adequate coverage levels. Without adequate observer coverage in fisheries likely to cause incidental take of marine mammals, estimates will be less precise because they will be based on fewer data. NMFS and Marine Mammal Commission officials also stated that current funding levels for the observer program are inadequate to gather enough data on fishery-related mortality. For the stocks for which we found that NMFS could calculate the level of precision for population size or fishery-related mortality estimates but these estimates were less precise than NMFS’s guidance recommends, NMFS policy guidelines directed scientists to make adjustments to these estimates that increased the likelihood that the stocks were categorized as strategic. 
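The data-age and precision limitations described in the preceding paragraphs amount to a simple screening rule, which can be sketched as follows. The function names and category labels are hypothetical; NMFS's actual guidelines are more detailed than this illustration.

```python
# Illustrative data-quality screen based on the thresholds described in this
# report: population size estimates 8 or more years old are generally
# unreliable, estimates between 5 and 8 years old carry declining confidence,
# and mortality estimates whose precision cannot be calculated (e.g., those
# based on anecdotal data, or estimates of zero) cannot be adjusted for
# uncertainty. Names and labels here are hypothetical, not NMFS terminology.

def screen_population_estimate(age_in_years):
    """Classify a population size estimate by the age of its underlying data."""
    if age_in_years >= 8:
        return "unreliable"            # generally unreliable per NMFS guidelines
    elif age_in_years >= 5:
        return "declining confidence"  # confidence decreases even before 8 years
    return "current"

def can_adjust_for_imprecision(precision_known):
    """Conservative adjustments can be applied only when the precision of an
    estimate can be calculated; anecdotal or zero estimates cannot be adjusted."""
    return precision_known

print(screen_population_estimate(9))   # unreliable
print(screen_population_estimate(6))   # declining confidence
print(screen_population_estimate(2))   # current
```

The screen makes explicit why stocks whose mortality estimates lack a calculable precision are the weakest case: unlike merely imprecise estimates, they cannot be adjusted toward a conservative strategic designation at all.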
These adjustments make it less likely that imprecision in the estimates will lead NMFS to overlook a stock that should be covered by a take reduction team, but NMFS officials told us it is possible these stocks would not be designated as strategic if more precise estimates had been available and therefore these adjustments had not been necessary. However, for the approximately 48 of 74 stocks where NMFS cannot calculate the precision of a fishery-related mortality estimate––even though high levels of imprecision may exist––it cannot make these adjustments and therefore may either overlook some stocks that should be designated as strategic or inaccurately designate others as nonstrategic. Figure 2 summarizes key data limitations identified earlier in this report. NMFS officials acknowledged limitations in the information available to determine strategic status and the potential consequences, but identified funding constraints as an impediment to addressing these limitations. Specifically, an NMFS official stated that the agency has insufficient data to make informed management decisions regarding marine mammals in most instances, and said that a stock with sufficient data is an exception. However, while NMFS officials acknowledged these significant data limitations and their potential consequences, they also stated that they use the best scientific information available to make these determinations, as required by the MMPA. In addition, NMFS and Marine Mammal Commission officials stated that funding constraints have limited the agency’s ability to gather the data that it needs to make accurate decisions about which stocks meet the statutory requirements for establishing take reduction teams. NMFS has taken some steps to identify data limitations and proposed some actions to alleviate them.
For example, a 2004 NMFS study found that the agency must significantly enhance the quantity and quality of its stock assessment data and analyses to meet MMPA mandates and outlined the actions and resource increases necessary to achieve these enhancements. According to NMFS officials, the agency received funding to begin implementing the study’s recommendations in fiscal year 2008 but the program lost other funding sources, so the new funding did not result in an overall increase in resources to improve these data. In addition, NMFS is currently completing a study to assess its sources of fishery-related mortality information. According to agency documents, this report will include an evaluation of the adequacy of the scientific techniques and existing observer coverage levels used to collect these data. Nonetheless, marine mammal scientists expressed interest in having more information about the quality of the data used to determine the strategic status for each stock. Specifically, Marine Mammal Commission officials supported implementing a process to identify stocks where the scientists have low confidence in the quality of the data. According to these officials, if this occurred, interested parties would gain a better understanding of the data underlying strategic status determinations and therefore would have more information to judge the usefulness of the conclusions made from those data. Also, marine mammal scientists said that a process to identify stocks with poor data could make it easier to highlight stocks in need of additional data collection efforts. On the basis of NMFS’s available information, we identified 30 marine mammal stocks that met the MMPA’s requirements for establishing a team, and NMFS has established six teams that cover 16 of them.
NMFS has not established teams for the 14 other marine mammals that have met the MMPA’s requirements for establishing a team for several reasons: (1) the agency lacked sufficient funds to establish a team, (2) information about the stock’s population size or mortality is outdated or incomplete and the agency lacks funds to obtain better information, (3) commercial fisheries account for little or no incidental take, or (4) the population size is increasing; therefore establishing a team for the stock is a lower priority. Since 1994, NMFS has established eight take reduction teams, six of which are still in existence––the Atlantic Large Whale, Atlantic Trawl Gear, Bottlenose Dolphin, Harbor Porpoise, Pacific Offshore Cetacean, and Pelagic Longline. These six teams cover 16 marine mammal stocks that have met the MMPA’s requirements for establishing a take reduction team. The MMPA gives NMFS discretion to determine how teams can be structured. For example, NMFS can establish a take reduction team for (1) one stock that interacts with multiple fisheries, such as the Bottlenose Dolphin take reduction team; (2) multiple stocks within a region, such as the Atlantic Large Whale take reduction team; or (3) multiple stocks that interact with one fishery, such as the Pacific Offshore Cetacean take reduction team. The existing take reduction teams—five of which are located in the Atlantic region and one in the Pacific—are described in table 2. NMFS has not established take reduction teams for 14 other marine mammals that have also met the MMPA’s requirements for the establishment of a take reduction team. Table 3 lists these 14 marine mammals. NMFS has not established teams for these 14 marine mammal stocks for the following reasons: Lack of funding. Specifically, NMFS officials told us they did not establish a take reduction team for one marine mammal––the false killer whale––due to lack of funding. 
False killer whales found in the waters off the Hawaiian Islands have met the MMPA’s requirements for establishing a team since 2004 because the stock has been strategic and interacts with a Category I longline fishery. Furthermore, since 2004, estimates of fishery-related mortality of false killer whales have been at levels greater than their maximum removal level, according to stock assessment reports. According to the most recently available information, the false killer whale is the only marine mammal for which incidental take by commercial fisheries is known to be above its maximum removal level that is not covered by a take reduction team. Since 2003, the Pacific Scientific Review Group has recommended that NMFS establish a team for these whales. Although NMFS officials told us that, in accordance with the MMPA, the false killer whales are their highest priority for establishing a team, they said the agency does not have the funds to do so. NMFS officials told us the agency instead decided to focus what they characterized as their very limited funding on the already established take reduction teams. However, in the absence of a take reduction team, the Hawaii longline fishery continues to operate without a take reduction plan designed to reduce incidental take of false killer whales. Outdated or incomplete data. NMFS has not established take reduction teams for eight marine mammals that interact with commercial fisheries in the Gulf of Mexico and the waters off of Alaska’s coast because the information the agency has on them is too outdated or incomplete for agency officials to determine whether these marine mammals should be considered a high priority for establishing a take reduction team. Also, take reduction team members need better information about mortality before they can propose changes to fishing practices in a draft take reduction plan.
However, because take reduction teams have not been established for these eight marine mammal stocks, fisheries continue to operate without take reduction plans that could decrease incidental take of these stocks. Specifically, NMFS has not established teams for two stocks of bottlenose dolphins found in the Gulf of Mexico and six stocks in the waters off Alaska’s coast, including three stocks of harbor porpoises, two stocks of Steller sea lions, and one stock of humpback whales. Two stocks of bottlenose dolphins found in the Gulf of Mexico have met the MMPA’s requirements for establishing a team since 2005 because they have been strategic and interact with two Category II fisheries. According to stock assessment reports, the best scientific information available about population size for these two stocks is 8 years old or older. According to NMFS documents, using such outdated information increases the possibility that significant population changes of which NMFS is unaware could have occurred. Agency officials told us that the 2008 survey to collect new population size estimates was canceled due to insufficient funding. Furthermore, according to stock assessment reports, the available mortality estimates are incomplete because they are based on anecdotal information. Consequently, scientists can use this information only to make a minimum estimate of the number of marine mammals being killed or injured. Agency officials told us they would like to begin observing the two Gulf of Mexico fisheries, but are currently unable to do so due to funding constraints. Similarly, NMFS has not established take reduction teams due to outdated information for three stocks of harbor porpoises found in the waters off Alaska’s coast that have met the MMPA’s statutory requirements for establishing a team since 2006 because they have been strategic and interact with multiple Category II fisheries. 
According to stock assessment reports, the best scientific information available about population size for harbor porpoises is outdated because the estimates are 8 years old or older. NMFS officials told us harbor porpoises are a major conservation concern for the agency, but they said funding constraints have limited their ability to collect new population size estimates for these marine mammals. In addition, NMFS has not established take reduction teams due to incomplete information for two stocks of Steller sea lions that have met the MMPA’s requirements for establishing a team since 1996 because they have been strategic and interact with multiple Category II fisheries. NMFS officials told us the fishery-related mortality information for these stocks is incomplete because they are uncertain whether incidental take is occurring in commercial fisheries not covered by observer programs. According to these same officials, lack of funding has limited the agency from obtaining more complete fishery-related mortality information for Steller sea lions. Last, NMFS has not established a take reduction team due to outdated information for the Western North Pacific stock of humpback whales, which has met the MMPA’s requirements for establishing a team since 2006, because it has been strategic and interacts with two Category II fisheries. According to the stock assessment report, the best scientific information available about population size for these humpback whales is outdated because it is 8 years old or older, but agency officials told us funding constraints limit their ability to collect new information. Commercial fisheries account for little or no incidental take. 
NMFS has not established teams for four marine mammals––the Hawaii stock of sperm whales, Western North Atlantic stocks of Cuvier’s beaked whales and Mesoplodont beaked whales, and East North Pacific stock of northern fur seals––that have met the MMPA requirements for establishing a team because, according to agency officials, commercial fisheries account for little or no incidental take of these stocks. According to our analysis, these sperm whales meet the statutory requirements for a team because they are listed as an endangered species under the ESA, and therefore are a strategic stock, and they interact with a Category I fishery. However, NMFS officials told us that the commercial fishery with which these sperm whales interact accounts for little or no incidental take, and therefore it would be inappropriate to establish a team for them. Similarly, NMFS’s 2007 stock assessment reports state that acoustic activities, such as sonar used by the U.S. Navy, may contribute to the mortality and serious injury of Cuvier’s and Mesoplodont beaked whales, and non-human-related causes of death that are unknown to scientists are contributing to the population decline of northern fur seals. NMFS officials told us it would be inappropriate to establish take reduction teams for these marine mammal stocks because mortality and serious injuries are not being caused by interaction with a commercial fishery. According to NMFS officials, they proposed amending the MMPA in 2005 to require that take reduction teams be established only for strategic stocks that interact with Category I or II fisheries and that have some level of fishery-related incidental take of those stocks, but Congress took no action on the proposal at that time. Population size is increasing. 
NMFS officials said they have not established a take reduction team for one marine mammal stock that meets the statutory requirements––the Central North Pacific stock of humpback whales––because of insufficient funding; they added that this stock would be a low priority in any case because its population size is increasing. This stock is strategic because it is listed as an endangered species under the ESA and it interacts with a Category I fishery off the coast of Hawaii and multiple Category II commercial fisheries in the waters off Alaska’s coast. However, because its population size is increasing, NMFS officials consider the stock to be a lower priority for establishing a team than stocks with declining populations.

For the five take reduction teams subject to the MMPA’s deadlines, NMFS has had limited success in meeting the deadlines for a variety of reasons. NMFS did not meet the statutory deadlines to establish take reduction teams for three of the five teams, in one case due to a lack of information about population size or mortality. In addition, two of the five teams did not submit their draft take reduction plans to NMFS within the statutory deadlines, in one case because the team could not reach consensus on a plan. NMFS did not publish proposed take reduction plans within the statutory deadlines for any of the five teams because of the time needed to complete the federal rulemaking process, among other things. However, NMFS has complied with the statutory deadlines for the public comment periods on the proposed plans for all five teams. Finally, NMFS did not publish final take reduction plans within the statutory deadlines for four of the five teams because of the time associated with analyzing public comments, among other things.
According to the MMPA, NMFS has 30 days to establish a take reduction team after a stock is listed as strategic in a final stock assessment report and is listed as interacting with a Category I or II fishery in the current list of fisheries. NMFS established two teams within this statutory deadline: the Harbor Porpoise and Pacific Offshore Cetacean. However, NMFS did not meet the statutory deadlines for establishing three teams—the Atlantic Large Whale, Pelagic Longline, and Bottlenose Dolphin. These teams were established from 3 months to more than 5 years after their statutory deadlines (see table 4). According to NMFS officials, the reasons for delays in establishing these take reduction teams include the following:

Atlantic Large Whale: It took NMFS officials more than 30 days to identify sufficient take reduction team members to represent the stocks’ large habitat, which stretches from Maine to Florida.

Pelagic Longline: After 2001, NMFS officials were waiting to see if modifications to the longline fishery, intended to reduce the incidental take of billfish and sea turtles, would also reduce incidental take of pilot whales, which would eliminate the need for this team. However, in 2002, an environmental group sued NMFS because of the agency’s alleged failure to establish take reduction teams for marine mammals that met the statutory requirements. According to an agreement settling the lawsuit, NMFS had to conduct surveys and develop new population size estimates for pilot whales. In addition, it had to establish a take reduction team for the Atlantic portion of a large pelagic longline fishery by June 30, 2005.

Bottlenose Dolphin: NMFS lacked information about population size and mortality for bottlenose dolphins that take reduction team members need to consider before they can propose changes to fishing practices in a draft take reduction plan, and NMFS scientists recommended that the agency obtain better information before establishing a team.
According to an NMFS official, mortality information for bottlenose dolphins collected between 1995 and 1998 was published in the 2000 stock assessment report. As a result of this new information, NMFS established a team in 2001.

According to the MMPA, after NMFS establishes a take reduction team, the team must develop a draft take reduction plan and submit it to NMFS within 6 months if it covers strategic stocks whose level of human-caused mortality exceeds the maximum removal level. However, if the level of human-caused mortality for strategic stocks covered by the plan is below the maximum removal level, as it is for the Pelagic Longline team, then the team has 11 months to develop a draft plan and submit it to NMFS. Three of the five teams submitted their draft plans within the statutory deadlines. However, two teams—the Pelagic Longline and Bottlenose Dolphin—submitted their draft take reduction plans to NMFS 17 and 23 days, respectively, after their statutory deadlines (see table 5). According to NMFS officials, the reasons for delays in submitting draft take reduction plans to NMFS include the following:

Pelagic Longline: The unexpected death of a take reduction team member 1 week before the plan’s due date delayed the team’s submission to NMFS. This team member was a key liaison to the fishing industry, working with commercial fishermen to obtain agreement on potential take reduction plan measures.

Bottlenose Dolphin: The take reduction team found it difficult to reach consensus about modifications to fishing practices to help reduce incidental take because of the large number of team members involved (44) representing multiple types of fisheries.
For example, the Bottlenose Dolphin team includes four gillnet, one trap/pot, two seine, and two stop/pound net fisheries, making it difficult to agree on modifications to fishing practices. See appendix II for a description of these fishing techniques.

According to the MMPA, once NMFS receives a draft take reduction plan, it must publish a proposed plan and implementing regulations in the Federal Register within 60 days. NMFS missed the statutory deadline for publishing proposed plans and implementing regulations for all five teams, publishing them 5 days to more than 2 years after the statutory deadlines (see table 6). According to NMFS officials, the reasons for delays in publishing proposed plans and implementing regulations include the following:

Atlantic Large Whale: Agency officials submitted the proposed plan for publication within the statutory deadline but told us that the Federal Register did not print the notice containing the proposed take reduction plan until 5 days after the statutory deadline.

Pacific Offshore Cetacean: The former team coordinator for this team said that the proposed plan was delayed because of the time it took to comply with various applicable laws. For example, NMFS is required to review the proposed plan and consider its effects on small businesses and other small entities under the Regulatory Flexibility Act and prepare an environmental assessment under the National Environmental Policy Act, among other requirements. Developing and drafting an environmental assessment is a labor-intensive process requiring coordination among scientists, economists, and policymakers.

Harbor Porpoise: According to NMFS officials, they delayed preparing the proposed plan for publication in the Federal Register because they were waiting to see if closures of some fishing areas to protect fish would also reduce incidental take of harbor porpoises.
In addition, NMFS scientists determined that this stock of harbor porpoises was migratory and interacting not only with the Gulf of Maine fisheries but with mid-Atlantic fisheries as well. As a result of this finding, NMFS established another team, the Mid-Atlantic take reduction team, for the mid-Atlantic fisheries. NMFS delayed the publication of the proposed take reduction plan for the Gulf of Maine fisheries until the Mid-Atlantic team developed and submitted a draft plan. Ultimately, the two plans were combined and published as a single plan for both the Gulf of Maine and mid-Atlantic fisheries.

Pelagic Longline: According to NMFS officials, a combination of factors caused the proposed plan to be published in the Federal Register almost 2 years after the deadline. Take reduction team coordinators are responsible for coordinating NMFS’s internal review and approval of take reduction plans, crafting the regulatory language for the plan, and submitting the proposed plans for publication in the Federal Register. Because the team coordinator position was vacant for approximately 16 months, completion of these tasks was delayed.

Bottlenose Dolphin: A combination of factors caused this proposed plan to be published in the Federal Register 2 years after the deadline, according to NMFS officials. The publication of the proposed plan was delayed because NMFS asked team members to reconvene when it became clear that the recommended regulatory measures would not reduce incidental take to levels below the maximum removal level, as required by the MMPA. Although NMFS can propose a plan of its own that deviates from the team’s draft plan, officials from NOAA’s Office of General Counsel told us NMFS prefers to wait until the team completes its work and submits a draft plan.
After they reconvened, the take reduction team members developed and submitted a revised draft plan; however, because the team coordinator position was vacant for about 8 months, preparing the proposed plan for publication was delayed. Additionally, because NMFS combined two rules––to benefit both sea turtles and bottlenose dolphins––into one, the proposed plan was delayed due to the time needed to update an environmental assessment required under the National Environmental Policy Act and other associated documents. Also, the proposed plan was delayed because of the time it took to comply with various laws and executive orders. Finally, the Office of Management and Budget took 90 days to review the proposed plan—the maximum time allowed for such a review. This review by itself exceeded the MMPA’s 60-day deadline. NMFS officials told us it is extremely difficult for the agency to meet the MMPA’s deadline for this step in the process. As the examples above demonstrate, NMFS officials provided us with a variety of reasons for delays in meeting the statutory deadlines for publishing proposed plans in the Federal Register; however, the agency has not conducted a comprehensive analysis that would identify all of the tasks that must be completed during this stage in the process, along with the total time needed to complete them. NMFS stated that it has not conducted such an analysis because, in some cases, the documents needed are 10 years old and are not available electronically.

According to the MMPA, NMFS must hold a public notice and comment period on the proposed plan and implementing regulations for up to 90 days after the proposed plan’s publication in the Federal Register. The public comment period is an opportunity for interested persons to participate in the development of a take reduction plan by submitting their views and concerns about the proposed plan.
For all five teams—the Atlantic Large Whale, Bottlenose Dolphin, Harbor Porpoise, Pacific Offshore Cetacean, and Pelagic Longline—NMFS has complied with the statutory deadline each time.

According to the MMPA, once the public comment period ends, NMFS must publish the final plan and implementing regulations in the Federal Register within 60 days. NMFS missed the statutory deadline for four teams but met it for the Harbor Porpoise team. According to our analysis, the delays ranged from 8 days to just over 1 year (see table 7). According to NMFS officials, the reasons for delays in publishing final plans and implementing regulations in the Federal Register include the following:

Atlantic Large Whale: The delay was due, in part, to NMFS’s efforts in responding to the large number of public comments received on the proposed plan.

Pacific Offshore Cetacean: Because the plan included a fishing gear modification, NMFS waited until the preliminary results of a gear research experiment indicated that the modification reduced incidental take before publishing the final plan. The experiment tested the effectiveness of acoustic devices, known as pingers, that are attached to fishing nets and emit high-pitched sounds so that marine mammals would avoid the nets.

Bottlenose Dolphin: According to NMFS officials, the delay was the result of the time needed to review and analyze over 4,000 comments the agency received during the public comment period and the 90 days the Office of Management and Budget took to review the final take reduction plan before NMFS could publish it in the Federal Register.

NMFS does not have a comprehensive strategy––identified as a key principle by the Government Performance and Results Act of 1993––for assessing the effectiveness of take reduction regulations once they have been implemented. The Government Performance and Results Act of 1993 provides a foundation for examining agency performance goals and results.
Our work related to the act and the experience of leading organizations have shown the importance of developing a comprehensive strategy for assessing program effectiveness that includes, among other things, program performance goals that identify the desired results of program activities and reliable information that can be used to assess results. In the context of NMFS’s efforts to measure the success of take reduction plans and implementing regulations, such a strategy would include, at a minimum, (1) performance goals that identify the desired outcomes of the take reduction regulations; (2) steps for assessing the effectiveness of potential take reduction regulations, such as fishing gear modifications, in achieving the goals; (3) a process for monitoring the fishing industry’s compliance with the requirements of the take reduction regulations; and (4) reliable data assessing the regulation’s effect on achieving the goals. Instead of such a comprehensive strategy, we found that although NMFS uses short- and long-term goals established by the MMPA to evaluate the success of take reduction regulations, these goals and the data that NMFS uses to measure the impact of the take reduction regulations have a number of limitations. In addition, while NMFS has taken steps to identify the impact of proposed take reduction regulations prior to their implementation, the agency has limited information about the fishing industry’s compliance with the regulations once they have been implemented. As a result, when incidental takes occur in fisheries covered by take reduction regulations, it is difficult for NMFS to determine whether the regulations were not effective in meeting the MMPA’s goals or whether the fisheries were not complying with the regulations.

The MMPA identifies, and NMFS further defines, short- and long-term goals for reducing incidental take of marine mammals that take reduction regulations should achieve.
Specifically, the MMPA set a short-term goal of reducing incidental take––also known as fishery-related mortality––for strategic stocks below the maximum removal level within 6 months of a plan’s implementation and set a long-term goal of reducing fishery-related mortality to insignificant levels approaching a zero mortality and serious injury rate within 5 years of a plan’s implementation, which NMFS generally defines as 10 percent of the maximum removal level. NMFS officials told us that NMFS staff and take reduction team members review whether or not the goals have been met for the stocks covered by their teams. However, NMFS officials, Marine Mammal Commission officials, and a Scientific Review Group chair all considered the 6-month time frame for meeting the short-term goal to be unrealistic. Specifically, some noted that due to the extensive time needed to gather and publish data on the maximum removal level and fishery-related mortality estimates, NMFS does not have the necessary information to assess the goal within the 6-month time frame. An NMFS official also noted that fishing changes over the year; therefore, assessing whether fishery-related mortality is below the maximum removal level during a 6-month time frame may not consider mortality that may occur during both the busiest and the slowest fishing seasons. While the MMPA sets this 6-month goal, it does not impose consequences on NMFS or the regulated fisheries if the goal is not met. Furthermore, these goals may not help NMFS assess the success of the regulations because we found that there was not always greater success in meeting the goals after take reduction regulations were implemented than before they were implemented. Also, if the goals had been met for a stock in a given year, in some cases the goals did not continue to be met in the following years.
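The two goals described above reduce to simple threshold comparisons. The sketch below is illustrative only (not NMFS code); it assumes the 10-percent threshold that NMFS generally uses to define the long-term goal and treats "below" as a strict inequality.

```python
# Illustrative sketch of the MMPA's short- and long-term take reduction
# goals as threshold checks (hypothetical helper, not NMFS code).

def goal_status(fishery_mortality, maximum_removal_level,
                long_term_fraction=0.10):
    """Check a stock's fishery-related mortality against the MMPA goals.

    Short-term goal: mortality below the maximum removal level.
    Long-term goal: mortality below roughly 10 percent of that level,
    NMFS's general definition of an insignificant level approaching zero.
    """
    return {
        "short_term_met": fishery_mortality < maximum_removal_level,
        "long_term_met": fishery_mortality
        < long_term_fraction * maximum_removal_level,
    }

# A stock with 3 deaths per year against a maximum removal level of 20
# meets the short-term goal (3 < 20) but not the long-term goal (3 >= 2).
print(goal_status(3, 20))
```

As the example shows, a stock can satisfy the short-term goal for years while remaining well short of the long-term goal.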
Specifically, we found that for two stocks, the short-term goal had been met prior to the regulations being implemented but was no longer being met in 2007. In addition, for two other stocks, the long-term goal had been met prior to implementation of the regulations but was no longer being met in 2007. Furthermore, the short-term goal was being met for two stocks in 2007 and the long-term goal for two other stocks, but these goals had already been met prior to implementation of the take reduction regulations. In cases where the goals were met prior to the implementation of take reduction regulations, the goals cannot be used to determine the regulations’ impact on reducing take. In addition, according to NMFS officials, changes to the marine environment that happen during the same time period as the implementation of take reduction regulations make it difficult to assess whether the regulations are the reason that the short- and long-term goals for a stock have been achieved or whether other changes are responsible. Specifically, state or federal fishing regulations unrelated to the take reduction team process may result in less fishing in the fisheries covered by the take reduction team. In such a scenario, fishery-related mortality may decrease because there are fewer opportunities for fishing vessels to interact with marine mammals. Therefore, a lower level of fishery-related mortality may lead to achievement of the MMPA’s goals for a stock even if the take reduction regulations themselves were not the primary reason for the lower level of incidental take. Moreover, limitations in some of the data used to determine whether the MMPA’s short- and long-term goals for reducing incidental take by commercial fisheries have been met may lead to inaccurate conclusions about the effectiveness of the take reduction regulations.
We reviewed the stock assessment reports for 9 of the 10 strategic stocks and all 3 of the nonstrategic stocks covered by take reduction regulations and found that for 2007, the short-term goal had been achieved for 4 of the 9 strategic stocks and the long-term goal had been achieved for 3 of the 12 strategic and nonstrategic stocks. However, we also found that the information used to determine the maximum removal level or the fishery-related mortality estimate for 6 of the 9 strategic stocks covered by these regulations was less precise than NMFS guidelines recommend. Because these are the two key sources of information for determining whether the MMPA’s short- and long-term goals have been met, this imprecision may cause NMFS to incorrectly assess whether the take reduction regulations have been successful. NMFS officials stated that limitations in the data make it difficult to know the reason for changes in meeting the goals from one year to another. For example, we found that the short-term goal for the Gulf of Maine stock of humpback whales covered by the Atlantic Large Whale take reduction team had been met prior to implementation of the take reduction regulations; however, according to the stock assessment report, it was not being met in 2007. Meanwhile, between the year before the regulations were implemented and 2007, NMFS altered its stock definition for these marine mammals in a way that decreased the number of animals included in the population size estimate—a key aspect of determining the maximum removal level. This change made the maximum removal level much lower than it had been before the regulations were implemented, making it more difficult to achieve the goals. Because of this change in NMFS’s approach to calculating the maximum removal level, it is difficult to determine whether ineffectiveness of the take reduction regulations or the change in the maximum removal level led to the short-term goal no longer being met for this stock.
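The humpback example turns on how the maximum removal level is calculated. Under the MMPA, this level (the potential biological removal) is the product of the minimum population estimate, half the maximum net productivity rate, and a recovery factor, so shrinking the population estimate shrinks the level proportionally. The values in the sketch below are hypothetical, not the actual figures for this stock.

```python
# Maximum removal level (potential biological removal) under the MMPA:
# PBR = N_min * (R_max / 2) * F_r

def maximum_removal_level(n_min, r_max, recovery_factor):
    """n_min: minimum population estimate; r_max: maximum net productivity
    rate; recovery_factor: between 0.1 and 1.0, lower for depleted stocks."""
    return n_min * (r_max / 2.0) * recovery_factor

# Hypothetical values: redefining a stock so that N_min drops from
# 1,000 to 600 animals lowers the removal level from 10 to 6 per year,
# making the short- and long-term goals correspondingly harder to meet.
before = maximum_removal_level(1000, 0.04, 0.5)  # 10.0
after = maximum_removal_level(600, 0.04, 0.5)    # 6.0
```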
NMFS has assessed the likelihood that proposed take reduction regulations would achieve the short- and long-term goals of reducing incidental take for all four teams with final take reduction plans and regulations. Specifically, for all four plans, scientists evaluated whether key proposed measures for modifying fishing gear or changing the times or areas where fishing could occur were likely to decrease incidental take. For example, NMFS scientists analyzed data from previous incidental take in the gillnet fisheries of concern for bottlenose dolphins and found that incidental take had occurred at a higher rate on the vessels that used nets with larger mesh openings. Because this type of gear would be restricted under the proposed regulations, NMFS had reason to believe that these gear restrictions would result in reduced incidental take of bottlenose dolphins. Similarly, according to the environmental assessment report for the Harbor Porpoise take reduction team, a controlled experiment tested the effectiveness of acoustic devices—often called pingers—attached to fishing nets. Pingers emit a high-pitched sound that harbor porpoises can hear, causing them to avoid fishing nets. This experiment allowed NMFS scientists to predict that proposed regulations to implement pingers would likely result in a decline of incidental take.

Although NMFS has conducted some assessments of the likelihood that proposed take reduction regulations will achieve the goals of reducing incidental take, it has limited information about the extent to which fisheries comply with take reduction regulations once they have been implemented. As a result, when incidental takes occur in fisheries covered by take reduction regulations, it is difficult for NMFS to determine whether the regulations were not effective in meeting the MMPA’s goals or whether the fisheries were not complying with the regulations.
Specifically, we determined that NMFS does not have comprehensive approaches to measure the extent to which fisheries comply with the regulations for the four take reduction plans that it has implemented. However, for all of these implemented regulations, NMFS has some—albeit limited—information from fisheries observer or enforcement programs that provides an indication of whether fisheries are complying with the regulations. For example, when incidental take of harbor porpoises in the fisheries covered by the Harbor Porpoise take reduction team recently increased, NMFS scientists used observer information about incidental take to determine whether or not these takes occurred when vessels were complying with the requirement to use pingers on their nets. However, according to the scientists, the usefulness of this information in determining actual compliance was limited because observers could only identify whether the pingers were attached to the net, not whether these pingers functioned properly. On the Pacific Offshore Cetacean team, the team coordinator stated that in the past, NMFS has received information from the observer program about fishing vessels monitored by observers that were not in compliance with the take reduction regulations. However, she stated that NMFS does not routinely review the observer information to identify or document the extent of these instances or estimate the extent of overall compliance with the take reduction regulations. In addition to the information that it receives from the observer programs, NMFS receives some information about compliance from NOAA’s Office of Law Enforcement, the U.S. Coast Guard, or state enforcement agencies. Specifically, team coordinators told us that officials from the U.S. Coast Guard attend take reduction team meetings to discuss instances where the agencies found vessels out of compliance with take reduction regulations during the course of their enforcement work.
However, this information is not generally extensive enough to provide overall assessments of the extent to which fisheries are complying with the regulations. In 2007, we reported that NMFS lacked a strategy for assessing industry compliance with the Atlantic Large Whale team’s take reduction plan, and we recommended that it develop one. In response to our report, the team is beginning to develop a comprehensive approach to monitoring compliance. NMFS staff members are currently developing a plan for take reduction team members to review during their next meeting, which is planned for early 2009. No other take reduction teams are developing comprehensive approaches for monitoring compliance at this time.

NMFS faces a very large, complex, and difficult task in trying to protect marine mammals from incidental mortality and serious injury during the course of commercial fishing operations, as the MMPA requires. Without comprehensive, timely, and accurate population and mortality data for the 156 marine mammal stocks that NMFS is charged with protecting, NMFS may be unable to accurately identify stocks that meet the legal requirements for establishing take reduction teams, thereby depriving them of the protection they need to help recover or maintain healthy populations. Conversely, unreliable data may lead NMFS to erroneously establish teams for stocks that do not need them, wasting NMFS’s limited resources. For those stocks that meet the requirements for establishing take reduction teams, it is important that NMFS adhere to the deadlines in the MMPA, as delays in establishing teams and developing and finalizing take reduction plans could result in continued harm to already dwindling marine mammal populations. However, we recognize that it may not be useful to establish take reduction teams for those stocks that meet the MMPA requirements but whose population declines are not being caused by commercial fisheries.
We also acknowledge it may not be possible for NMFS to meet some of the MMPA’s deadlines given the requirements of other laws that NMFS must comply with in developing take reduction plans and the need for various levels of review and approval. Nonetheless, the MMPA’s deadlines are clear, and unless the law is amended to address these situations, NMFS has a legal obligation to comply with them. Finally, for stocks for which NMFS has developed take reduction regulations, it is important for NMFS to assess their effectiveness in reducing serious injury and mortality to acceptable levels. Doing so will require more comprehensive information about the fishing industry’s compliance with take reduction regulations so that if the short- and long-term goals are not met, NMFS knows whether to attribute the failure to a flaw in the regulations or to noncompliance with them. Without a comprehensive strategy for assessing the effectiveness of its take reduction plans and implementing regulations and industry’s compliance with them, NMFS may be missing opportunities to better protect marine mammals.
To facilitate the oversight of NMFS’s progress and capacity to meet the statutory requirements for take reduction teams, Congress may wish to consider taking the following three actions:

- direct the Assistant Administrator of NMFS to report on major data, resource, or other limitations that make it difficult for NMFS to accurately determine which marine mammals meet the statutory requirements for establishing take reduction teams; establish teams for stocks that meet these requirements; and meet the statutory deadlines for take reduction teams;

- amend the statutory requirements for establishing a take reduction team to stipulate that not only must a marine mammal stock be strategic and interacting with a Category I or II fishery, but that the fishery with which the marine mammal stock interacts must cause at least occasional incidental mortality or serious injury of that particular marine mammal stock; and

- amend the MMPA to ensure that its deadlines give NMFS adequate time to publish proposed and final take reduction plans and implementing regulations while meeting all the requirements of the federal rulemaking process.

We recommend that the Assistant Administrator of NMFS develop a comprehensive strategy for assessing the effectiveness of each take reduction plan and implementing regulations, including, among other things, establishing appropriate goals and steps for comprehensively monitoring and analyzing rates of compliance with take reduction measures.

We provided a draft copy of this report to the Secretary of Commerce for review and comment. In response to our request, we received general, technical, and editorial comments from NOAA by email, which stated that the agency agreed with our recommendation that NMFS should develop a comprehensive strategy for assessing the effectiveness of each take reduction plan and the implementing regulations. We have incorporated the technical and editorial comments provided by the agency, as appropriate.
As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Commerce, the Administrator of NOAA, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov.

If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

The objectives of this review were to determine the extent to which (1) available data allow the National Marine Fisheries Service (NMFS) to accurately identify the marine mammal stocks that meet the Marine Mammal Protection Act’s (MMPA) requirements for establishing take reduction teams, (2) NMFS has established take reduction teams for those marine mammal stocks that meet the statutory requirements, (3) NMFS has met the statutory deadlines established in the MMPA for the take reduction teams subject to the deadlines and the reasons for any delays, and (4) NMFS has developed a comprehensive strategy for evaluating the effectiveness of the take reduction plans that have been implemented.

To determine the extent to which available data allowed NMFS to accurately identify marine mammal stocks that meet the MMPA’s requirements for establishing take reduction teams, we identified stocks for which NMFS lacked data on either the human-caused mortality and serious injury estimate (hereafter referred to as human-caused mortality estimate) or the potential biological removal levels (hereafter referred to as maximum removal levels).
To do this, we first reviewed all 156 stocks identified in NMFS’s 2007 stock assessment reports and removed 19 stocks currently covered by take reduction teams. Then we removed 24 stocks that are listed as endangered or threatened under the Endangered Species Act (ESA) or designated as depleted under the MMPA because NMFS does not use information about human-caused mortality and the maximum removal level to make strategic status decisions for these stocks. We then reviewed the remaining 113 stocks to identify those that lacked either a human-caused mortality estimate or a maximum removal level. After identifying those that lacked human-caused mortality or maximum removal levels, we reviewed a sample of the remaining 74 stocks that did have these determinations to assess the reliability of the information used to determine human-caused mortality estimates and maximum removal levels. We identified several key data elements in NMFS’s stock assessment reports that the agency uses to determine human-caused mortality estimates and maximum removal levels:

- abundance estimates (population size estimates) and NMFS’s calculation of the estimates’ precision,
- the age of data used to calculate population size estimates,
- fishery-related mortality and serious injury estimates (hereafter referred to as fishery-related mortality estimates) and NMFS’s calculation of the estimates’ precision,
- adjustments made to the maximum removal level in order to account for fishery-related mortality estimate imprecision,
- information sources, such as observer data, used to calculate fishery-related mortality estimates, and
- qualitative information identified in the stock assessment reports about scientists’ concerns regarding data strengths or limitations.
We also identified criteria for assessing the quality of these data elements using information from the MMPA and publications such as NMFS’s guidelines for preparing stock assessment reports and stock assessment improvement plan and confirmed the criteria with NMFS officials. While scientists and publications also identified bias in population size and mortality estimates as a potential data reliability problem, we did not assess the extent to which existing data sources included bias because data and accompanying criteria to make such an assessment were not available. We then analyzed the key data elements for a sample of stocks to determine the extent to which the data met the criteria we identified. We chose our sample of stocks to review by drawing a stratified random sample from the 74 stocks that were not currently covered by take reduction teams, did not receive strategic status due to MMPA designations or listings under the ESA, and had both human-caused mortality and serious injury estimates and maximum removal levels. The sample of 28 stocks included all strategic stocks that met these criteria as well as a representative sample of stocks from each of the three NMFS Fishery Science Centers responsible for publishing the stock assessment reports. We then extrapolated the results of our review for this sample to all 74 stocks that met the criteria listed above. We calculated 95 percent confidence intervals for each of the estimates made from this sample. The confidence intervals for these estimates are presented in table 8. We also spoke with NMFS and Marine Mammal Commission officials to identify the potential impacts of using unreliable information to determine human-caused mortality or maximum removal levels. In some cases, we found potentially conflicting information within individual stock assessment reports about whether fishery-related mortality was unknown or estimated as zero.
In these cases, we used the information that NMFS provided in stock assessment report summary tables to resolve the inconsistencies within the individual stock assessment reports because we considered these estimates to be the agency’s final decision. In all cases, these summary tables identified the fishery-related mortality estimates for these stocks as zero rather than unknown. To determine the extent to which NMFS has established take reduction teams for those marine mammal stocks that meet the statutory requirements, we analyzed stock assessment reports for 1995 through 2007 and lists of fisheries for 1996 through 2008 and identified marine mammal stocks that met the statutory requirements for establishing take reduction teams. To do this, we reviewed the MMPA and identified the statutory requirements for establishing take reduction teams and then interviewed officials from the National Oceanic and Atmospheric Administration’s (NOAA) Office of General Counsel to verify that we had identified the correct requirements. We also analyzed the stock assessment reports and list of fisheries and identified all of the stocks that have met the statutory requirements, which include marine mammal stocks that (1) were listed as strategic according to a final stock assessment report and (2) interacted with a Category I or II fishery according to a current list of fisheries. We developed a database and used it to analyze this information. Once we identified the marine mammal stocks that met the statutory requirements, we verified with NMFS officials the stocks for which the agency has already established take reduction teams. On the basis of this information, we determined which stocks met the statutory requirements but are not covered by a team. We met with NMFS officials to review and verify our findings and to obtain reasons why the agency has not established take reduction teams for these stocks.
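The two statutory tests described above amount to an intersection of two sets of stocks, with already-covered stocks then subtracted out. A minimal sketch of that database logic follows; the stock names are invented placeholders, not findings from this review:

```python
# Hypothetical illustration: a stock meets the MMPA's team-establishment
# requirements if it (1) is strategic per a final stock assessment report
# and (2) interacts with a Category I or II fishery per the list of fisheries.
strategic_stocks = {"stock A", "stock B", "stock C"}       # placeholder names
interacts_cat_i_or_ii = {"stock B", "stock C", "stock D"}  # placeholder names

meets_requirements = strategic_stocks & interacts_cat_i_or_ii

# Stocks already covered by a team are subtracted to find the gap.
covered_by_team = {"stock B"}                              # placeholder
not_yet_covered = meets_requirements - covered_by_team

print(sorted(meets_requirements))  # ['stock B', 'stock C']
print(sorted(not_yet_covered))     # ['stock C']
```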
We also met with representatives of the Marine Mammal Commission to review our findings. To determine the extent to which NMFS has met the MMPA’s deadlines for the five take reduction teams subject to the deadlines and the reasons for any delays, we

- identified five key deadlines listed in the MMPA for NMFS and take reduction teams and interviewed officials from NOAA’s Office of General Counsel to confirm the deadlines;
- obtained and reviewed documentation, such as take reduction plans, Federal Register notices announcing the establishment of teams, and NMFS’s proposed and final take reduction plans and implementing regulations published in the Federal Register;
- analyzed the dates published in the Federal Register documents to determine whether each of the five take reduction teams had met their statutory deadlines; and
- met with NMFS officials to confirm the accuracy of our analysis of information published in Federal Register notices.

To determine the reasons for any delays in meeting the statutory deadlines, we interviewed take reduction team coordinators from NMFS’s Office of Protected Resources, officials from NOAA’s Office of General Counsel, marine biologists in NMFS’s Fishery Science Centers, and members of each of the five teams subject to the deadlines. We also obtained and reviewed NMFS documentation about various laws and executive orders that the agency must comply with when publishing proposed and final take reduction plans in the Federal Register.
To determine the extent to which NMFS has developed a comprehensive strategy for evaluating the effectiveness of the take reduction plans that have been implemented, we reviewed the MMPA and relevant NMFS documentation and spoke with NMFS officials and Scientific Review Group chairs regarding the (1) performance goals used by NMFS to assess the success of take reduction regulations, (2) actions taken prior to implementing proposed regulations to increase the likelihood that the regulations will achieve these performance goals, and (3) extent to which NMFS has information about fisheries’ compliance with implemented take reduction regulations. We also reviewed stock assessment reports from 1995 through 2007 for stocks covered by three of the four take reduction teams with final regulations in place to determine whether the stocks met the short- and long-term goals in each of those years. To calculate whether the goals were met prior to implementation of the take reduction regulations, we used the last year for which the fishery-related mortality estimates in the stock assessment reports did not include any information about incidental take that was collected after the regulations were implemented. We excluded the strategic bottlenose dolphins from our review because NMFS reports fishery-related mortality and maximum removal levels for them differently than it does for the other stocks. Specifically, due to concerns about the stock definition for the Western North Atlantic coastal bottlenose dolphins covered by the Bottlenose Dolphin take reduction team, NMFS further divides this population into management units. NMFS identifies different fishery-related mortality estimates for each of these management units, but not for the Western North Atlantic coastal bottlenose dolphins as a whole, making it difficult to determine whether the total population met the short- and long-term goals.
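The short- and long-term goals referenced above are defined by the MMPA relative to the maximum removal level: within 6 months of a plan's implementation, fishery-related mortality should fall below that level, and within 5 years it should fall to an insignificant level approaching zero, which NMFS interprets as 10 percent of the maximum removal level. A sketch of that comparison, using the standard formula for the level (the numeric inputs here are illustrative defaults from NMFS guidance, not figures from this review):

```python
def maximum_removal_level(n_min: float, r_max: float, f_r: float) -> float:
    """Potential biological removal per the MMPA: minimum population
    estimate x (1/2 maximum net productivity rate) x recovery factor."""
    return n_min * 0.5 * r_max * f_r

# Illustrative inputs only: a minimum population estimate of 1,000 animals,
# the NMFS default cetacean productivity rate (0.04), and a 0.5 recovery factor.
pbr = maximum_removal_level(1_000, 0.04, 0.5)       # 10.0 animals per year

fishery_mortality = 7.0                             # illustrative estimate
short_term_goal_met = fishery_mortality < pbr       # below the removal level
long_term_goal_met = fishery_mortality < 0.1 * pbr  # below 10% of the level

print(pbr, short_term_goal_met, long_term_goal_met)  # 10.0 True False
```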
In addition, we assessed the reliability of the data used to determine whether NMFS has met the goals for the strategic stocks covered by three of the four take reduction teams with final regulations. To do this, we analyzed the extent to which key data elements met data quality criteria identified by the MMPA and NMFS. We reviewed strategic stocks because they are most likely to be at continued risk of fishery-related take leading to unsustainable population declines. We also compared the data for the year prior to when the regulations were first implemented with the data from 2007 to identify any changes that occurred in meeting the goals before and after the take reduction regulations went into effect.

We conducted this performance audit from September 2007 to December 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The table below presents information about select commercial fishing techniques, including the type of gear, how the injury or death occurs, and examples of marine mammals affected.

In addition to the individual named above, Stephen D. Secrist, Assistant Director; Leo G. Acosta; Mark Braza; Carmen Donohue; Beth Faraguna; Rich Johnson; Alison O’Neill; Dae Park; Kim Raheb; Bruce Skud; Jeanette Soares; and Barbara Steel-Lowney made significant contributions to this report.

Because marine mammals, such as whales and dolphins, often inhabit waters where commercial fishing occurs, they can become entangled in fishing gear, which may injure or kill them--this is referred to as "incidental take."
The 1994 amendments to the Marine Mammal Protection Act (MMPA) require the National Marine Fisheries Service (NMFS) to establish take reduction teams for certain marine mammals to develop measures to reduce their incidental takes. GAO was asked to determine the extent to which NMFS (1) can accurately identify the marine mammal stocks--generally a population of animals of the same species located in a common area--that meet the MMPA's requirements for establishing such teams, (2) has established teams for those stocks that meet the requirements, (3) has met the MMPA's deadlines for the teams subject to them, and (4) evaluates the effectiveness of take reduction regulations. GAO reviewed the MMPA, NMFS data on marine mammals, and take reduction team documents and obtained the views of NMFS officials, scientists, and take reduction team members. Significant limitations in available data make it difficult for NMFS to accurately determine which marine mammal stocks meet the statutory requirements for establishing take reduction teams. For most stocks, NMFS relies on incomplete, outdated, or imprecise data on stocks' population size or mortality to calculate the extent of incidental take. As a result, the agency may overlook some marine mammal stocks that meet the MMPA's requirements for establishing teams or inappropriately identify others as meeting them. NMFS officials told GAO they are aware of the data limitations but lack funding to implement their plans to improve the data. On the basis of NMFS's available information, GAO identified 30 marine mammal stocks that have met the MMPA's requirements for establishing a take reduction team, and NMFS has established six teams that cover 16 of them. For the other 14 stocks, the agency has not complied with the MMPA's requirements.
For example, false killer whales, found off the Hawaiian Islands, have met the statutory requirements since 2004, but NMFS has not established a team for them because, according to NMFS officials, the agency lacks sufficient funds. NMFS officials told GAO that the agency has not established teams for the other stocks that meet the MMPA's requirements for reasons such as the following: (1) data on these stocks are outdated or incomplete, and the agency lacks funds to obtain better information, and (2) causes other than fishery-related incidental take, such as sonar used by the U.S. Navy, may contribute to their injury or death; therefore, changes to fishing practices would not solve the problem. For the five take reduction teams subject to the MMPA's deadlines, the agency has had limited success in meeting the deadlines for establishing teams, developing draft take reduction plans, and publishing proposed and final plans and regulations to implement them. For example, NMFS established three of the five teams--the Atlantic Large Whale, Pelagic Longline, and Bottlenose Dolphin--from 3 months to over 5 years past the deadline. NMFS officials attributed the delays in establishing one of the teams to a lack of information about stock population size and mortality, which teams need to consider before developing draft take reduction plans. NMFS does not have a comprehensive strategy for assessing the effectiveness of take reduction plans and implementing regulations that have been implemented. NMFS has taken some steps to define goals, monitor compliance, and assess whether the goals have been met, but shortcomings in its approach and limitations in its performance data weaken its ability to assess the success of its take reduction regulations.
For example, without adequate information about compliance, if incidental takes continue once the regulations have been implemented, it will be difficult to determine whether the regulations were ineffective or whether the fisheries were not complying with them.
Amtrak was created by the Rail Passenger Service Act of 1970 to operate and revitalize intercity passenger rail service. Prior to Amtrak’s creation, such service was provided by private railroads, which had lost money, especially after World War II. The act, as amended, gave Amtrak a number of goals, including providing modern, efficient intercity passenger rail service; giving Americans an alternative to automobiles and airplanes to meet their transportation needs; and minimizing federal operating subsidies. Through fiscal year 1998, the federal government has provided Amtrak with over $20 billion in operating and capital subsidies, excluding $2.2 billion from the Taxpayer Relief Act. In December 1994, at the request of the administration, Amtrak established a goal of eliminating federal operating subsidies for Amtrak by 2002. To meet this goal and respond to continually growing losses and a widening gap between operating deficits and federal operating subsidies, Amtrak developed strategic business plans. These plans have attempted to increase revenues and control costs through such actions as expanding mail and express service and adjusting routes and service frequency. Amtrak also has restructured its organization into strategic business units. The Congress provided additional financial assistance to Amtrak in the Taxpayer Relief Act of 1997, enacted in August 1997. This act makes a total of about $2.2 billion available to Amtrak in 1998 and 1999 to acquire capital improvements, pay certain equipment maintenance expenses, and pay principal and interest on certain debt, among other things. In addition, the Amtrak Reform and Accountability Act of 1997, enacted in December 1997, makes certain reforms to Amtrak’s operations. 
These reforms include, among other things, (1) eliminating current labor protection arrangements on May 31, 1998; (2) repealing the ban on contracting out nonfood and beverage work; and (3) placing a $200 million cap on the amount of liability claims that can be paid as the result of an Amtrak accident. Amtrak’s financial condition has continued to deteriorate despite its efforts over the past 4 years to reduce losses. While Amtrak has reduced its net losses from about $892 million in fiscal year 1994 (in 1997 dollars) to $762 million in fiscal year 1997, it has not been able to close the gap between total revenues and expenses. (See fig. 1.) For example, while intercity passenger-related revenues grew by about 4 percent last year, intercity passenger-related expenses grew by about 7 percent. Notably, the net loss for fiscal year 1997 would have been much greater if Amtrak had not earned about $63 million, primarily from the one-time sales of real estate and telecommunications rights-of-way in the Northeast Corridor. Amtrak’s net loss for fiscal year 1998 will likely be substantially worse than in 1996 and 1997. In March 1998, Amtrak projected that the net loss for this year will be about $845 million, or $56 million more than budgeted. Amtrak’s financial deterioration can be seen in other measures as well. For example, Amtrak’s working capital—the difference between current assets and current liabilities—generally declined between fiscal years 1995 and 1997, from a deficit of $149 million to a deficit of $300 million. As figure 2 shows, at the end of fiscal year 1997, Amtrak’s working capital was the lowest it had been over the last 9 years. Declining working capital jeopardizes a company’s ability to pay its bills as they come due. The decline in working capital reflects an increase in accounts payable, short-term debt, and capital lease obligations, among other items.
Amtrak’s poor financial condition has also affected its cash flow and its need to borrow money to make ends meet. In fiscal year 1997, Amtrak had to borrow $75 million to meet its operating expenses. The prospects in fiscal year 1998 are worse. Amtrak originally planned a cash-flow deficit of $100 million in fiscal year 1998; however, in January 1998, Amtrak increased this estimate to $200 million. This projected increase is primarily due to (1) reductions in expected revenues from Amtrak’s pilot express program ($47 million); (2) a liability for the wage increases provided by Amtrak’s recent agreement with the Brotherhood of Maintenance of Way Employees ($35 million); and (3) an increase in accounts payable that resulted from deferring fiscal year 1997 payables to fiscal year 1998 ($16 million). Amtrak began borrowing in February 1998 to make ends meet. Amtrak will continue to face challenges to its financial health. Despite efforts to improve revenues and cut costs, the railroad continues to lose more money than it planned. This situation may get worse. Amtrak’s recent agreement with the Brotherhood of Maintenance of Way Employees is expected to increase Amtrak’s fiscal year 1998 labor costs by between $3 million and $5 million. According to Amtrak, extending this type of settlement to all of its labor unions could cost between $60 million and $70 million more each year than is currently planned, from fiscal years 1999 through 2002. Amtrak’s plans to reduce its financial losses by “growing” its way to financial health—that is, increasing revenues, rather than cutting train routes—may also encounter difficulty. These plans depend, at least in part, on expanding mail and express services. However, Amtrak’s efforts to increase its express business have been frustrated, and it has had to reduce anticipated revenues in its express pilot program by $47 million.
As a result, in January 1998 Amtrak increased its projected overall loss for fiscal year 1998 from $52 million to $99 million. Another Amtrak initiative—establishing high-speed rail service between New York City and Boston—also will not provide immediate financial benefits. In establishing high-speed rail transportation between these two cities, Amtrak expects to decrease travel time from 4-1/2 hours to 3 hours and significantly increase revenue and ridership. Amtrak’s goals are for the high-speed rail program to begin providing positive net income in fiscal year 2000. Amtrak will also continue to find it difficult to take actions to reduce costs, such as making route and service adjustments. During fiscal year 1995, Amtrak was successful in reducing and eliminating some routes and saving an estimated $54 million. In fiscal year 1997, Amtrak was less successful in taking such actions. Amtrak does not currently plan to reduce any more routes. Instead, it plans to fine-tune its route network. For example, in February 1998, Amtrak added a fourth train per week between Chicago and San Antonio on the Texas Eagle route, in part to accommodate expanded mail and express business. Amtrak is also planning to begin daily passenger rail service between Los Angeles and Las Vegas by January 1999. In explaining the rationale for attempting to increase revenues through fine-tuning Amtrak’s routes rather than through cutting back on service, Amtrak and Federal Railroad Administration (FRA) officials pointed to Amtrak’s mission of maintaining a national route system. They noted that such a system will consist of routes with a range of profitability, including poorer-performing routes that provide needed linkages to better-performing routes. Furthermore, poorer-performing routes may provide public benefits, such as serving small cities and rural areas. 
These officials stressed that cutting the routes with the worst performance could damage the national network and cause the loss of revenue on connecting routes. Amtrak has just begun a market analysis that could result in several alternatives for a national intercity passenger rail network. The decision to make route adjustments is a difficult one, even though Amtrak’s data show that only one of the railroad’s 40 routes (Metroliners between Washington, D.C., and New York City) covers all its operating costs. For the remaining 39 routes, Amtrak loses an average of $53 for each passenger. Amtrak data show that it loses over $100 per passenger on 14 of these routes, and only 5 routes covered their train costs in fiscal year 1997. However, Amtrak encounters opposition when it proposes to discontinue routes because of the desire by a range of interests to see passenger train service continued in potentially affected communities. In addition, Amtrak maintains that every route that covers its variable costs (costs of running trains) makes a contribution toward its substantial fixed costs. Finally, simply pruning Amtrak’s worst-performing routes could exacerbate Amtrak’s financial condition because eliminating one route is likely to affect ridership on connecting routes that are perhaps performing better. As a result of the Taxpayer Relief Act and funds requested through the appropriations process, record amounts of federal funds could be available to fund Amtrak’s capital improvement needs. However, Amtrak projects that it will still be short of the funds it believes are necessary to meet these needs. In addition, Amtrak plans to use a substantial portion of these funds to meet maintenance needs—needs that have traditionally been considered operating expenses. Finally, recently enacted reform legislation will likely have little financial impact in the short term. 
Capital investments will continue to play a critical role in supporting Amtrak’s business plans and ultimately in maintaining Amtrak’s viability. Such investment will not only help Amtrak retain revenues by improving the quality of service but will also be important in facilitating the revenue growth predicted in the business plans. Although Amtrak stands to receive historic levels of federal capital funds in the next few years, it is not likely that sufficient funds will be available to meet Amtrak’s identified capital investment needs. Amtrak’s September 1997 strategic business plan identified about $5.5 billion in capital investment needs from fiscal years 1998 through 2003. This amount includes such items as completing the high-speed rail program between New York and Boston (about $1.7 billion), making infrastructure-related investments (about $900 million), and overhauling existing equipment (about $500 million). However, federal funding from the Taxpayer Relief Act, the fiscal year 1998 capital appropriation, and the President’s proposed fiscal year 1999 budget—along with about $800 million that Amtrak anticipates receiving from state, local, and private financing—would provide about $5.0 billion, about $500 million short of the $5.5 billion that Amtrak states it needs for capital funding. Amtrak plans to use a substantial amount of these federal funds for maintenance expenses, such as preventative maintenance, rather than for high-yield capital investments. The use of these available federal funds for maintenance expenses could have long-term financial impacts on Amtrak. In particular, such use would reduce the amount of money available to Amtrak to acquire new equipment and/or acquire those capital improvements necessary to reduce costs and/or increase revenues.
In this regard, the President’s proposed fiscal year 1999 budget would allow Amtrak to use capital grant funds for maintenance purposes, such as overhauling rail rolling stock and providing preventative maintenance. The administration believes such flexibility would allow Amtrak to manage its capital grant appropriation more efficiently and make clearer trade-offs between maintenance and capital investment costs. Amtrak’s March 1998 revised strategic business plan indicates that it plans to use $511 million (82 percent) of the $621 million in capital grant funds proposed in the President’s fiscal year 1999 budget for maintenance expenses. In total, Amtrak plans to use $1.8 billion (65 percent) of $2.8 billion in capital grants under the President’s budget proposal to pay maintenance expenses from fiscal years 1999 through 2003. In addition, Amtrak plans to temporarily use some of the Taxpayer Relief Act funds for the allowed maintenance of the existing equipment used in intercity passenger rail service. To help stay within its credit limits, Amtrak plans to temporarily use $100 million in Taxpayer Relief Act funds for a portion of allowed maintenance expenses in fiscal year 1998, according to Amtrak’s March 1998 revised strategic business plan. Amtrak’s use of a portion of its federal capital grant for maintenance expenses, as is currently allowed for transit, is expected to enable it to repay this $100 million. Amtrak also plans to temporarily use $317 million and $200 million in Taxpayer Relief Act funds in 1999 and 2000, respectively, for a portion of allowed maintenance expenses. In this way, Amtrak expects to reduce its cash flow deficits to $100 million in each of those years. Amtrak officials told us that the Taxpayer Relief Act funds, including these repayments, will ultimately be used for investments that have a high rate-of-return and that are highly leveraged.
According to Amtrak, temporarily using a portion of Taxpayer Relief Act funds for allowed equipment maintenance will help the corporation avoid additional borrowing from its credit lines over the original planned amount. Amtrak believes using Taxpayer Relief Act funds for this purpose will help keep it below its maximum short-term credit limit. Amtrak officials told us that using a portion of the federally appropriated capital grant funds for maintenance will provide stability for Amtrak over the next several years, thus averting a possible bankruptcy. This stability will provide Amtrak with some breathing room to (1) determine how to address the capital shortfall and (2) complete a market analysis that would result in several alternatives for a national intercity passenger rail network. The Amtrak Reform and Accountability Act was also designed to address Amtrak’s poor financial condition by making certain reforms to Amtrak’s operations to help Amtrak better control and manage its costs. For example, the act eliminates, as of May 31, 1998, existing labor protection arrangements for employees who lose their jobs as the result of a discontinuation of service (currently eligible employees may be entitled to up to 6 years of compensation) and requires Amtrak and its unions to negotiate new arrangements; repeals the statutory ban on contracting out work (except food and beverage service, which can already be contracted out) and makes contracting out subject to negotiations by November 1999; and places a $200 million cap on the amount of liability claims (including punitive damages) that can be paid as the result of an Amtrak accident. The reforms contained in this act may have little, if any, immediate effect on Amtrak’s financial performance for several reasons. First, Amtrak officials pointed out that no route closures are currently planned. Therefore, no new labor protection costs are expected to be incurred. 
Amtrak officials also noted that the existing labor protection arrangements for employees affected by route closures have primarily resulted in payments of wage differentials because many eligible employees were transferred to lower-paying jobs. According to Amtrak, in the past 5 years, only 5 employees have received severance pay and 11 employees are currently in arbitration over this issue. Second, the ban on contracting out work need not be negotiated until November 1, 1999. Amtrak officials believe that while the repeal of the ban may provide long-term flexibility, including flexibility in union negotiations and in controlling costs, the repeal is not likely to have much effect before November 1999. Finally, Amtrak believes the $200 million limit on liability claims may have limited financial effect because this cap is significantly higher than amounts Amtrak has historically paid on liability claims. Amtrak and FRA officials believe the benefits of these reforms are unclear at this time. These reforms may not result in measurable financial savings as much as in additional flexibility in negotiating with labor unions and in addressing the freight railroads’ concerns over such issues as liability payments. The act also made other changes that have the potential for a significant impact on Amtrak’s future. First, the act replaced the current board of directors with a “Reform Board.” Second, it established an independent commission—the Amtrak Reform Council—to evaluate Amtrak’s financial performance and make recommendations for cost containment, productivity improvements, and financial reforms. If at any time after December 1999 the Council finds that Amtrak is not meeting its financial goals or that Amtrak will require operating funds after December 2002, then the Council is to submit to the Congress, within 90 days, an action plan for a restructured national intercity passenger rail system. 
In addition, under such circumstances, Amtrak is required to develop and submit an action plan for the complete liquidation of the railroad. Mr. Chairman, in 1995, we concluded that the Congress needed to decide on the nation’s expectations for intercity passenger rail service and the scope of Amtrak’s mission in providing that service. These decisions require defining a national route network, determining the extent to which the federal government would contribute funds, and deciding on the way any remaining deficits would be covered. In 1997, we concluded that, as currently constituted, Amtrak will need substantial federal operating and capital support well into the future. Whether Amtrak will be able to improve its financial position in the near term is doubtful. If not, the Congress will be asked to continue to provide substantial sums of money each year to support Amtrak. If the Congress is not willing to provide such levels of funds, then Amtrak’s future could be radically different, or Amtrak may not exist at all. We believe that this is the right time for Amtrak’s new Reform Board to work with the Congress to consider and act on the issues that will chart Amtrak’s future. Mr. Chairman, this concludes my testimony. I would be happy to respond to any questions that you or Members of the Subcommittee may have.
GAO discussed: (1) Amtrak's financial performance during fiscal year (FY) 1997 and during the first quarter of FY 1998; (2) challenges Amtrak will face in improving its financial health; and (3) the potential impact that recently enacted legislation may have on Amtrak's financial condition. GAO noted that: (1) Amtrak's financial condition continues to deteriorate; (2) although Amtrak has been able to reduce its net losses (total expenses less total revenues) from about $892 million in FY 1994 to about $762 million in FY 1997, the 1997 loss would have been $63 million higher were it not for one-time increases in revenue from the sales of real estate and access rights for telecommunications; (3) in March 1998, Amtrak projected that its net loss for FY 1998 could be about $845 million--about $56 million more than planned; (4) Amtrak will continue to face challenges in improving its financial health; (5) Amtrak hopes to improve its financial health by increasing revenues through such actions as expanding mail and express service (delivery of higher-value, time-sensitive goods) and instituting high-speed rail service between New York City and Boston; (6) however, Amtrak has had to substantially scale back its net revenue projections for express business, and positive net income from the high-speed rail program will not occur for another 2 years; (7) Amtrak does not currently plan to reduce routes, even though only one of its routes--the Metroliner service between Washington, D.C., and New York City--makes money; (8) instead it plans to fine-tune its route network and conduct a comprehensive market analysis; (9) federal funding and recently enacted reforms will not solve Amtrak's financial problems; (10)
although the Taxpayer Relief Act of 1997, FY 1998 capital appropriations, and the President's proposed FY 1999 budget, if enacted, will provide Amtrak with historic levels of capital support, this support will fall short of Amtrak's identified capital needs by about $500 million; (11) in addition, Amtrak plans to use $1.8 billion of the $2.8 billion in requested federal capital grant funds to pay maintenance expenses between FY 1999 and FY 2003; (12) the use of funds for this purpose would substantially reduce the remaining level of funds available to acquire new equipment or make the capital improvements necessary to reduce Amtrak's cost and/or increase revenues; (13) therefore, such use will have a negative impact over the long term; and (14) furthermore, the Amtrak Reform and Accountability Act of 1997 significantly changed Amtrak's operations, but these reforms will provide few, if any, immediate financial benefits. |
In June 2007, the Army began establishing WTUs at United States military installations with MTFs that were providing medical care to 35 or more eligible servicemembers. As of January 2009, the Army was operating 33 of these WTUs. (See fig. 1.) The Army has also established WTUs at locations in Germany—Bavaria, Heidelberg, and Landstuhl. For servicemembers with less complex medical needs, the Army uses its existing network of community-based health care organizations, which it now refers to as community-based WTUs. The community-based WTUs allow servicemembers to live at home and receive medical care while remaining on active duty. A servicemember was eligible for placement in a WTU if he or she required more than 6 months of medical treatment and complex case management. Army guidance specifies that the mission of servicemembers assigned to a WTU is to heal and transition—return to duty or to civilian life—and while WTU servicemembers may have work assignments in the unit, this work may not take precedence over the servicemembers' treatment. WTUs have a defined staffing structure that includes leadership positions of commanders and platoon sergeants, as well as administrative staff, such as human resources and financial management specialists. Within each unit, the servicemember is assigned to a team of three key staff—the Triad of Care—who provide case management services to ensure continuity of care. (See fig. 2.) Servicemembers in the WTUs vary by the type of medical condition for which they are receiving care and include Army active component, Reserve, and National Guard servicemembers. Active component servicemembers comprise about two-thirds of the WTU population, and active duty Reserve and National Guard servicemembers collectively comprise about one-third.
As of December 1, 2008, about 60 percent of servicemembers in WTUs had been wounded in combat or had incurred a noncombat injury or illness during OEF or OIF, which may have resulted in burns, amputations, or other types of conditions. The remaining servicemembers in the units included those who may have been referred to the WTU for completion of the disability evaluation process; those who incurred a noncombat injury, such as during a training exercise; and those who incurred a noncombat illness, such as cancer, that required complex case management. The Army has issued additional WTU policies aimed at reducing staffing shortfalls, modifying the staffing model, and revising servicemember entry and exit criteria. To reduce staffing shortfalls, the Army issued policies designed to ensure that WTUs achieve and maintain staffing at required staff-to-servicemember ratios. The Army also implemented a revised WTU staffing model that includes new staff-to-servicemember ratios for two of the three Triad of Care positions. In addition, the Army issued policies to revise its criteria for servicemembers entering and leaving WTUs—a policy that affects population size and staffing needs. Although the Army had increased the number of staff being assigned to the WTUs, staffing shortfalls continued through June 2008. When we last reported on the Army's progress in staffing the WTUs in February 2008, the Army had established a goal of having at least 90 percent of Triad of Care staff positions filled to meet the staff-to-servicemember ratios that it had established for its WTUs. These ratios were 1:200 for primary care managers; 1:18 for nurse case managers at Army medical centers, which normally see servicemembers with more acute conditions, and 1:36 at other types of Army medical treatment facilities; and 1:12 for squad leaders.
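As a rough illustration of how these ratios translate into staffing requirements, the sketch below computes the Triad of Care staff a WTU population would need and checks the 90 percent fill goal. Only the ratios and the 90 percent goal come from the report; the example population of 360 and the on-hand counts are hypothetical.

```python
import math

# Original Triad of Care staff-to-servicemember ratios cited in the report
# (using the 1:36 nurse case manager ratio for a non-medical-center MTF).
RATIOS = {
    "primary_care_manager": 200,  # 1 PCM per 200 servicemembers
    "nurse_case_manager": 36,     # 1:36 at most MTFs (1:18 at medical centers)
    "squad_leader": 12,           # 1 squad leader per 12 servicemembers
}

def required_staff(population, ratios=RATIOS):
    """Staff needed for a WTU population, rounding up per position."""
    return {role: math.ceil(population / r) for role, r in ratios.items()}

def meets_goal(population, on_hand, goal=0.90, ratios=RATIOS):
    """For each position, is it filled to at least `goal` of the requirement?"""
    need = required_staff(population, ratios)
    return {role: on_hand.get(role, 0) >= goal * need[role] for role in need}

# Hypothetical WTU of 360 servicemembers:
need = required_staff(360)
# need -> {'primary_care_manager': 2, 'nurse_case_manager': 10, 'squad_leader': 30}
```

Note that, as the report observes, these requirements move daily with the population, so a shortfall computed this way describes only a point in time.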
At that time, the Army had 1,141 Triad of Care staff for its WTUs, and 11 WTUs had less than 90 percent of needed staff for one or more Triad of Care positions—representing a total shortfall of 64 staff. As of June 25, 2008, WTU Triad of Care staff had increased to 1,328, but because the size of the WTU servicemember population continued to grow and increase staffing needs, 21 WTUs were not meeting this goal and had staffing shortfalls in 108 Triad of Care positions. However, it is important to note that WTU staffing shortfalls represent a specific point in time. WTU staffing needs may vary daily based on both the number of servicemembers entering and exiting the WTUs and fluctuations in the number of Triad of Care staff, who may deploy or otherwise be reassigned or leave. To address challenges in fully staffing the WTUs, including Triad of Care positions, the Army issued new policies in July 2008 for staffing the WTUs. The Army's new policies included a requirement that local leadership—WTU commanders, MTF commanders, and senior installation commanders—fill 100 percent of WTU staff shortages, including those related to the Triad of Care, by July 14, 2008. For example, commanders were directed to fill the positions using personnel present on the installation, such as physicians and nurses who work in the MTFs, and to ensure continued 100 percent staffing from July 14, 2008, forward. As of August 2008, after the implementation of these new staffing policies, Army data indicated that Triad of Care staffing shortfalls had been reduced considerably, and the Army had generally met its goal of 100 percent staffing of its WTUs, with only a few exceptions. As of August 25, 2008, four WTUs had staffing shortfalls in four Triad of Care positions total—Balboa was missing one nurse case manager, Fort Belvoir was missing one squad leader, and Fort Drum and Fort Irwin were each missing one primary care manager.
On October 16, 2008, the Army implemented revisions to its WTU staffing model, including changes to two of its Triad of Care staff-to-servicemember ratios. (See fig. 3.) These policy changes were based on a study initiated by the Army in February 2008 that found that some of the existing staff-to-servicemember ratios were not adequate for providing an appropriate level of care to servicemembers in WTUs. The study team recommended changes to the Triad of Care staffing ratios for nurse case managers and squad leaders. The team also recommended realigning existing medical and administrative support staff in the WTU to provide direct assistance to the nurse case manager and hiring new staff to support the primary care manager. The Army applied the revised ratios to all the WTUs except Walter Reed Army Medical Center. Army officials told us that the study team excluded Walter Reed from its review because the population receiving care at Walter Reed has more complex medical needs than the population at other WTUs. As a result, Walter Reed is continuing to operate under its original staff-to-servicemember ratios—1:200 for primary care managers, 1:18 for nurse case managers, and 1:12 for squad leaders. Despite the servicemember population at Walter Reed having more complex medical needs, these ratios are not much different from the revised ratios established for other WTUs. According to WTU officials from Walter Reed, Triad of Care staff who work with servicemembers with more complex medical needs generally require higher staff-to-servicemember ratios, but an assessment of acuity—the complexity of servicemembers' needs—is necessary for determining the exact ratios that would be appropriate for Triad of Care positions at this location. Army officials told us that the Army currently does not have a plan for conducting a study of Walter Reed's staffing model because this facility is scheduled to close in 2011 under Base Realignment and Closure 2005.
According to Army officials, the WTU at Walter Reed will be moved to the newly established Walter Reed National Military Medical Center in Bethesda, Maryland. The WTU servicemember population from Walter Reed will be dispersed among the WTU at the new medical center and the WTUs at Fort Belvoir and Fort Meade. Nonetheless, the Army had made considerable progress in meeting the new WTU staff-to-servicemember ratios for the Triad of Care positions. On January 12, 2009, 4 of the 32 WTUs in the United States (excluding Walter Reed Army Medical Center) had a total shortfall of seven Triad of Care positions—three primary care managers and four squad leaders. Walter Reed, which continued to operate under its original Triad of Care staff-to-servicemember ratios, did not have any shortfalls. In July 2008, the Army also implemented policies revising WTU servicemember entry and exit criteria to increase emphasis on servicemembers needing complex case management. The revised policies stated that feedback from WTU officials, MTF commanders, and other senior officials indicated that many servicemembers in WTUs did not need the complex case management that the units provided. For example, officials from one WTU we visited told us that the WTUs included servicemembers who had conditions that were not complex, such as a broken leg, or who were waiting to finish the Army's disability evaluation process and no longer had medical appointments. Army officials indicated that the growth of the WTU population—partially due to the inclusion of servicemembers who did not need complex case management—had impeded its ability to achieve and maintain staff for its Triad of Care positions in accordance with its staff-to-servicemember ratios. The Army's July 2008 policies modified WTU entry and exit criteria specifically for active component servicemembers. These revised criteria do not apply to Reserve and National Guard servicemembers, who comprise about one-third of the WTU population.
Army policy indicates that Reserve and National Guard servicemembers are generally eligible for placement in a WTU if they need health care for conditions identified, incurred, or aggravated while on active duty, and they will remain in the WTU until their medical condition is resolved and they are eligible to be released from active duty or they complete the Army's disability evaluation process. According to an Army official, the Army is also exploring ways to apply the revised entry and exit criteria to Reserve and National Guard servicemembers and is planning to issue a corresponding policy in March 2009. The Army's revised WTU entry criteria for active component servicemembers are intended to help ensure that only those who need complex case management are placed in the WTU. For example, the original criteria made a servicemember eligible for placement in a WTU if he or she had complex medical needs requiring more than 6 months of treatment; the criteria did not include an assessment of the servicemember's ability to perform his or her duties. The revised criteria state that an active component servicemember is eligible for placement in a WTU if he or she has complex medical conditions that require case management and will not be able to train for or contribute to the mission of a unit for more than 6 months. The WTU exit criteria, which had not been explicitly articulated in the original WTU policy, now allow local leadership greater flexibility in reassigning active component servicemembers to other units on the installation. Previously, an active component servicemember would remain in a WTU until he or she was able to return to duty and completed his or her medical treatment or was discharged from the Army, even if the servicemember's medical care could be managed outside a WTU.
The exit criteria state that an active component servicemember who is expected to return to duty may be reassigned to a unit on the installation before being found medically fit to return to duty if certain conditions are met. In particular, the servicemember may be reassigned if the servicemember’s remaining medical needs can be managed outside a WTU and if the servicemember’s reassignment has been approved by the Triad of Care and by leadership of the WTU, MTF, and installation. Along with its policies establishing the revised entry and exit criteria, the Army required the Warrior Care and Transition Office to assess the effectiveness of the revised entry and exit criteria in ensuring that only those servicemembers needing complex case management are in the WTUs and to monitor the effects of the revised criteria. Specifically, the Warrior Care and Transition Office was tasked with developing measures for assessing the criteria’s effectiveness. According to Army officials, the Warrior Care and Transition Office has not developed any additional measures to determine the effectiveness of the revised entry and exit criteria, but instead is relying on existing measures. For example, the number of servicemembers in WTUs decreased after implementation of the criteria, as the Army anticipated. Specifically, Army data show that the active component population of the WTUs has declined each month since the new entry and exit criteria went into effect, from about 8,400 in July 2008 to about 6,900 in November 2008. Army officials also said that length of stay can be used to assess the entry and exit criteria because servicemembers requiring complex care would be expected to have longer lengths of stay in the WTU. The policy with the revised entry and exit criteria also includes a provision for the Army Inspector General to assess the criteria as part of a broader provision to conduct a follow-up inspection of the Army’s disability evaluation process and WTUs. 
An official within the Army’s Office of the Inspector General told us that this inspection is included in its proposed long-range inspection plan for fiscal years 2009 and 2010, which is pending approval by the Secretary of the Army. To monitor the recovery process of WTU servicemembers, the Army uses individual transition plans and various upward feedback mechanisms, but its feedback mechanisms may not provide complete information on the performance of WTUs. The Army’s feedback mechanisms, which include a telephone hotline and a satisfaction survey, provide a way for servicemembers and their families to raise concerns about WTU-related issues. However, while this may provide helpful and important information to Army leadership, the concerns raised through these mechanisms are not necessarily representative of the concerns of all WTU servicemembers and their families. To facilitate servicemembers’ recovery, the Army has developed a process for coordinating and monitoring the care that servicemembers receive while in a WTU. In January 2008, the Army issued a policy establishing Comprehensive Transition Plans for WTU servicemembers. A plan includes a servicemember’s medical conditions and vocational training needs, as well as his or her expectations and goals for the recovery process. The Army requires that a servicemember’s transition plan be developed within 30 days of his or her placement into the WTU by WTU leadership and Triad of Care staff with input from the servicemember and his or her family. The WTU and MTF commanders are responsible for ensuring that the transition plan is developed. Army policy requires that the Triad of Care monitor the servicemember’s transition plan weekly. For example, officials told us that meetings, which may include staff in addition to the Triad of Care, are held to determine whether the goals documented in the servicemember’s transition plan are being met and to modify the plan as necessary. 
Additionally, according to an Army official, conducting periodic formal evaluations of the transition plan is required to determine whether the servicemember should (1) return to duty, (2) continue rehabilitation, or (3) be referred to the Army disability evaluation process. An official said that these formal evaluations occur at least every 3 months, but can occur more often based on the servicemember's transition plan. In addition to actions already underway, the Army is developing additional policy to assist WTUs in developing the Comprehensive Transition Plans, which could help ensure that the plans are implemented consistently across WTUs and that the transition needs of all servicemembers in the WTUs are regularly assessed. According to the Army, this additional policy will include guidance on setting goals with servicemembers and their families. It will also include performance measures that will allow the Army to more systematically monitor the extent to which WTUs have developed transition plans for their servicemembers. For example, according to the Army, the performance measures will include the number of servicemembers in WTUs for more than 30 days who do not have a transition plan. The policy will require that the performance measures be reported at least monthly. During a 6-month period over the course of our review, Army officials provided us with various dates by which they expected this policy to be finalized, but it had not been finalized as of February 27, 2009. Related to one of these performance measures, the Army has begun reporting data on the number of servicemembers in WTUs for more than 30 days who have a transition plan. Our analysis of these data shows that as of January 6, 2009, 94 percent of all servicemembers in WTUs across the United States had transition plans. Specifically, between 84 and 100 percent of servicemembers at 32 of 33 WTUs had transition plans.
At the remaining WTU, 73 percent of servicemembers had transition plans. Officials from this WTU said that, because of the rapid growth in the WTU servicemember population, there were insufficient staff in some positions involved in developing the transition plan, such as social workers. As a result, officials were first developing transition plans for servicemembers who had the greatest need. Additionally, officials said that some servicemembers did not need transition plans because they were in the process of leaving the WTU. Using various upward feedback mechanisms, the Army has obtained information about different aspects of its WTUs, including the Triad of Care. (See table 1.) For example, the Army requires each of its WTUs to hold monthly Town Hall meetings to serve as a forum for WTU servicemembers and their family members to voice their concerns directly to WTU and installation leadership. Additionally, after the media reported deficiencies at Walter Reed Army Medical Center, the Army established two other feedback mechanisms—the Wounded Soldier and Family Hotline and the Ombudsman Program—which are also available to servicemembers receiving care at the MTF who are not part of the WTU and their families. Through both of these mechanisms, Army personnel are available to address servicemembers’ concerns about medical and nonmedical issues, including transportation, financial, legal, and housing concerns. The Army collects and analyzes data from these feedback mechanisms to identify trends and potential problem areas. While this may provide helpful and important information to Army leadership about the performance of the WTUs, the concerns raised through these mechanisms are not necessarily representative of all concerns of WTU servicemembers and their families because they are dependent upon the initiative taken by individuals and because they may include concerns from servicemembers not in WTUs. 
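The trend analysis the Army performs on its feedback data can be sketched, at its simplest, as a tally of concerns by category and installation. The records and category names below are invented for illustration; the report does not describe the Army's actual data systems.

```python
from collections import Counter

# Hypothetical hotline records: (installation, concern category) pairs.
concerns = [
    ("Fort Lewis", "housing"), ("Fort Lewis", "financial"),
    ("Fort Gordon", "housing"), ("Fort Lewis", "housing"),
    ("Fort Benning", "medical"), ("Fort Gordon", "financial"),
]

def trend_report(records, top_n=3):
    """Rank concern categories overall and tally them per installation,
    flagging potential problem areas such as a cluster of housing issues."""
    overall = Counter(category for _, category in records).most_common(top_n)
    by_site = {}
    for site, category in records:
        by_site.setdefault(site, Counter())[category] += 1
    return overall, by_site

overall, by_site = trend_report(concerns)
# overall -> [('housing', 3), ('financial', 2), ('medical', 1)]
```

Because the entries depend on who chooses to come forward, a tally like this describes reported concerns, not the views of all WTU servicemembers—the same limitation the report notes for these feedback mechanisms.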
In addition, the Army obtains feedback on WTUs through its Warrior Transition Unit Program Satisfaction Survey, which solicits feedback on the performance of WTUs, including the WTUs in Germany and the community-based WTUs. This survey is designed to assess servicemembers’ satisfaction with various aspects of WTUs, including the primary care manager and nurse case manager. The Army began administering this survey in June 2007 to servicemembers who had been placed in WTUs. The Army mails the survey to WTU servicemembers on the 30-, 120-, 280-, and 410-day anniversaries of their placement into the WTU. In February 2008, the Army began following up by telephone with servicemembers who did not respond 30 days after the surveys were mailed. Although the Army has used this survey to report relatively high satisfaction rates among WTU servicemembers, including servicemembers at WTUs in Germany and community-based WTUs, the survey results may not be representative of all WTU servicemembers. During the period July 2007 through September 2008, the Army’s data showed that for WTUs at military installations, the percentage of servicemembers satisfied ranged between approximately 60 and 80 percent, and for the community-based WTUs, between approximately 80 and 90 percent. However, the overall monthly response rates for WTU respondents ranged between 13 and 35 percent for the period June 2007 through September 2008, which was the most current data available at the time of our review. Such a low response rate decreases the likelihood that the survey results accurately reflect the views and characteristics of the target population. Despite low response rates, the Army has not conducted additional analyses to determine whether its survey results are representative of the entire WTU servicemember population. 
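One common way to check representativeness—offered here as a hedged sketch, not as the Army's method—is to compare respondents with nonrespondents on a characteristic known for both groups (for example, the share who are active component) and test whether the groups differ. All counts below are invented for illustration.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic comparing the share of a trait among respondents
    (x1 of n1) with the share among nonrespondents (x2 of n2)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 210 of 300 respondents vs. 420 of 700 nonrespondents are
# active component (an overall response rate of 30 percent).
z = two_proportion_z(210, 300, 420, 700)
# An |z| well above ~1.96 suggests respondents differ systematically from
# nonrespondents, i.e., the survey results may not represent the full
# WTU population on this characteristic.
```

A fuller nonresponse analysis would repeat this comparison across several characteristics (component, length of stay, location) and, where differences appear, weight or otherwise adjust the results.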
According to Office of Management and Budget guidelines, best practices to ensure that survey results are representative of the target population include conducting a nonresponse analysis for surveys with a response rate lower than 80 percent. Although the Army was not required to seek the Office of Management and Budget’s approval for the Warrior Transition Unit Program Satisfaction Survey, these are generally accepted best practices and are relevant for the purposes of assessing whether the survey results are representative of all WTU servicemembers. A nonresponse analysis may be completed on more than one occasion, depending on how frequently the survey is administered. A nonresponse analysis can be used to determine if the responses from nonresponding servicemembers would be the same as the responses from responding servicemembers. Therefore, this analysis could help the Army determine whether its WTU satisfaction survey results are representative of all WTU servicemembers. An Army official told us that the Army does not plan to conduct nonresponse analyses because it is satisfied with the response rates that it has been receiving since it began following up with servicemembers by telephone in February 2008. For the period February 2008 through September 2008, WTU response rates for both mail and telephone respondents, including WTUs in Germany and community-based WTUs, have ranged between 26 and 35 percent. In addition, this official told us that beginning in Spring 2009 the Army no longer plans to conduct this survey by mail, but will conduct this survey solely by telephone, and expects response rates to further increase once this occurs. Nonetheless, the Army has used its survey results to monitor trends and identify areas for improvement. For example, the Army conducted additional analyses of nine WTUs, which are among the largest WTUs. 
For one of these WTUs, the Army reported that additional analyses indicated that factors contributing to low satisfaction included decreased satisfaction with pain control and financial issues. The analyses also showed that servicemembers in this WTU for more than 280 days were the most dissatisfied. While Army leadership may use the Warrior Transition Unit Program Satisfaction Survey results to identify areas for improvement, Army officials at some locations we visited said that low response rates and a lack of specific information limit the usefulness of the survey at the local level. Consequently, some WTUs have undertaken local efforts to collect information about servicemembers' satisfaction. Army officials at three of the WTUs we visited told us that they have independently conducted local satisfaction surveys to obtain specific information from their servicemembers. These local efforts have focused on gauging satisfaction in several areas, including, for example, satisfaction with nurse case managers, primary care managers, and squad leaders. The local surveys do not replace the Army-wide satisfaction survey, and Army officials reported that they have been able to use them to improve services at individual WTUs. For example, at one location we visited, officials administered a satisfaction survey in January 2008 and August 2008 that focused on the nurse case managers. These results showed that, while servicemembers were generally satisfied with their nurse case managers, a few servicemembers commented that their nurse case manager's caseload was too large. In response to the survey results, the WTU has worked to balance the caseload among the nurse case managers so that no case manager has an excessive number of WTU servicemembers. After problems at Walter Reed Army Medical Center were disclosed in early 2007, the Army dedicated significant resources and attention to improving outpatient care for servicemembers through the establishment of the WTUs.
Initially, the Army faced challenges fully staffing the units to serve an increasing population, but revisions to WTU policies substantially reduced staffing shortfalls and appeared to manage population growth for active component servicemembers. As of January 2009, almost all of the Triad of Care positions in the WTUs were fully staffed. In addition, the number of active component servicemembers in WTUs decreased within the first 4 months of implementing the revised entry and exit criteria. Sustained attention to staffing levels and the implementation of the revised WTU entry and exit criteria will be important for maintaining these gains and helping to ensure that servicemembers are getting the care that they need. The Army demonstrated its dedication to caring for its WTU servicemembers by studying and revising its staffing model, including staff-to-servicemember ratios for selected positions, to help ensure the WTUs were providing an appropriate level of care. However, a lingering concern—in light of the study's findings not applying to the WTU at Walter Reed Army Medical Center—is that the Army does not have a plan to conduct a similar study for this WTU. The population receiving care at Walter Reed has more complex health care needs than the population at other WTUs, and might also require, for example, higher staff-to-servicemember ratios. Without an assessment of the current staffing model that considers this complexity, the Army cannot be assured that it is providing an appropriate level of care to servicemembers at Walter Reed. This evaluation could help the Army determine the appropriate staffing model for the population at Walter Reed and ensure that previously reported problems with coordination of care and treatment for this population do not recur.
Furthermore, an assessment of Walter Reed’s staffing model could help the Army make staffing decisions in preparation for the transfer of seriously injured servicemembers to other facilities once Walter Reed closes in 2011. Continued monitoring of the Army’s WTUs, including servicemembers’ recovery process, will be important for ensuring that these units are meeting servicemembers’ needs. The Army’s Comprehensive Transition Plans appear to be a significant step towards ensuring that servicemembers are receiving the care they need by regularly assessing their progress. However, the Army has not finalized policy that would allow it to systematically determine whether WTUs are consistently developing these plans. The Army has also established various upward feedback mechanisms that help inform Army leadership about issues WTU servicemembers are facing, but they do not provide information on the overall effectiveness of the WTUs. The Army’s Warrior Transition Unit Program Satisfaction Survey could potentially be used to collect information representative of the WTU population. However, the survey has had low response rates, and the Army has not performed additional analysis to determine whether these results are representative of all WTU servicemembers. Although the Army’s plan to conduct the satisfaction survey solely by telephone may increase response rates, nonresponse analyses may still be warranted because the response rates may remain well below 80 percent—the level where generally accepted best practices call for nonresponse analyses to ensure that survey results are representative. Without representative information, the Army cannot reliably report servicemembers’ satisfaction with the WTUs, and without such data Army officials could potentially be unaware of serious deficiencies like those that were identified at Walter Reed in 2007. 
We recommend that the Secretary of Defense direct the Secretary of the Army to take the following three actions: To help ensure that the WTU at Walter Reed Army Medical Center is providing an appropriate level of care to servicemembers and help the Army make future staffing decisions for the WTUs that will be caring for this population once Walter Reed closes, the Army should examine Walter Reed's WTU staffing model, including its Triad of Care staff-to-servicemember ratios, in light of the complexity of the health care needs of servicemembers placed in this WTU. To help ensure that the Comprehensive Transition Plans are implemented consistently across WTUs and that the Army has performance data for monitoring the implementation of the transition plans, the Army should expedite efforts to finalize and implement its policy for guiding the development of the Comprehensive Transition Plans. To determine whether the results of the Warrior Transition Unit Program Satisfaction Survey can be used to assess the effectiveness of the WTUs, the Army should take steps to determine whether the results are representative of all servicemembers in WTUs, such as by conducting nonresponse analyses, and should take additional steps if necessary to obtain results that are representative. In commenting on a draft of this report, DOD stated that it concurred with our findings and recommendations. (DOD's comments are reprinted in appendix II.) However, DOD's description of the actions that it has taken and those that it plans to take to respond to the recommendations did not fully address two of the recommendations. In response to our recommendation to examine the WTU staffing model at Walter Reed Army Medical Center, DOD indicated that the Army has multiple planning efforts and studies underway to prepare for the closing of Walter Reed.
For example, it indicated that the Center for Army Analysis is determining the capacity and capabilities of Fort Meade, Fort Belvoir, and the new Walter Reed National Military Medical Center to determine how best to provide the appropriate level of care and services to these WTU servicemembers. DOD also indicated that Walter Reed has sufficient resources to provide appropriate care until the new Walter Reed is completed. Specifically, DOD commented that Walter Reed's staffing has met or in certain areas exceeded that of other WTUs—for example, nurse case managers have dedicated supervisory assistance available to them at all times and the Walter Reed nurse case manager staff-to-servicemember ratio is 1:18, compared to 1:20 at other WTUs. In describing the Army's efforts and studies, however, DOD did not indicate how, if at all, they would be examining the WTU staffing model at Walter Reed, including the Triad of Care staff-to-servicemember ratios. Furthermore, although Walter Reed may have additional resources and its nurse case managers may operate under a slightly higher ratio, the population receiving care at Walter Reed has more complex health care needs than the population at other WTUs. We continue to believe that without an assessment of the current staffing model that considers this complexity, the Army cannot be assured that it is providing an appropriate level of care to servicemembers at Walter Reed. Furthermore, we continue to believe that such an assessment can help the Army make future staffing decisions for the WTUs that will be caring for this WTU population once Walter Reed closes. As such, it is imperative that DOD take all actions necessary to examine the WTU staffing model at Walter Reed.
With respect to our recommendation for the Army to take steps to determine whether the results of the Warrior Transition Unit Program Satisfaction Survey are representative of all servicemembers in WTUs, DOD's response did not indicate that the Army will be taking the actions that we recommended. DOD indicated that the Army's change to telephone surveys has greatly increased response rates and that a nonresponse analysis is currently not required. However, DOD did not indicate its most recent response rates. DOD indicated that the Army would reevaluate the need for a nonresponse analysis by September 1, 2009. Nevertheless, unless the change to telephone surveys has resulted in a response rate of 80 percent or higher, we believe that taking steps to determine whether the results are representative of all servicemembers in WTUs is warranted. Without such data, we continue to believe that the Army cannot reliably report servicemembers' satisfaction with the WTUs and that Army leadership could potentially be unaware of serious deficiencies in some of its WTUs.

With regard to our recommendation for the Army to finalize and implement its policy for guiding the development of Comprehensive Transition Plans, DOD responded that the policy was signed on March 10, 2009. DOD also indicated that staff associated with the Army's Organizational Inspection Program are assisting with the implementation of the plans and will validate compliance with the new policy.

We are sending copies of this report to the Secretary of Defense, relevant congressional committees, and other interested parties. The report also is available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.
Overall, to evaluate the Army's efforts to staff and monitor its Warrior Transition Units (WTU), we obtained documentation from and interviewed officials with the Army's Office of the Surgeon General, Medical Command, Warrior Care and Transition Office, Manpower Analysis Agency, and Office of the Inspector General. To gain an understanding of staffing and monitoring activities at individual WTUs, we visited five WTU locations—Forts Benning and Gordon (Georgia), Fort Lewis (Washington), Fort Sam Houston (Texas), and Walter Reed Army Medical Center (Washington, D.C.). We selected these locations because they represent different Army regional Medical Commands and they vary in the number of servicemembers placed in the WTU. Because we did not visit a representative sample of WTUs, the results from these visits cannot be generalized to other WTUs. At each location, we met with WTU command staff, nurse case managers or primary care managers, and servicemembers placed in the WTU to gain their perspectives on case management services being provided through the WTU. We also met with officials representing the Army's regional Medical Command to discuss case management services, including staffing and monitoring. Lastly, we met with officials representing the Case Management Society of America to obtain their perspectives on the Army's WTUs and efforts to monitor health care provided to servicemembers.

More specifically, to assess the Army's ongoing efforts to staff its WTU Triad of Care positions—primary care managers, nurse case managers, and squad leaders—we obtained and reviewed the Army Warrior Care & Transition Program guidance, which established policies for implementing the WTUs. We also reviewed additional staffing policies that the Army established in July 2008. These policies included additional requirements for staffing the WTUs and a new WTU staffing model that included revised WTU staff-to-servicemember ratios.
To determine the extent to which the Army was meeting its staff-to-servicemember ratios for its Triad of Care positions, we analyzed Army staffing and servicemember population data for the 33 WTUs that were established at MTFs located at Army installations within the United States. We did not verify the accuracy of these data. We did, however, speak with Army officials regarding the reliability of the data and determined them to be sufficiently reliable for the purposes of our review. We also did not evaluate the appropriateness of the Triad of Care ratios for meeting the staffing needs of the WTUs.

To determine how the Army is monitoring the recovery process of servicemembers in WTUs, we reviewed the Army's policy and guidance regarding the implementation of its Comprehensive Transition Plans. We also spoke with an Army official about a draft policy related to the documentation of the transition plans that would include performance measures to track compliance. To determine the extent to which the 33 WTUs within the United States had plans for individual servicemembers, we analyzed the Army's biweekly data on the number of servicemembers who had been in a WTU for at least 30 days and who had a transition plan. We did not verify the accuracy of these data. We did, however, speak with an Army official regarding the reliability of the data and determined them to be sufficiently reliable for the purposes of our review.

We also reviewed protocols and procedures for selected upward feedback mechanisms. The Army uses a number of mechanisms for obtaining feedback from servicemembers and their families to address WTU-related issues, but we did not review every mechanism. We focused on the Town Hall Meeting, the Wounded Soldier and Family Hotline, the Ombudsman Program, and the Warrior Transition Unit Program Satisfaction Survey.
We focused on these mechanisms because they were implemented shortly after the media reported deficiencies at Walter Reed Army Medical Center and because they provide WTU servicemembers and their families with methods for sharing their experiences and concerns about health care and case management with Army leadership.

For the Army's Warrior Transition Unit Program Satisfaction Survey, which is used to assess servicemembers' satisfaction across all WTUs, we reviewed the survey questionnaire, protocol, and results for the period July 2007 through September 2008, which were the most recent data available at the time of our review. We reviewed and analyzed Army data on the number of surveys mailed monthly and corresponding response rates for all of the WTUs, including the overseas and community-based WTUs. We assessed the reliability of these data by reviewing related documentation and speaking with knowledgeable agency officials and determined the data to be sufficiently reliable for our purposes. We also reviewed the Office of Management and Budget Standards and Guidelines for Statistical Surveys (September 2006) to identify standards for statistical surveys conducted by federal agencies, including best practices for ensuring that survey results are representative of the target population. Although the Army is not required to seek Office of Management and Budget approval to conduct its satisfaction survey, these guidelines are relevant for assessing whether survey results are representative.

Lastly, three of the WTUs we visited administered local surveys, and we obtained and reviewed their survey questionnaires and corresponding results, when available. However, we did not review the survey methodology for those WTUs that administered a local survey. Further, because these local surveys collected data that were specific to these WTUs, the survey results cannot be generalized to all WTUs.
We conducted this performance audit from June 2007 to April 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Bonnie Anderson, Assistant Director; Janina Austin; Susannah Bloch; Christopher Langford; Lisa Motley; Jessica C. Smith; C. Jenna Sondhelm; and Suzanne Worth made major contributions to this report.

In February 2007, a series of Washington Post articles disclosed problems at Walter Reed Army Medical Center, particularly with the management of servicemembers receiving outpatient care. In response, the Army established Warrior Transition Units (WTU) for servicemembers requiring complex case management. Each servicemember in a WTU is assigned to a Triad of Care—a primary care manager, a nurse case manager, and a squad leader—who provide case management services to ensure continuity of care. The Army established staff-to-servicemember ratios for each Triad of Care position. This report examines (1) the Army's ongoing efforts to staff WTU Triad of Care positions and (2) how the Army monitors the recovery process of WTU servicemembers. GAO reviewed WTU policies, analyzed Army staffing and monitoring data, interviewed Army officials, and visited five selected WTUs.

The Army has taken several steps to help ensure that WTUs are staffed appropriately. First, the Army developed policies aimed at reducing WTU staffing shortfalls, including a policy requiring the reassignment of other personnel on an installation to fill open WTU positions.
Second, in October 2008, the Army revised its WTU staffing model, including the staff-to-servicemember ratios for two of its Triad of Care positions, because an Army study determined that the existing ratios were not adequate to provide an appropriate level of care to servicemembers in WTUs. The Army has made considerable progress in meeting the new ratios, and as of January 2009, the Triad of Care positions at most WTUs were fully staffed. However, staffing ratios for the WTU at Walter Reed Army Medical Center were not revised, even though the Army recognizes that servicemembers treated at this facility have more complex health care needs than servicemembers at other WTUs. Walter Reed might require a different staffing model, for example, one that decreases the number of servicemembers assigned to staff members, but the Army does not plan to conduct an assessment of Walter Reed's staffing model. Third, the Army modified its WTU placement and exit criteria for full-time servicemembers, excluding Army Reserve and National Guard servicemembers who comprise about one-third of the WTU population. These changes are intended to help ensure that only those who need complex case management are in WTUs. Those with less serious health care needs can be reassigned to other units on the installation to continue their recovery. As the Army expected, the WTU population of full-time servicemembers declined by about 1,500 in the 4 months after implementation of the new criteria. To monitor the recovery process of WTU servicemembers, the Army has implemented transition plans for individual servicemembers as well as various upward feedback mechanisms to identify concerns and gauge satisfaction. In January 2008, the Army issued a policy establishing Comprehensive Transition Plans, which can be used to monitor and coordinate servicemembers' care. 
To help ensure consistent implementation of these plans among its WTUs, the Army is developing a new policy that includes the systematic collection of performance measures across WTUs. However, despite Army officials' repeated assurances to GAO that this policy was forthcoming, it had not been finalized as of February 27, 2009. The Army's feedback mechanisms include its Warrior Transition Unit Program Satisfaction Survey, which collects information from servicemembers in WTUs on a number of issues, including the primary care manager and nurse case manager. However, the survey's response rates for the WTUs have been low (13 to 35 percent) and the Army has not determined whether the results obtained from the respondents are representative of all WTU servicemembers. An Army official told GAO that the Army does not plan to conduct analyses to determine whether the survey results are representative, because it is satisfied with the response rates. In GAO's view, the response rates are too low for the Army to reliably report satisfaction of servicemembers in WTUs. |
An ASC patient may acquire an HAI from bacteria or viruses contaminating, for example, the hands of a health care worker or a needle or tube used to deliver medicine, fluids, or blood. These bacteria or viruses may include those responsible for such illnesses as staphylococcus infections and hepatitis.

Two agencies in HHS have activities under way to prevent, control, or monitor HAIs. CDC—a key HHS agency for research and programs designed to prevent HAIs—has issued 13 guidelines relevant to infection control and prevention in health care settings. In these guidelines, which are based on scientific evidence, CDC recommends practices for implementation to prevent HAIs. Practices recommended to prevent or control HAIs include, for example, appropriate isolation of infected patients in health care facilities, proper sterilization of equipment, appropriate provision of antibiotics to patients before surgery, annual vaccination of health care personnel for influenza, and hand washing or the use of alcohol-based hand rubs.

CMS is responsible for ensuring that ASCs that are certified as suppliers of Medicare-covered services comply with its requirements for infection control. For most ASCs, this occurs through the state-administered standard survey process conducted by state survey agencies under contract with CMS. ASCs may choose instead to undergo accreditation by a CMS-approved accrediting organization. CMS-approved accreditation programs for ASCs have standards that meet or exceed Medicare's standards. Accrediting organizations are to conduct periodic surveys of ASCs to assess their compliance with the standards established by the accrediting organization, including those related to infection control. The state survey agency or accrediting organization assesses compliance through direct observation of activities in the facility and review of its policy documents.
If an ASC opts for the CMS state-administered survey process, a state surveyor uses CMS's survey guidance to conduct the state's compliance review of the ASC.

We identified five disparate sources of HAI data, all of which differed from one another in the types of HAI information they collected. However, none obtained its data from a nationally representative random sample of ASCs, and therefore none could be used to develop national estimates of HAI outcomes or compliance with infection control practices that affect the risk of acquiring HAIs in ASCs. Two federal data sources—CDC's NHSN and CMS's ASC pilot study—provided the most detailed information on HAI outcomes and infection control practices, respectively.

We identified five disparate data sources that currently collect data on HAIs in ASCs. HHS operates two of these data sources—CDC's NHSN and CMS's ASC pilot study. Two are maintained by professional associations—the Ambulatory Surgery Center Association's Outcomes Monitoring Project and the American Association for Accreditation of Ambulatory Surgical Facilities, Inc. (AAAASF) Internet-based Quality Assurance and Peer Review Reporting System. Finally, the state of Missouri collects data on HAIs in ASCs through its Missouri State HAI Reporting System. (See table 1.)

The five data sources do not provide nationally representative information on HAIs in ASCs. In order to provide a basis for a nationwide estimate of risks of HAIs in ASCs, a data source would need to collect its data from a nationally representative random sample. None of the five data sources does so, and therefore it is not possible to generalize from their results to the nationwide population of ASCs or patients that they treat. Consequently, each of these data sources provides information only about the facilities that actually submit data to it and cannot reliably be used to describe other facilities.
In terms of their coverage across ASCs, each of these data sources collects information on HAIs from a relatively small proportion of the 5,100 ASCs in the United States. The coverage ranges from the 26 ASCs that most recently reported data to the state of Missouri to about 650 ASCs reporting to the Ambulatory Surgery Center Association’s database. Moreover, which ASCs are included in each of the databases is determined by highly variable criteria. They include, depending on the database, a decision by the individual ASC to voluntarily participate, membership of the ASC in a particular professional association, and selection of an ASC based on its geographic location in a particular state. For example, all the ASCs in the ASC pilot study are from Maryland, North Carolina, or Oklahoma because those states volunteered to participate in the study, and the ASCs covered by the two professional organizations and the state source are taken from narrowly defined subsets of ASCs, that is, from member ASCs or from ASCs within a certain geographical area, respectively. The five data sources also vary in the type and level of detail of the information they collect. NHSN, AAAASF, and Missouri’s system collect data on individual patients, and the ASC pilot study and the Ambulatory Surgery Center Association’s database collect data that are aggregated to the facility level. Four of the five data sources—all but the ASC pilot study—collect information on patient outcomes, specifically rates of SSIs. However, of those four, only the federal NHSN and state of Missouri databases employ standard CDC definitions to identify cases with SSIs based on these criteria. The two professional association databases leave identification of SSIs to individual physician judgment. Both professional association databases also collect information on one or more process measures. 
One of these databases focuses on a practice intended to prevent SSIs—the routine use of antibiotics prior to surgery—and the other collects information on the treatment of SSIs. The ASC pilot study collects data solely on process measures. The most detailed data are provided by the two federal data sources, NHSN—the most widely recognized source of outcome data on HAIs—and the ASC pilot study. The pilot study collects data on a broad range of process measures assessing the implementation of infection control practices, such as those intended to prevent the transmission of infections through appropriate hand hygiene, injection, and sterilization procedures. A key feature of NHSN is that it collects clinically sophisticated and standardized data on HAI outcomes. Facilities that participate in NHSN, including ASCs, agree to collect and submit information on HAI outcomes, such as SSIs, according to defined protocols and standardized definitions. CDC developed detailed protocols for NHSN that specify the medical record and laboratory data needed to identify and categorize HAIs in accordance with CDC’s standardized definitions. These protocols are widely accepted by infection control professionals because they make the data in NHSN clinically relevant and comparable across the facilities submitting data to NHSN. At the same time, the data collection procedures used by NHSN can be labor intensive and technically complex for some users. For example, one expert reported that ASCs found data submission to NHSN to be time-consuming and that an ASC might opt out of the program if its demands on staff time and other resources became excessive. Although the number of ASCs currently submitting data to NHSN is unknown, it is likely to be small. NHSN has national open enrollment for multiple types of facilities. However, until September 2008 only hospitals and outpatient hemodialysis centers could enroll in NHSN. 
In September 2008, CDC launched a new release of NHSN that enabled freestanding ASCs that were separate from hospitals to enroll. Enrollment of ASCs may increase over time, especially if more states enact programs mandating public reporting of HAIs by ASCs using NHSN. According to a CDC official, CDC has a facility survey under way that will enable it to determine the number of ASCs that enroll in NHSN, but this official does not expect to have results available from this survey until spring 2009. Nonetheless, independent of the number of ASCs that participate in NHSN, the processes by which ASCs enroll make NHSN data nonrepresentative of ASCs nationwide. Some ASCs enroll in NHSN voluntarily, and others are required to enroll by mandate of their state government. Because NHSN uses voluntary and mandatory selection procedures, the selection of ASCs for participation in NHSN is nonrandom. This lack of random selection precludes a projection of its results to any ASCs that do not participate and generalization to the national population of ASCs. The ASC pilot study examined the potential for using CMS’s standard surveys to collect information on ASCs’ implementation of specific infection control practices. Under the pilot study, CMS modified the standard survey process by introducing two innovations—the incorporation of a CDC-developed infection control assessment tool and direct observation by the surveyor of a single patient’s care from start to finish of the patient’s stay. CDC officials also provided training for state surveyors on using the tool and developed plans to analyze the infection control data obtained with the tool. A CMS official told us that CMS would consider making changes to CMS’s standard survey process for ASCs after reviewing planned CMS and CDC analyses of the pilot study results. The surveys conducted under the pilot study collected more detailed information on practices that affect the risk of HAIs in ASCs than have previous surveys of ASCs. 
CMS’s current survey process requires surveyors to ascertain whether an ASC’s written policies and procedures address certain general topics pertaining to infection control. In doing so, surveyors assess the implementation of these policies and procedures and an ASC’s overall maintenance of a sanitary environment through direct observation and interviews with ASC staff. If surveyors find that either the content of those policies and procedures or their implementation by ASC staff is insufficient to meet CMS’s infection control standard, they submit a deficiency report to CMS that provides a detailed narrative describing the particular conditions or activities in the ASC that created that deficiency. In contrast, the pilot study’s infection control assessment tool focused on specific CDC-recommended infection control practices. The tool is a 12-page document that includes dozens of specific infection control practices, involving such topics as environmental cleaning, disinfection, sterilization, and injection safety. CDC researchers who developed the tool included those practices that they had found were most critical for the prevention of HAIs in the ASC setting. CMS modified the tool to indicate when responses to the tool’s questions identified a violation of the ASC health and safety standards for infection control. During the course of the pilot study, surveyors recorded on the tool itself whether or not ASC staff appropriately implemented each of those practices, based on a combination of on-site interviews and observation. For each survey in the pilot, surveyors submitted a completed tool to CMS, along with the usual statements of deficiency for those ASCs where the surveyors found inadequate compliance with the infection control or other standards. 
Collecting completed tools for every surveyed ASC made it possible to produce standardized quantitative data on the extent of compliance with each of the practices assessed by the tool across all ASCs surveyed for the pilot study. The tool provides detailed guidance to surveyors on how to assess the implementation of these practices. In addition, the training provided by CDC officials on how to use the tool included the principles of disease transmission to prepare the state surveyors to observe ASC practices with a “sharp eye” for serious mistakes that could lead to the transmission of HAIs. State officials from the pilot states reported positive assessments of the pilot survey process and noted that during the pilot surveyors observed unsafe practices that they would not have detected using the current survey guidance. These practices included ASC staff using single-use medication vials for multiple patients and failing to properly sterilize equipment. State officials reported that surveys conducted under the pilot study took additional time and staff resources, although specific amounts varied. In all three states, surveyors conducted a standard survey for a given ASC in addition to completing the infection control assessment tool and observing a patient’s care from start to finish. For the two states that had previously conducted standard surveys of ASCs, one found that implementing the pilot study’s two innovations required substantial additional staff resources, and the other found that, with practice, only a modest amount of additional resources was needed. CMS and CDC officials reported that they intended to separately analyze the results of the pilot study, each agency having a different focus. 
Specifically, a CMS official reported that CMS would analyze the effect of the pilot study’s innovations on CMS’s ability to assess the level of compliance of ASCs in the pilot states with Medicare’s health and safety standards, including the standard pertaining to infection control. In addition, from its interviews with state officials, CMS has obtained information on what techniques were effective for using the infection control assessment tool and related CDC training. CMS’s review would identify where lapses in infection control practices were found by surveyors in the pilot states and use these data to strengthen CMS’s ASC survey guidance, which CMS is currently in the process of updating. CDC officials reported that their analysis of the pilot study would focus on deriving a baseline understanding of how safely care was being delivered in ASCs, by determining the prevalence of lapses in specific infection control practices. These officials stated that CDC would use the analysis to identify “hot spots” for infection control errors for which it could target future recommendations and trainings. Neither CDC nor CMS officials have determined a timeline for the completion of their respective activities. As of October 2008, surveyors in the pilot states had finished their surveys and submitted the information they collected to CMS to be analyzed separately by CMS and CDC. Officials from both agencies estimated that their analyses of the survey results would be available in fiscal year 2009, but said they did not have any written plan or timeline for completing their analyses. A CMS official reported that agency officials planned to consider making some changes to CMS’s standard survey process for ASCs after reviewing the CMS and CDC analyses but did not intend to continue the pilot study’s data collection. This official reported that CMS was considering adopting the practice of directly observing patients from start to finish that was tested in the pilot study. 
This official also stated that CMS was considering whether to use the infection control assessment tool simply as a prompt for surveyors in assessing compliance with its infection control standard. The official noted that the tool provided precise guidance that had previously been lacking on specific practices that surveyors should examine in assessing compliance with the infection control standard. Under the pilot study, the assessment tool allowed surveyors to record ASC compliance with specific infection control practices in a quantifiable manner. In contrast, if the tool is used as a prompt, the surveyors would report only the instances where ASCs were found to be out of compliance with the standard as a whole, giving a narrative description of the reasons why, as they currently do under the standard survey process. CMS officials told us that they did not intend to continue using the tool to collect data, as was done in the pilot study. Even if CMS were to continue the pilot study’s data collection methods, it still would not be able to use these data to make estimates about the prevalence of safe and unsafe infection control practices in ASCs nationwide. CMS’s current policy for selecting ASCs to survey eschews random selection in favor of an approach that seeks to maximize the impact of limited survey resources, including targeting ASCs considered most likely to represent a greater risk for quality issues and selecting those that have not been surveyed within a given time interval. Specifically, in selecting ASCs for these surveys, CMS requires state survey agencies to give highest priority to ASCs that have not been surveyed in 6 years or more or that have had recent compliance problems. State survey agencies survey about half of ASCs every 3 to 4 years, but some ASCs go much longer between surveys—20 percent more than 6 years and 8 percent more than 10 years. 
CMS officials told us they were concerned that the level of ASC survey activity in recent years had not been sufficient to provide meaningful and current data on ASC performance across the board, including infection control issues. As a result, for fiscal year 2009 CMS increased the number of highest-priority surveys that it funded states to conduct on ASCs from 5 to 10 percent of ASCs each year. However, because this larger number of surveys does not include randomly selected ASCs, the results would still not provide information that could be generalized to ASCs nationwide.

Experts we interviewed noted that the ASC environment presented challenges to the feasibility of collecting outcome data. Some of these challenges relate to the difficulties in identifying ASC patients who develop HAIs. The experts told us that patients tend to be in outpatient facilities for a relatively short time because ASC procedures generally take little time to perform. Because HAIs are not likely to develop until after a patient leaves an ASC, the opportunity to observe patients and collect HAI data is limited. The experts also told us that the opportunity to collect HAI outcome data might be further limited because rather than returning to the ASC if a complication develops following a procedure, patients often seek follow-up care from their primary care physician, a hospital emergency department, or an urgent care center. Consequently, the ASC might never know that an HAI occurred, and so would be unable to report it.

Experts noted that a general lack of infection control professionals in ASCs presents a challenge to the feasibility of collecting either outcome or process data. According to the experts, ASCs rarely have a designated infection control professional, that is, a health care worker trained to lead infection control efforts in a health care facility.
CDC officials told us that, as with NHSN, data collection for HAIs has historically been designed for hospitals with the understanding that, unlike most ASCs, hospitals have infection control professionals responsible for collecting such data. The lack of such an individual presents a challenge to the feasibility of collecting either type of data, especially when such data are technically complex or the data collection processes are labor intensive. Employing an infection control professional would require ASCs to devote time and resources to an area that they have traditionally thought to be low risk.

The experts we interviewed generally agreed that collecting process data on HAIs in ASCs is more feasible and potentially more useful than collecting data on outcomes. Several experts said it was more feasible to collect data on HAIs by focusing on process measures rather than outcome measures because unsafe practices may be observed with less effort and technical training than is needed to identify individual cases of HAIs. A CMS official reported that because of the relatively short time that patients are in the facility, the ASC environment lends itself well to the methodology of tracing a patient through his or her entire experience at the ASC as a means for observing specific practices, such as those related to infection control. The experts also noted that gathering such process data could provide useful guidance to ASCs. For example, such data could point to areas for specific remedial training on preventive activities, such as training on the proper use of single-dose vials and the appropriate procedures for sterilizing equipment.

The increasing volume of procedures and evidence of infection control lapses in ASCs create a compelling need for current and nationally representative data on HAIs in ASCs in order to reduce their risk.
Because HAIs generally occur only after a patient has left an ASC, data on the occurrence of these infections—outcome data—are difficult to collect. But data on the implementation of CDC-recommended infection control practices—process data—in ASCs can be collected more easily and can provide critical information on why HAIs are occurring and what can be done to help prevent them. One federal data source, the ASC pilot study, has shown the potential for using process data to increase the understanding of HAIs in ASCs. The pilot study tested the addition of an infection control assessment tool to collect detailed data on recommended practices during the course of a CMS standard survey. With the tool, specially trained state surveyors were able to identify serious lapses in recommended practices. Such lapses, which increased patients’ risks of developing HAIs, had not previously been detected through CMS’s standard surveys. The pilot study had the added benefit of not requiring ASCs, with their limited staff resources, to submit HAI data themselves.

The results of the ASC pilot study demonstrate the feasibility of collecting data on the prevalence of specific infection control practices while conducting surveys of ASCs. Although detailed analyses by CDC and CMS of the data obtained during the pilot are pending, officials in the three pilot states and at CMS uniformly reported positive assessments of the process developed by CMS and CDC to collect these data during the course of standard ASC surveys by state surveyors. However, CMS has no plans to continue collecting such data following the completion of the ASC pilot surveys.
If CMS and CDC do not build on their experience with and analyses of the pilot by continuing to collect such data from a subset of ASC surveys, using an instrument such as the infection control assessment tool, HHS will lose an opportunity to take advantage of the existing ASC survey process to collect information on the prevalence of infection control practices on an ongoing basis.

Collecting detailed data on the prevalence of infection control practices is only part of what is needed to increase the understanding of the problem of HAIs in ASCs nationwide. The ability of HHS to use CMS’s standard survey process to collect nationally representative process data on infection control practices in ASCs, and to make estimates about the prevalence of safe and unsafe infection control practices in ASCs nationwide, also depends on introducing random selection for ASC surveys. The larger the number of randomly selected ASCs surveyed, the greater the precision of the resulting estimates. For standard surveys, CMS currently selects those ASCs deemed most likely to have quality problems or that have not been surveyed within a given time interval, and does not select any randomly from the national population of ASCs. However, CMS has recently expanded the number of ASC surveys that it conducts, and HHS could choose to have CMS select some ASCs randomly for standard surveys while continuing to target others. In determining the number of ASCs to be randomly selected, HHS could weigh the value of obtaining more precise information from a larger number of randomly selected ASCs against the value of targeting surveys to those ASCs that may be more likely to have quality deficiencies. HHS could determine the number of ASCs it would need to select at random to generate meaningful national estimates to help identify where lapses in infection control practices by ASCs across the country were most likely to be putting patients at risk of contracting HAIs.
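As a rough illustration of the sample-size tradeoff described above, the standard formula for estimating a proportion shows how many randomly selected ASCs would be needed for a given margin of error. The population size, confidence level, and margins below are illustrative assumptions for the sketch, not figures from this report.

```python
import math

def required_sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Number of randomly selected facilities needed to estimate a
    proportion (e.g., the share of ASCs with an infection control lapse)
    within +/- margin at ~95% confidence (z=1.96), applying a finite
    population correction. p=0.5 gives the most conservative size."""
    n0 = (z ** 2 * p * (1 - p)) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

# Hypothetical national population of 5,000 ASCs
for margin in (0.10, 0.05, 0.03):
    print(f"+/-{margin:.0%} margin of error -> "
          f"{required_sample_size(5000, margin)} ASCs")
```

Tightening the margin of error increases the required sample sharply, which is the kind of precision-versus-cost tradeoff HHS would weigh against retaining targeted surveys.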
To obtain nationally representative and standardized information on the extent to which ASCs implement specific infection control practices that reduce the risk of transmitting HAIs to their patients, we recommend that the Acting Secretary of HHS develop and implement a written plan to use the data collection instrument and methodology tested in the ASC pilot study, with appropriate modifications based on the CDC and CMS analyses of that study, to conduct recurring periodic surveys of randomly selected ASCs. We provided a draft of this report to HHS for comment. In response, the Acting Administrator of CMS provided written comments, and we have reproduced these comments in appendix I. CMS also provided technical comments, which we have incorporated as appropriate. In its written comments, CMS stated that it concurred with our recommendation to HHS. CMS stated that it would use the results from the pilot study to evaluate the value and feasibility of incorporating the infection control assessment tool into the standard ASC survey process. The agency stated that if its evaluation resulted in a decision to use the infection control survey tool on an ongoing basis, then it would explore with CDC whether CDC would be able to continue to provide training and data analysis of the completed infection control assessment tools, as CDC did for the pilot study. Given such support from CDC, CMS stated that it would be willing to establish a process for randomly selecting at least some ASCs in each state for ASC surveys. We agree that implementing our recommendation requires analysis of the pilot study to determine appropriate modifications to the data collection tool and collaboration within HHS. 
However, given the risks of HAIs in ASCs and the compelling need for current and nationally representative data on them, it is important that the department follow our recommendation to develop and implement a written plan to ensure that it collects such data using recurring periodic surveys of randomly selected ASCs. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Acting Secretary of HHS and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or bascettac@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, key contributors to this report were William Simerl, Assistant Director; Jennel Harvey; Eric Peterson; Roseanne Price; and Andrea E. Richardson.

Health-care-associated infections (HAI) are a leading cause of death. Recent high-profile cases of HAIs in ambulatory surgical centers (ASC) due to lapses in recommended infection control practices may indicate a more widespread problem in ASCs, but the prevalence of such lapses is unknown. The Department of Health and Human Services' (HHS) Centers for Medicare & Medicaid Services (CMS) and other entities collect data on HAIs, including process data on the use of recommended practices and outcome data on HAI incidence. CMS conducts standard surveys on about half of ASCs every 3 to 4 years, assessing compliance with its standard on infection control. In this report, GAO examines the availability of data on HAIs in ASCs nationwide.
GAO interviewed subject-matter experts, agency officials, and trade and professional group officials. Disparate sources of data on HAIs in ASCs are available, but none provide information on the extent of the problem nationwide. Such data are useful for guiding federal policies aimed at preventing the lapses in infection control practices--such as reusing syringes and drawing medication to be injected into multiple patients from single-dose vials--that can lead to increased risk of HAIs for patients. GAO identified five data sources--two operated by HHS, two by professional organizations, and one by a state government--all of which differ from one another in the type of HAI information they collect. In order to make nationwide estimates of HAIs and lapses in related infection control practices in ASCs, a data source would need to collect its data from a nationally representative random sample of ASCs. However, none of the five sources does so. The two professional organizations and the state source collect data from narrowly defined subsets of ASCs. The most detailed data are provided by the two federal sources, one of which collects outcome data and the other process data. Experts GAO interviewed said it was more feasible for ASCs to collect process data than outcome data. The Centers for Disease Control and Prevention's (CDC) National Healthcare Safety Network collects detailed, standardized data on HAI outcomes that are comparable across hospitals and other health care facilities, but it has only recently begun to collect data on ASCs and it is not set up to collect nationally representative data. The other HHS data source, a CMS ASC pilot study conducted in three states, collects detailed process data on practices that affect the risk of HAIs. 
The pilot study tested the application of two innovations--a CDC-developed infection control assessment tool and direct observation by the surveyor of a single patient's care from start to finish of the patient's stay--during the course of CMS's standard surveys of selected ASCs. These innovations allowed surveyors to identify serious lapses in CDC-recommended infection control practices that would not have been detected during CMS's standard surveys. A CMS official told GAO that CMS officials would consider making changes to CMS's standard survey process after reviewing planned CMS and CDC analyses of the pilot study results but did not expect to collect standardized quantitative data on the extent of compliance with specific infection control practices using a data collection instrument, as was done with the assessment tool for the pilot. Even if CMS were to continue the pilot's data collection methods, the data would not be generalizable to ASCs nationwide--and thus could not provide information on the extent of the lapses--because ASCs are selected for surveys on the basis of their perceived risk for quality issues and the length of time since they were last surveyed, rather than through random selection. A random sample--the size of which CMS could determine--could generate national estimates that would identify those infection control practices where lapses by ASCs across the country were most likely to put their patients at risk of contracting HAIs.
The scope of equipment reset efforts that will be required as a result of ongoing operations related to Operation Iraqi Freedom (OIF) and Operation Enduring Freedom (OEF) is enormous. The services have committed a significant amount of equipment to these operations. Between 2003 and April 2005, the Army deployed more than 40 percent of its equipment in support of OIF and OEF. As of March 2005, the Marine Corps had about 22 percent of its total fleet assets engaged in Iraq. More recently, the Marine Corps estimated that approximately 40 percent of all Marine Corps ground equipment, 50 percent to 55 percent of communications equipment, and 20 percent of aircraft assets are in use in support of current operations.

According to the Army, reset comprises a series of repair, recapitalization, and replacement actions to restore units’ equipment to a desired level of combat capability commensurate with mission requirements and availability of resources. The purpose of reset is to bring unit equipment to combat-ready condition, either for the unit’s next rotation in support of current operations or for other, unknown future contingencies. The Army’s standard level of maintenance is known as 10/20. This standard requires that all routine maintenance be executed and all deficiencies be repaired. Equipment at less than the 10/20 standard can still be fully mission capable, which means there are no critical maintenance deficiencies as outlined in the technical manuals and instructions, and no safety deficiencies. Unit commanders have the authority to supersede the technical manuals and declare a system fully mission capable even though it has a non-mission capable deficiency. The Marine Corps’s equivalent term is “mission capable.”

The Army’s reset strategy for ground vehicles includes an additional set of maintenance procedures known as Delayed Desert Damage (3D), which are designed to address damage that results from these vehicles operating in a desert environment.
These procedures are designed to address damage that might otherwise not be visible. These 3D checks are initially performed at the unit level. Equipment that goes to a depot is subjected to more extensive 3D maintenance procedures. Army aviation equipment is subject to Special Technical Inspection and Repair (STIR). Similar to 3D, this maintenance is designed to address damage caused by operation in a desert environment. STIR also includes other routine maintenance.

Although the terms may be slightly different, the Marine Corps equipment repair and replacement process and equipment standards parallel the Army process and standards for equipment maintenance. The Marine Corps equivalent to the Army’s reset process is termed “recovery.” Marine Corps equipment returning from combat theaters is evaluated and transported to either a maintenance depot or a Marine Corps unit’s home station for repair. The Marine Corps’s equipment recovery process entails restoring all equipment used in Global War on Terror (GWOT) operations to its pre-GWOT condition. For equipment in the Marine Corps prepositioning fleet, this means restoring it to a “like new” condition; for all other equipment, it means restoring it to a mission capable status. The Marine Corps also applies procedures similar to the 3D checks as appropriate.

The Department of Defense (DOD) reported in April 2005 that it expected a new set of protocols to emerge based on experience with equipment used in OIF and OEF. These protocols may be similar to 3D and STIR, which emerged as maintenance procedures based on experience from Operation Desert Storm. DOD, as part of its ongoing effort to assess stress on equipment, plans to look for unusual wear patterns and methods to address them, as well as examining maintenance trends. Depot maintenance is defined as the highest level of maintenance activity, where the most complex maintenance work is done, from overhaul of components to complete rebuilds.
Military depots and defense contractors throughout the United States perform depot-level maintenance. In response to the harsh operating environments in Iraq and Afghanistan and the unanticipated and prolonged length and pace of sustained operations, the Army and Marine Corps have developed and implemented several initiatives to equip their forces and maintain extensive amounts of equipment in theater. Specifically, the Army and Marine Corps have implemented initiatives to keep large amounts of unit equipment in theater after the units redeploy to their home stations in the United States for the purpose of rapidly equipping follow-on units, and have developed additional maintenance capacity in theater above the unit level to sustain major equipment items such as high mobility multi-purpose wheeled vehicles (HMMWVs), other tracked and wheeled vehicles, and aviation equipment.

Environmental factors such as heat, sand, and dust have taken their toll on major equipment items. In addition, as we have previously reported, the Army and Marine Corps are operating equipment at a pace well in excess of their normal peacetime levels, which is generating a large operational maintenance and replacement requirement that must be addressed when the units return to their home stations. Continued operations have increased the operational tempo for a great deal of Army and Marine Corps equipment. In April 2005, DOD reported that Army equipment usage rates averaged two to eight times peacetime rates. Senior Marine Corps officials recently testified that Marine Corps usage rates for ground equipment in ongoing operations were four to nine times peacetime rates. Despite these high usage rates, deployed Army units have generally reported high levels of overall readiness and relatively high levels of equipment readiness. Deployed Marine Corps units, however, report more degraded levels of overall and equipment readiness.
Unit commanders in both services are able to subjectively upgrade their overall readiness ratings, although the Marine Corps has done this to a lesser extent. Absent such upgrades, overall readiness levels (particularly for the Army) would be significantly lower as a result of units’ low levels of equipment and supplies on hand.

To meet ongoing operational requirements, the Army and Marine Corps have developed and implemented initiatives to concentrate equipment in theater. When the Army initially developed its strategy of retaining equipment from redeploying units in theater, it did not envision this to be a long-term mechanism for managing equipment needs, but rather a short-term measure to conserve transportation assets and, more importantly, ensure that units were rapidly equipped. The Marine Corps, like the Army, developed a similar equipment management initiative. Additionally, the Army has developed a pool of equipment in theater to expedite the replacement of equipment damaged during these operations, referred to as theater sustainment stocks (TSS), which includes, for example, tanks, HMMWVs, Bradley Fighting Vehicles, and support vehicles. As of January 2006, TSS included an estimated 400 different types of vehicles and other equipment. The Marine Corps recently testified that it has developed a similar pool of ground equipment, known as Forward In-Stores, to replace damaged major equipment items.

To ensure that deployed units receive required amounts of equipment critical for their missions, the Army has designated certain major equipment items, such as add-on-armor vehicles, up-armored HMMWVs, selected communications and intelligence equipment, and other items deemed critical for OIF and OEF missions, as “theater provided equipment” (TPE). According to Army officials, these theater-specific items are being left in theater, based on operational decisions, because they are force protection items.
This equipment is taken from active, Guard, and Reserve forces when they return to the United States and is retained in theater to hand off to follow-on units. TPE includes equipment such as armored vehicles, individual soldier body armor, and equipment used to counter improvised explosive devices. As of November 2005, the Coalition Forces Land Component Commander estimated that there were approximately 300,000 equipment items in the TPE inventory in Iraq, including more than 26,000 vehicles. The Army’s TPE initiative began in late 2003, when the first Army units were directed to leave equipment in theater, then known as “stay behind equipment.” The Army, in November 2005, replaced the term “stay behind equipment” with the term TPE to better manage equipment accountability and also reflect items that were procured directly for the theater. Unlike other less intensely managed equipment items, TPE is transferred directly from units leaving the theater to deploying units taking their place. In most cases, these transfers take place at the unit’s forward station in Iraq. As a result, most of this equipment has been in heavy use in harsh desert and combat conditions since it was first left in theater by the units that originally deployed with the equipment. Because TPE is maintained at the unit level, this strategy has not provided the Army with an opportunity to periodically rotate TPE back to the United States for depot level maintenance. As discussed in a later section, keeping large amounts of equipment in theater for long periods of time without the opportunity for depot-level repair has created a number of related consequences. The Marine Corps, like the Army, has directed that equipment necessary for OIF and OEF operations remain in theater. 
Because many Marine Corps mission requirements have been exceeding units’ typical combat equipment allowances, Marine Corps commanders in theater have developed expanded equipment packages for deploying units that are designed to ensure that units have the required equipment for their missions. Deploying Marine Corps units fall in on and assume custody of equipment left by other units departing the theater. According to recent Marine Corps testimony, this initiative allows the Marine Corps to provide the best equipment possible to forces in theater while also reducing equipment rotation costs. Marine Corps officials estimated that the service had deployed about 30 percent of its ground equipment and 20 percent of its aviation assets in support of ongoing operations. However, the percentage of ground equipment deployed in support of operations has been as high as 40 percent, according to recent Marine Corps testimony. While this initiative has met equipment needs to date, it has caused some major equipment items to remain in constant operation, often in harsh desert conditions.

To address the effects of the harsh operating environments and the maintenance needs of rapidly deteriorating equipment that is being held in theater for extensive periods, the Army and Marine Corps have developed initiatives to increase the maintenance capacity in theater to provide near-depot-level repair capabilities. For example, the Army has developed a refurbishment facility for HMMWVs in Kuwait and a Stryker maintenance facility in Qatar to limit the repair time and resupply time of these assets. The HMMWV refurbishment facility in Kuwait began operations in July 2005 and is operated by a defense contractor. The primary objective of this refurbishment facility is to mitigate the effects of high mileage, heavy weights, high temperatures, and lack of sustained maintenance programs. The HMMWV refurbishment facility workload includes refurbishment maintenance, as well as modernization and upgrades.
As of December 2005, this facility had refurbished a total of 264 HMMWVs. Similarly, the Marine Corps created a limited aircraft depot maintenance capability in theater. Additionally, both the Army and Marine Corps have taken other steps to increase maintenance capacity and the availability of spare parts in theater. For example, at the time of our visit to Kuwait in January 2006, the Army was developing plans to increase the maintenance capacity at contractor maintenance facilities in Iraq. In addition, according to recent Army testimony, the Army Materiel Command (AMC) and the Defense Logistics Agency have taken steps to allow the rapid delivery of critical, low-density parts to the theater to maximize their availability and minimize transportation costs. The Marine Corps has also recently testified on efforts to leverage Army ground depot maintenance capabilities in theater and on the rotation plan it has developed for major equipment items.

Although the Army and Marine Corps are reporting high rates of equipment readiness for combat units and have developed and implemented plans to increase maintenance capabilities in theater, these actions carry a wide range of consequences. The services have made a risk-based decision to keep equipment in theater, to forego depot repairs, and to rely almost exclusively on in-theater repair capabilities to keep equipment mission capable. As a result, much of the equipment has not undergone higher-level depot maintenance since the start of operations in March 2003. While Army officials noted that not all equipment would undergo full depot-level maintenance, much of this equipment has incurred usage rates ranging from two to nine times the annual peacetime rate, meaning that some equipment may have accumulated as much as 27 years’ worth of use in the past three years.
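The "27 years of use" figure follows directly from the usage multiples cited above; a minimal sketch of the arithmetic (the multiples and deployment period are the report's figures, the helper function is ours):

```python
def equivalent_peacetime_years(usage_multiple, years_deployed):
    """Years of peacetime-equivalent wear accumulated by equipment
    operated at a multiple of its normal peacetime usage rate."""
    return usage_multiple * years_deployed

# Cited usage rates range from 2x to 9x the annual peacetime rate
print(equivalent_peacetime_years(2, 3))   # low end: 6 years of wear
print(equivalent_peacetime_years(9, 3))   # high end: 27 years of wear
```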
Continued usage at these rates without higher levels of maintenance increases the possibility that more equipment will require more extensive and expensive repairs in the future or may require replacement rather than repair. Because most equipment is staying in Iraq, there are other ramifications for the depots in the United States, such as the fact that depots are not operating at full capacity and that the scope of depot repair work is being reduced to meet operational needs. In addition, other maintenance issues are beginning to surface, which could have a variety of consequences, such as a decrease in near-term and long-term readiness of equipment or an increase in repair or replacement costs. These additional issues include questions regarding contractor performance for in-theater maintenance and the condition and availability of the Army’s TSS in Kuwait.

Many of the equipment items used in Southwest Asia are not receiving depot-level repair because they are being retained in theater or at home units, and the Army has scaled back on the scope of work performed at the depots. As a result, the condition of equipment items in theater will likely continue to worsen, and the equipment items will likely require more extensive repair or replacement when they eventually return to home stations. The Army retains equipment in theater to support ongoing operations. For example, as of November 2005, the Army had about 300,000 pieces of equipment retained in theater to support troop deployment rotations. Very little of this equipment is being returned from theater to depots in the United States for repair. Instead, redeploying units are expected to maintain their assigned equipment to a fully mission capable condition to facilitate the transfer of equipment to deploying units.
Since TPE is transferred directly from units leaving the theater to deploying units taking their place, usually at the units’ forward station in Iraq, the strategy has not allowed the equipment to receive periodic depot-level maintenance. Further, some units have commented that the TPE they received, while operable, requires higher levels of maintenance. “Fully mission capable” is, to some extent, a broad and malleable standard. Unit commanders have reported concerns with downtimes, availability of spare parts, repair and replacement of damage or combat losses, and the need for additional contractor support.

The Army is also reconfiguring its prepositioned equipment set and consequently is retaining some deploying units’ equipment in theater to support this Army Prepositioned Set, Kuwait (APS-5) reconstruction. For example, according to officials at the U.S. Army Forces Command, approximately 13,000 pieces of equipment from a redeploying unit were transferred to prepositioned stocks in Kuwait instead of returning to the United States with the unit. This included about 7,000 tactical wheeled vehicles. While this equipment is supposed to be reset to a 10/20 standard before being transferred to prepositioned equipment stocks, it is not being returned for depot overhaul. According to Army officials, this equipment was not returned for depot overhaul because of short timeframes; it was instead reset to a fully mission capable standard.

In some instances, Army units retain equipment to reconstitute their unit quickly rather than send this equipment to depot for overhaul. According to officials in the Office of the Secretary of Defense, warfighters are not readily willing to give up equipment, which contributes to fewer equipment items being returned to the depots for repair. Officials at the U.S.
Army Forces Command and at Army depots echoed this concern, stating that the availability of assets to induct into the depot repair program is limited by units’ need and desire to have equipment available for training. These officials added that units fear they will have to wait for replacement equipment because their unit priority is not high enough within the Army to ensure immediate replacement of the equipment items. To increase the number of equipment items going to depots from units, the Army created a list of equipment that it will now require units to automatically send to the Army depots for reset. The list is based on lessons learned from earlier experiences showing that damage and wear to certain types of equipment items used in Southwest Asia require more extensive depot-level repairs. For example, some equipment reset at the units’ home station was failing at higher than expected rates in theater during follow-on deployments. The list contains about 200 equipment items and has been updated several times, most recently in October 2005, to include items such as the Bradley Fighting Vehicle and the Abrams tank. According to the implementing memorandum, unit commanders are required to nominate a minimum of 25 percent of the listed equipment for return to depots for reset. The intent of the memorandum is to provide units the flexibility to maintain equipment for training while placing the maximum possible into reset programs; items retained for training are to be maintained in fully mission capable condition.

Because the services are retaining most equipment in theater, depots in the United States, tasked with complex maintenance work above and beyond in-theater maintenance reset, are not operating at full capacity. For example, DOD has estimated that Army depots can produce about 19 million direct labor hours of production on a single-shift basis—8 hours a day, 5 days a week.
Based on this measure, Army depots are currently utilized at about 110 percent of capacity. However, according to depot officials, the Army could double or triple depot capacity by adding more work shifts at the depots. Using this multiple-shift approach, the Army could produce up to approximately 57 million direct labor hours of production, or about 170 percent more than the current workload at Army depots. Army depots are currently using some second shifts; however, second shifts are primarily limited to manufacturing process shops such as cleaning, machining, sand-blasting, and painting, which depot officials say could easily be contracted out to increase throughput. According to depot officials, the factors that affect their decision to add more shifts and increase throughput are a stable commitment of funding throughout the year, the availability of retrograde equipment to repair, and the right mix of spare parts inventory to support production.

In addition, the Army has reduced the scope of work performed on some equipment items to less than a full overhaul. According to U.S. Army Tank and Automotive Command (TACOM) officials, the Army cannot afford to do a full overhaul of its ground equipment and has therefore made a risk-based decision to perform a reduced scope of work for equipment at the depots. To determine what the repair scope should be, the Army focused on major readiness components on the vehicles. For example, the engine on the Abrams tank is the component that fails most often and is the most expensive to replace. Consequently, this was the number one component included in the reduced scope of depot repair work. The less robust depot-level repair being performed speeds repair time and reduces expenditures on depot repair. For example, the reduced scope of work on the Abrams costs approximately $880,000, versus $1.4 million for a complete overhaul.
This scope does not include complete disassembly of the vehicle and identifies 33 items to be inspected and repaired only if necessary. During a full overhaul these items would be reconditioned to like-new condition and consequently would be less likely to fail after the depot visit, although it is unclear what the actual failure rates might be. According to TACOM officials, the reduced overhaul represents what the Army can afford to do.

The Marine Corps recently instituted an annual equipment rotation plan to begin returning equipment from Southwest Asia to the United States for reset. The first of this returning equipment was received in the first quarter of fiscal year 2006. Previously, the Marine Corps reset strategy was to overhaul equipment located in the United States and then provide that equipment to deploying units to fill requirements that could not be satisfied with the pool of mission capable equipment in theater. According to depot officials, the Marine Corps found it necessary to begin returning equipment from the theater because it is running short of available equipment in the United States for depot overhaul. However, depot officials told us that the equipment returning from theater is in much worse condition than they anticipated, so they may not be able to reset as many vehicles as planned with available reset funds.

While we did not review copies of the contracts, our review of other Army documents and discussions with Army officials identified two examples indicating that maintenance contractors are not meeting performance expectations. Army officials estimated that about 70 percent of equipment maintenance in theater above the unit level is being done by contractors. Some of these contractors have experienced a number of problems in the past few years, such as not being able to quickly acquire skilled maintenance personnel.
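The depot capacity and reduced-overhaul cost figures cited earlier can be checked with quick arithmetic; the sketch below simply restates the report's numbers (19 million single-shift hours, roughly 110 percent current utilization, an approximately 57 million hour multi-shift ceiling, and the two Abrams price points):

```python
single_shift_hours = 19_000_000               # estimated single-shift annual capacity
current_workload = single_shift_hours * 1.10  # "about 110 percent of capacity"
multi_shift_hours = 57_000_000                # approximate multi-shift ceiling

# Headroom over the current workload, consistent with "170 percent more"
headroom = multi_shift_hours / current_workload - 1
print(f"Unused capacity over current workload: {headroom:.0%}")

reduced_scope_cost = 880_000      # Abrams reduced-scope repair
full_overhaul_cost = 1_400_000    # Abrams complete overhaul
savings = full_overhaul_cost - reduced_scope_cost
print(f"Savings per Abrams: ${savings:,} "
      f"({savings / full_overhaul_cost:.0%} less than a full overhaul)")
```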
Specifically, we identified a number of maintenance issues regarding the HMMWV refurbishment facility in Kuwait and the reset of equipment in the prepositioned set of equipment in Kuwait. As of January 2006, according to Army maintenance officials in Kuwait, the contractor operating the HMMWV refurbishment facility in Kuwait had not been able to meet original production goals. In some cases, for example, the contractor's actual labor requirements for some vehicles exceeded the original estimates by almost 200 percent. This contributed to the facility falling more than 200 vehicles short of its output goal of refurbishing 300 vehicles per month since it became operational in July 2005. Also cited as contributing to the facility's poor performance were difficulties the contractor experienced in obtaining the required number of third-country national workers, mostly due to difficulties meeting host country visa requirements. Furthermore, according to Army maintenance officials in Kuwait, during the first 6 months the facility was operational, the contractor repeatedly failed to gather data on the resources expended on vehicle refurbishments. Without accurate information on the actual level of resources required to refurbish these vehicles, it will be more difficult for the contractor to estimate and plan for future requirements. Since the original contract was issued in April 2005, it has been modified multiple times, increasing the total funding requirement for the contract's first year of performance by more than 100 percent over the original amount of slightly more than $36 million. In addition to concerns about the contractor's management of the HMMWV refurbishment facility, theater commanders have expressed concerns about contractor performance in support of efforts to reset equipment for reconfiguring Army prepositioned stocks. The Army has contracted for the maintenance and management of Army prepositioned equipment in Kuwait.
The Army has recently noted several concerns about contractor performance in the areas of personnel and maintenance. For example, there is a shortage of contractor personnel, which contributes significantly to a decline in production. The contractor attributed the shortages to difficulties obtaining the required number of third-country national workers due to problems with host country visa requirements. The Army had to resort to acquiring additional vehicle mechanics and supply personnel from another contractor, an active duty Army unit, and an Army maintenance company. The Army also reports that the contractor does not conduct thorough technical inspections; conducting thorough inspections would significantly reduce the amount of time the equipment spends in maintenance shops. According to officials at the U.S. Army Field Support Command, equipment is often rejected because of the contractor's lack of attention to detail and inadequate maintenance inspection procedures. The condition of TSS is not sufficient to replace battle-damaged equipment without additional maintenance, which may delay the equipment's availability and strain in-theater maintenance providers. The purpose of TSS is to ensure that equipment is on hand to quickly fill unit requirements that may arise due to battle damage or other losses. The Army created this stockpile of equipment in Kuwait as a quick source of replacement equipment, as needed. As of January 2006, an AMC official responsible for TSS estimated that there were approximately 174,000 pieces of equipment in Kuwait and Qatar, representing 400 different types of equipment. TSS includes, for example, tanks, HMMWVs, Bradley Fighting Vehicles, and support vehicles. Expected loss rates are taken into consideration in setting TSS equipment levels.
When a requirement arises in Iraq, equipment items are taken from TSS, maintenance is performed in theater to ensure the equipment is in suitable condition, and the equipment is sent to units. Much of TSS requires additional maintenance before it can be reissued to operational units in Iraq and, in some cases, to restore it to fully mission capable condition. For example, as of January 2006, for a cross-section of several types of ground vehicles in TSS, less than 7 percent were fully mission capable. As such, TSS that requires additional maintenance before it can be reissued as replacement equipment increases requirements on the in-theater maintenance capability, which may affect other efforts to refurbish equipment in theater for prepositioned stocks. The Army Field Support Battalion at Camp Arifjan, Kuwait, is responsible for the management and reconstitution of prepositioned stocks and the management and repair of TSS in support of ongoing requirements, as well as a number of other logistics missions. The same contract workforce the Army Field Support Battalion employs for maintenance on prepositioned stocks is responsible for maintenance of TSS. The capacity of the Army Field Support Battalion to reset equipment being used to reconstitute prepositioned stocks in Kuwait is therefore directly affected by ongoing requirements to manage TSS and by other missions in support of deployed units in Iraq. The Army and Marine Corps will face a number of ongoing and longer-term challenges that will affect the timing and cost of equipment reset. As previously mentioned, current military operations are taking a toll on equipment, which will affect the cost of repairing equipment as well as the amount and cost of equipment that will need to be replaced.
In addition, other issues will also affect the timing and cost of reset: the Army's and Marine Corps's efforts to modularize and transform their forces, respectively; the reconstitution and reset of prepositioned equipment; the ongoing and longer-term efforts to replace equipment from active, National Guard, and Reserve units; and the potential transfer of U.S. military equipment to, and the potential for continuing logistical support of, Iraqi Security Forces. Furthermore, both the Army and Marine Corps will have to better align their funding and program strategies to sustain, modernize, or replace existing legacy equipment systems. Similarly, both services will face difficult choices among their many competing equipment programs. Finally, working with the Congress, both services will have to determine the best approaches for dealing with the issues created by the timing of depot maintenance supplemental appropriations. The Army's and Marine Corps's equipment reset programs will also have to compete with ongoing and planned force structure changes designed to provide more flexibility in deploying forces for ongoing and future operations. The Army began its modular force transformation in 2004 to restructure itself from a division-based force to a modular brigade-based force. The modular forces are designed to be stand-alone, self-sufficient units that are more rapidly deployable and better able to conduct joint and expeditionary operations than their larger division-based predecessors. Modular restructuring will require the Army to spend billions of dollars for new equipment over the next several years while continuing to reset and maintain equipment needed for ongoing operations. The Army estimates that the equipment costs alone will be about $41 billion.
In addition to creating modular units, the Army plans to continue to develop and fund the Future Combat System, which the Army recognizes is one of the greatest technology and integration challenges it has ever undertaken. The Marine Corps has also initiated force structure changes to provide flexibility in deploying troops, which will likely affect the Marine Corps's equipment reset strategies. Its force structure initiative is designed to reduce the effects of operational tempo on the force and reshape the Marine Corps to best support current and future operations. In 2004, the Marine Corps conducted a comprehensive force structure review to determine how to restructure itself to augment high-demand, low-density capabilities, reduce deployed tempo stress on the force, and shape the Marine Corps to best support current and future warfighting environments. Both the Army and Marine Corps drew heavily upon prepositioned stocks for operations in Iraq and Afghanistan. As we reported in September 2005, DOD faces some near-term operational risks should another large-scale conflict emerge, because it has drawn heavily on prepositioned stocks to support ongoing operations in Iraq. Although remaining stocks provide some residual capability, many of the programs face significant inventory shortfalls and, in some cases, maintenance problems. The focus of the Army's current prepositioned equipment reset program is building two brigade-sized equipment sets in Kuwait, as well as battalion-sized sets in Qatar and Afghanistan. Prepositioned stocks in Kuwait are not designated to serve as a pool of equipment available to support current missions. Equipment to form these sets is coming from a combination of equipment left in theater, equipment being transferred from U.S. depots, and equipment from units around the world. While a sizeable portion of the needed equipment is now in place, much of this equipment needs substantial repair.
Maintenance facilities are limited, as are covered storage facilities, and the lack of covered storage presents yet another challenge. Prepositioned stock, like TSS, is stored in the open desert environment, which in some cases may lead to further degradation. Harsh environmental conditions, such as sand and high humidity levels, accelerate equipment corrosion, which may not be apparent until extensive depot maintenance is performed. We have previously reported that outdoor storage aggravates corrosion and that the use of temporary shelters with climate control is cost-effective, has a high return on investment, reduces maintenance and inspections, and, as a result, increases equipment availability. The Marine Corps has also drawn a significant portion of its prepositioned stocks from five ships to support current operations. It is unclear when this equipment will be returned to prepositioned stocks because much of it will be left in Iraq to support the continuing deployment of Marine Corps forces there. Our September 2005 report also raised serious concerns about the future of the department's prepositioning programs, and we believe these concerns are still valid. No department-wide strategy exists to guide the programs, despite their importance to operational plans, as evidenced in OIF. Without an overarching strategy, the services have been making decisions that affect the future of the programs without an understanding of how the prepositioning programs will fit into an evolving defense strategy. The Army's decision to accelerate the creation of substantial combat capabilities in Southwest Asia is understandable because it could speed a buildup in the future, especially if large numbers of troops are withdrawn. However, the Army's decisions in other parts of its prepositioning programs are questionable.
For example, the Army recently decided to cut its afloat combat capability in half (from two brigade sets to one) by the end of fiscal year 2006 as a result of a budget cut from the Office of the Secretary of Defense. However, internal planning documents that we reviewed indicated that the Office of the Secretary of Defense had directed the termination of a planned third afloat set, not the cutting of an existing capability that would likely be critical to responding to another crisis should one occur. In the meantime, the Army is making plans to reduce its contractor workforce in Charleston, South Carolina, where maintenance on its afloat stocks is performed. At the same time, in Europe, the Army has a $55 million military construction project well underway at a site in Italy, but the Army's draft prepositioning strategy identifies no significant prepositioning mission in Europe. In our discussions with Army managers, they told us they are planning to use the Italian workforce to perform maintenance on equipment that ultimately will be placed afloat in 2013 or later. The Army and Marine Corps must also plan for the replacement of active, National Guard, and Reserve equipment left in theater to support ongoing operations. In late 2003, the Army began to direct redeploying Guard and Reserve units to leave their equipment in theater for use by deploying forces. As we have previously testified, DOD policy requires the Army to replace equipment transferred to it from the reserve component, including temporary withdrawals or loans in excess of 90 days. Yet the Army had neither created a mechanism in the early phases of the war to track Guard equipment left in theater nor prepared replacement plans for this equipment, because the practice of leaving equipment behind was intended to be a short-term measure. As of March 2006, only three replacement plans had been endorsed by the Secretary of Defense, all to replace Guard equipment, while 33 plans were in various stages of approval.
Lack of equipment at home stations affects the ability of active, Guard, and Reserve forces to conduct unit training and adversely affects the ability of Guard and Reserve forces to be compatible with active component units. As operations have continued, the amount of Guard equipment retained in theater has increased, which has further exacerbated the shortages in nondeployed Guard units. For example, when the North Carolina 30th Brigade Combat Team returned from its deployment to Iraq in 2005, it left behind 229 HMMWVs, about 73 percent of its pre-deployment inventory of those vehicles, for other units to use. Similarly, according to Guard officials, three Illinois Army National Guard units were required to leave almost all of their HMMWVs, about 130, in Iraq when they returned from deployment. As a result, the units could not conduct training to maintain the proficiency they acquired while overseas or to train new recruits. In all, the Guard reports that 14 military police companies left over 600 HMMWVs and other armored trucks, which are expected to remain in theater for the duration of operations and which, according to Army officials, would be required regardless of whether a Guard, Reserve, or active unit provided them. Lack of equipment for training also adversely affects Marine Corps units. For example, by leaving certain pieces of equipment in theater to support deployed units and drawing on equipment from elsewhere to meet theater needs, the Marine Corps has experienced home station equipment shortfalls among both active and reserve components. According to a senior Marine Corps official, these shortfalls may have detrimental effects on the ability of the Marine Corps to train and to respond to any contingencies. In addition, the Army has acknowledged that the benefits of prepositioned stocks are diminished when units are not trained on equipment that matches that present in the stocks.
The Army's and Marine Corps's strategy of retaining and maintaining significant numbers of low-density, high-demand equipment items in theater will affect plans to replace equipment left in theater by the Guard and Reserve. We have previously reported that, to meet the demand for certain types of equipment for continuing operations, the Army has required Army National Guard units returning from overseas deployments to leave behind many items for use by follow-on forces. According to the National Guard and Reserve Equipment Report for Fiscal Year 2007, the Army National Guard has been directed to transfer more than 75,000 pieces of equipment, valued at $1.76 billion, to the Army to support OIF and OEF. However, the Army does not have a complete accounting of these items or a plan to replace the equipment, as DOD policy requires. The Army expects that these items will eventually be returned to the Guard, although the Guard does not know whether or when the items will be returned. We have also previously reported that, like the Army National Guard, Army Reserve units have been required to leave certain equipment items, such as vehicles with added armor, in theater for continuing use by other forces. This further reduces the equipment available for training and limits the Army Reserve's ability to prepare units for mobilizations in the near term. The Army is working with both the Army National Guard and the Army Reserve to develop memoranda of agreement on how equipment left in Iraq will be replaced. Until these plans are completed and replacement equipment is provided, the Army Reserve and Army National Guard will face continuing equipment shortages while challenged to train and prepare for future missions.
According to Marine Corps testimony, the policy of retaining equipment in theater to meet the needs of deployed forces has led to some home station equipment shortfalls among both active and reserve units, which, if allowed to continue, could have a direct impact on the ability of Marine forces to train for known and contingent deployments. Furthermore, according to the National Guard and Reserve Equipment Report for Fiscal Year 2007, more than 1,800 major Marine Corps equipment items, valued at $94.3 million, have been destroyed, and an additional 2,300 require depot maintenance. Future requirements to transfer equipment and provide logistical support to the Iraqi Security Forces, and the extent of required U.S. support, are unclear. In its April 2005 report to Congress, DOD stated that the primary constraint on future maintenance processes is the lack of equipment available for reset and recovery activities. DOD noted that a large amount of equipment is being held in theater as a rotational pool for deploying units and will remain in theater for the long term. DOD also noted that when hostilities cease, some of the equipment being held in theater may be turned over to Iraqi Security Forces, if authorized by law. In addition, some equipment will be scrapped, and the rest will be assessed for maintenance. Military service officials have recently testified that some types of equipment may be left for Iraqi Security Forces and cited concerns with supporting that equipment in the future. Until a determination is made of what equipment will be given to the Iraqi Security Forces, it will be difficult to determine what will be available for reset. As the U.S. military draws down its combat forces, any continued logistical support using equipment such as wheeled vehicles and helicopters will have to come from the Army or Marine Corps and will have to be factored into plans for reset and reconstitution.
We have previously reported that, for certain equipment items, the Army and Marine Corps have not developed complete sustainment, modernization, and replacement strategies or identified funding needs for all priority equipment items, such as the Army's Bradley Fighting Vehicle and the Marine Corps's CH-46E Sea Knight Helicopter. Given that funding over the next several years to sustain, modernize, and replace aging equipment will compete with other DOD priorities, such as current operations, force structure changes, and replacement system acquisitions, the lack of comprehensive equipment strategies may limit the Army's and Marine Corps's abilities to secure required funds. Furthermore, until the services develop these plans, Congress will be unable to ensure that DOD's budget decisions address deficiencies related to key military equipment. We first reported in 2003 that the condition of 25 selected military equipment items varied from very good to very poor and that, although the services had program strategies for sustaining, modernizing, or replacing most of the items reviewed, there were gaps in some of those strategies. Since that report, DOD's continued operations in Iraq and Afghanistan have resulted in additional wear and tear on military equipment. Given continued congressional interest in the wear and tear being placed on military equipment and the funding needed to reconstitute the equipment, we issued a follow-up report in October 2005 in which we assessed the condition, program strategies, and funding plans for 30 military equipment items, including 18 items from our December 2003 report. With respect to these 30 selected equipment items, we found that the military services had not fully identified near- and long-term program strategies and funding plans to ensure that all of these items can meet requirements.
For many of the equipment items included in our assessment, average fleet-wide readiness rates had declined, generally due to the high pace of recent operations or the advanced age or complexity of the systems. Although selected equipment items have been able to meet wartime requirements, the high pace of recent operations appears to be taking a toll on selected items, and fleet-wide mission capable rates have been below service targets, particularly in the Army and Marine Corps. For example, the Army's Bradley Fighting Vehicle, Abrams Tank, and AH-64A/D Apache Helicopter and the Marine Corps's Light Armored Vehicle and Sea Knight Helicopter were assessed as warranting additional attention by DOD or the military services because the high pace of operations has increased utilization beyond planned usage. Furthermore, according to officials, the full extent of the equipment items' degradation will not be known until a complete inspection of the deployed equipment is performed. Marine Corps legacy aviation equipment in use faces special readiness challenges due to increased usage rates coupled with the absence of new production of that equipment. Existing equipment must be maintained and managed to provide the warfighter with needed equipment until next-generation equipment is constructed. We have recently reported severe problems, or issues that warrant immediate attention by DOD or the military services, with the near-term program strategies and funding plans for the Marine Corps CH-46E Sea Knight Helicopter program due to anticipated parts shortages and maintenance issues, as well as potential problems with the readiness of Marine Corps M1A1 tanks, Light Armored Vehicles, and CH-53E helicopters stemming from the high pace of operations and increased utilization beyond planned usage. In recent congressional testimony, Marine Corps officials discussed problems with a lack of active production lines for the CH-46 and CH-53 helicopters.
Given that no replacement aircraft is available, as these platforms are lost in combat they cannot be replaced. The Marine Corps has requested funds in the fiscal year 2006 supplemental to bring CH-53E helicopters out of desert storage and refurbish them to replace those destroyed during current operations. The Army and Marine Corps will need to make difficult choices among competing equipment programs, such as Army modularity and equipment reset, when considering future equipment budget requests. While the services are working to refine overall requirements, the total requirements and costs are unclear and raise a number of questions as to how the services will afford them. The growing requirement for future equipment repair, replacement, and reset will only exacerbate the problem. For example, based on our preliminary observations, the Army's cost estimate to create modular units has increased from $28 billion in 2004 to a current estimate of $52.5 billion. Of that $52.5 billion, $41 billion, or 78 percent, has been allocated to equipment. However, our preliminary observations also indicate that it is not clear how the Army distinguishes between costs associated with modularity and costs for resetting equipment used during operations. According to recent Army information, the Army's requirement for equipment reset is more than $13 billion for fiscal year 2006. This includes funds to repair equipment in theater or at the depots, replace battle losses, and recapitalize equipment. In fiscal year 2006 alone, the Army estimated it would need to reset about 6,000 combat vehicles, 30,000 wheeled vehicles, 615 aircraft, and 85,000 ground support items.
In addition, according to recent Marine Corps testimony, accurately forecasting the total cost to reset the force depends on calculations of what percentage of the current inventory in theater will be repairable or will need to be replaced, how much equipment may be left behind for Iraqi forces, and other determinations dependent on circumstances and conditions that cannot be easily predicted. The Army has also indicated that additional supplemental funding will be required for equipment reset for at least 2 years after hostilities cease. The Army and Marine Corps must consider these affordability challenges in the context of future fiscal constraints. The Army depots received their fiscal year 2005 supplemental funds in the June/July 2005 time frame, at which time they began executing their reset workload. Some of these funds were later pulled back by AMC. According to AMC officials, the funds were pulled back from the depots for three reasons: (1) the depots could not complete the reset workload until several months after the end of fiscal year 2005, (2) the funds were needed to meet other Army-wide requirements, and (3) the Army wanted to avoid potential congressional cuts to its fiscal year 2006 budget for depot carryover workload. In total, AMC pulled back $193 million, or about 10 percent of fiscal year 2005 reset funds for Army depot maintenance. According to AMC officials, the command did not use these funds for contract depot maintenance but rather gave them back to Army headquarters to meet other unfunded fiscal year 2005 operation and maintenance requirements. According to Army and Marine Corps depot officials, receipt of funds too late in the fiscal year does not allow timely execution of major item workload within the current fiscal year.
Given the time it takes to preposition parts and materials (at best, 60 days), plus the repair cycle time to complete repairs (approximately another 60 to 90 days for major items), little end item production can be achieved at a depot within the fiscal year the funding is received. Receiving the supplemental late in the year of execution reduced the amount of planned depot maintenance work for fiscal year 2005, and depot officials anticipate that this condition may repeat itself in fiscal year 2006. For example, one Army depot reported that its planned fiscal year 2006 workload of 27 million direct labor hours will likely be reduced to 21 million hours, a reduction of 6 million hours, or 22 percent, of planned direct labor hours. Depot officials commented that the timing of the supplemental appropriations compounds the problems depots have in efficiently managing their maintenance workload. The depots face the challenge of managing changes in funded requirements during the year of execution, obtaining the equipment they have programmed for overhaul, and ensuring that the right spare parts are purchased in advance of equipment overhauls. For example, in preparing its fiscal year 2006 supplemental budget request, AMC included the repair of HMMWVs at its depots, and the depots planned accordingly to support this requirement. However, since the supplemental was submitted to Congress, the Army has requested that Congress shift $480 million in HMMWV reset funds to new procurement. This change has reduced the planned depot workload by almost 6,000 HMMWVs, creating disruptions in the depots' workforce structure plans. Before the reduction, Red River Army Depot had anticipated hiring additional employees to perform the HMMWV and Bradley workloads, and Letterkenny Army Depot recently reduced its contract workforce by 150 employees due to declining work on the HMMWV and the Patriot missile system.
Prior to the Global War on Terror, the Department of Defense, the Army, and the Marine Corps faced significant challenges in sustaining and modernizing legacy equipment as well as in funding the procurement of replacement weapons systems. With the advent and continuation of military operations in Afghanistan and Iraq over the past several years, the challenges of sustaining and modernizing legacy weapons systems and of procuring new and replacement weapons systems have been significantly exacerbated. The harsh operating environment and high operational tempo, coupled with the operational requirement to keep equipment in theater without significant depot repair, could lead to higher than anticipated reset costs and to more replacement than repair of equipment. Although the precise dollar estimate for the reset of Army and Marine Corps equipment will not be known until operations in Iraq and Afghanistan cease, it will likely cost billions of dollars to repair and replace the equipment used. As funding requirements increase over time, the Army and Marine Corps will be forced to make difficult choices and trade-offs among their many competing equipment programs. While the services are working to refine overall requirements, the total requirements and costs are unclear and raise a number of questions as to how the services will afford them. Until the services are able to firm up these requirements and cost estimates, neither the Secretary of Defense nor the Congress will be in a sound position to weigh the trade-offs and risks. Mr. Chairman, this concludes my statement. I would be happy to answer any questions. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The United States is engaged in an unconventional war, not a war against military forces of one country, but an irregular war against terrorist cells with global networks. Operations Iraqi Freedom and Enduring Freedom are sustained military operations, which are taking a toll on the condition and readiness of military equipment that, in some cases, is more than 20 years old. The Army and Marine Corps will likely incur large expenditures in the future to reset (repair or replace) a significant amount of equipment when hostilities cease. The Army has requested about $13 billion in its fiscal year 2006 supplemental budget request for equipment reset. Today's testimony addresses (1) the environment, pace of operations, and operational requirements in Southwest Asia, and their affects on the Army's and Marine Corps's equipping and maintenance strategies; (2) equipment maintenance consequences created by these equipping and maintenance strategies; and (3) challenges affecting the timing and cost of Army and Marine Corps equipment reset. GAO's observations are based on equipment-related GAO reports issued in fiscal years 2004 through 2006, as well as ongoing related work. In response to the harsh operating environments in Iraq and Afghanistan, and the unanticipated and prolonged length and pace of sustained operations, the Army and Marine Corps have developed and implemented several initiatives to equip its forces and maintain the extensive amounts of equipment in theater. Environmental factors such as heat, sand, and dust have taken their toll on sensitive components. In addition, operating equipment at a pace well in excess of peacetime operations is generating a large operational maintenance and replacement requirement that must be addressed when units return to their home stations. 
To meet ongoing operational requirements, the Army and Marine Corps have developed pools of equipment in theater to expedite the replacement of equipment damaged during operations and directed that equipment necessary for OIF and OEF operations remain in theater. In response, the Army and Marine Corps have developed several initiatives to increase the maintenance capacity in theater to be able to provide near-depot level repair capabilities. Although the Army and Marine Corps are reporting high rates of equipment readiness and have developed and implemented plans to increase the maintenance capabilities in theater, these actions have a wide range of consequences. Many of the equipment items used in Southwest Asia are not receiving depot-level repair because equipment items are being retained in theater or at home units and the Army has scaled back on the scope of work performed at the depots. As a result, the condition of equipment items in theater will likely continue to worsen and the equipment items will likely require more extensive repair or replacement when they eventually return to home stations. The Army and Marine Corps will face a number of ongoing and long-term challenges that will affect the timing and cost of equipment reset, such as Army and Marine Corps transformation initiatives, reset of prepositioned equipment, efforts to replace equipment left overseas from the active, National Guard, and Reserve units, as well as the potential transfer of U.S. military equipment and the potential for continuing logistical support to Iraqi Security Forces. Also, both the Marine Corps and Army will have to better align their funding requests with the related program strategies to sustain, modernize, or replace existing legacy equipment systems. Finally, both services will have to make difficult choices and trade-offs when it comes to their many competing equipment programs. 
While the services are working to refine overall requirements, the total requirements and costs are unclear and raise a number of questions as to how the services will afford them. Until the services are able to firm up these requirements and cost estimates, neither the Secretary of Defense nor the Congress will be in a sound position to weigh the trade-offs and risks.
We developed the conceptual framework for fiscal exposures in 2003 to facilitate the discussion of long-term costs and uncertainties that present risks for the federal budget in the future. Fiscal exposures vary widely as to source, extent of the government's legal commitment, and magnitude. Figure 1 illustrates the range of the legal commitment. Fiscal exposures may be explicit in that the government is legally required to fund the commitment, or implicit in that an exposure arises not from a legal commitment, but from current policy, past practices, or other factors that may create the expectation for future spending. Some exposures present elements of both explicit and implicit exposures. Insurance programs are a key example. If an event occurs, some payment is legally required; this represents an explicit exposure. There may be an expectation that the government will provide assistance beyond the program's total available resources or budget authority (e.g., flood insurance payments made in response to a major disaster); this expectation represents an implicit exposure.
10GAO-03-213.
Social insurance programs, such as Social Security and Medicare, protect households or individuals against certain social risks, including loss of income. In contrast to other federal insurance programs, these programs are generally viewed as transfer payments, which are benefits provided without requiring the recipient to provide current or future goods or services of equivalent value in return. In this respect, they are different from pension and other employee compensation that is provided in exchange for services. Spending for these programs is projected to grow from 10 percent of GDP in 2010 to 13.6 percent of GDP by 2030, absorbing an increasing share of federal revenue and reducing future budget flexibility.
Should the amounts of funding needed to cover future benefits exceed the amounts available in corresponding trust funds, there may be an expectation that the government will use federal funds to pay the difference, even though there would be no legal commitment on behalf of the government to do so. Significant information on the estimated future spending for Social Security and Medicare is available; in addition to our long-term fiscal simulations, the Congressional Budget Office (CBO) and the administration regularly publish long-term fiscal simulations, which illustrate the expected growth in spending for these programs. Future spending for other implicit exposures can be more difficult to estimate. For example, the frequency and magnitude of declared disasters have increased in recent decades, resulting in billions of dollars of supplemental appropriations. FEMA has obligated over $80 billion in federal assistance for disasters declared during fiscal years 2004 through 2011. The unpredictable nature of such events makes estimating future spending difficult.
12GAO, The Federal Government's Long-Term Fiscal Outlook: Spring 2013 Update, GAO-13-481SP (Washington, D.C.: Apr. 11, 2013). These figures also include spending for other federal health programs delivered through the states—the Children's Health Insurance Program and the subsidies available to assist individuals to purchase insurance coverage through the American Health Benefit Exchanges.
13See Congressional Budget Office, The 2012 Long-Term Budget Outlook (Washington, D.C.: Jun. 5, 2012) and Office of Management and Budget, Analytical Perspectives, Budget of the U.S. Government, Fiscal Year 2014 (Washington, D.C.: Apr. 10, 2013).
Although the government is legally committed to pay losses if an event occurs in the future, the generally cash-based measures used in the budget do not reflect the magnitude of the government's legal commitment of future resources at the time decisions are being made.
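The gap between cash-based recording and full-cost recognition can be illustrated with a small numeric sketch. All figures below are hypothetical, chosen only to show the mechanics: a benefit commitment made today that pays out far in the future records zero cost in the decision year under cash-based budgeting, while a present-value measure records the cost up front.

```python
# Illustrative sketch (hypothetical amounts and discount rate): a benefit
# earned today that pays $10,000 per year for 5 years, starting 10 years
# from now. Cash-based budgeting records $0 in the decision year; a
# full-cost measure records the present value of the commitment up front.

def present_value(cash_flows, discount_rate):
    """Discount a list of (years_from_now, amount) pairs back to today."""
    return sum(amount / (1 + discount_rate) ** years
               for years, amount in cash_flows)

# Payments of $10,000 in years 10 through 14.
commitment = [(year, 10_000) for year in range(10, 15)]

cash_basis_cost_today = 0                        # no outlays occur this year
full_cost_today = present_value(commitment, 0.03)

print(f"Cash-basis cost recorded in decision year: ${cash_basis_cost_today:,.0f}")
print(f"Full (present-value) cost of the commitment: ${full_cost_today:,.0f}")
```

The full-cost figure (roughly $35,000 under these assumptions) is what decision makers would see at the time the commitment is made, rather than a decade of zeros followed by the outlays.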
If the full cost of a spending decision is included in the budget when the decision is made, then decision makers can consider the total costs when setting priorities, comparing the cost of a program with its benefits, or assessing the cost of one method of reaching a specified goal with another. Decision makers' ability to make informed choices would be improved by increased transparency regarding the impact of policy decisions on the expected path of spending and revenue. Expected future spending arising from some exposures is recognized in the financial statements, which report costs on an accrual basis. Generally, under financial accounting standards, liabilities are recorded for probable and measurable outflows of resources arising from past transactions and events. Also, under accounting standards, the cost of claims for federal insurance programs that are deemed probable and measurable is considered a liability; reasonably possible losses are disclosed in the notes to the financial statements. Therefore, some measures reported in the government's financial statements can be useful indicators of future spending arising from certain fiscal exposures.
14See appendix I for information on the extent and estimated magnitude of these fiscal exposures.
Insurance programs can also create an implicit exposure for the government to the extent there is an expectation that the government would step in and cover losses beyond the program's reserves. For some exposures, the extent of the government's legal commitment has changed over time. For example, the fiscal exposure created by Fannie Mae and Freddie Mac changed in recent years as the government responded to the financial crisis. Prior to 2008, securities issued by Fannie Mae and Freddie Mac were explicitly not guaranteed by the federal government and the government had no legal responsibility to provide support to these GSEs.
However, in response to the financial crisis, the government placed them into conservatorship and agreed to provide temporary assistance, creating a new explicit exposure. The budget does reflect some or all of the cost of the government’s legal commitment for some exposures. For example, for crop insurance, budget authority is provided to cover the cost of premium subsidies, claims payments, and administrative expense subsidies. For military and civilian pensions, agencies are required to make contributions out of current budget authority to cover some of the costs of civilian and military pension benefits earned by current employees. These contributions reduce the funds available to each agency to fund other activities. However, the recognition is incomplete because the contributions are not set to cover the full cost. In addition, we found that the budget provides incomplete information or potentially misleading signals about today’s legal commitments for some other exposures as well. Our selected fiscal exposures generally demonstrate one or more of the following characteristics: The full cost of commitments incurred today is not recorded in the budget until the corresponding outlays are made in the future. For example, for veterans compensation and civilian post-retirement health benefits, the budget reflects benefit payments in a given year, instead of capturing the cost of benefits earned today that will need to be paid in the future. 16The Federal Housing Finance Regulatory Reform Act of 2008 established the Federal Housing Finance Agency (FHFA), which was created to enhance authority over these GSEs and provide the Secretary of the Treasury with certain authorities intended to ensure the financial stability of these GSEs. Treasury entered into a Senior Preferred Stock Purchase Agreement (hereafter referred to as the agreements) with each GSE. 
The agreements, which have no expiration date, provide that Treasury will disburse funds to these GSEs if at the end of any quarter the FHFA determines that the liabilities of either GSE exceed its assets.
17Budget authority is the authority to incur obligations and pay expenses.
For pension insurance, many years may pass between the extension of insurance coverage and the occurrence of an insured event, and the payment of claims may extend over several decades, so annual cash flows do not capture the cost of the commitment. Rather, the cost of pension insurance can be measured by the portion of risk assumed by the government that is not charged to the beneficiary—a "missing premium." This subsidy is not recognized in the budget when the insurance is extended. The current budget treatment results in both incomplete information with regard to any missing premium, and in potentially misleading information about the program's financial condition. For example, at the time budget decisions were being made for fiscal year 2013, the budget showed a positive budget estimate (i.e., net revenues) for PBGC of about $1.6 billion, suggesting that the program would help decrease the federal budget deficit that year. However, the financial reports available at the same time showed the program's net position worsened by about $8 billion from the year before, principally as the result of incurred losses that are not yet reflected in the budget. These estimates provide significantly different pictures of the program's health and its potential draw on future budget resources. While PBGC can legally only pay claims up to the amount of resources available in its revolving funds, in the face of major pension fund failures there might be an expectation that the government would pass legislation allowing PBGC to cover some or all of the gap. Dedicated resources are estimated to be insufficient to pay for the associated commitment. Some programs set aside amounts out of current budget authority to cover benefits earned today but payable in the future.
For example, agency contributions and Treasury payments to cover military pension benefits accrued during the year are deposited in a budgetary trust fund; however, the trust fund balance is significantly less than expected future benefits payments, as measured by the accrued liability (see appendix I, figure 9). As such, the trust fund's balance can provide a signaling function of future spending but does not necessarily represent the full cost of those legal commitments. Furthermore, some insurance programs do not have sufficient dedicated resources to cover expected costs. For example, the NFIP historically insured many properties at a subsidized rate. In addition, overall flood insurance premiums were designed to permit the program to cover losses and expenses in an "average historical loss year," but not to cover high-loss years. Instead, the program was given the authority to borrow from the Treasury in such years, with the expectation that low-loss years would allow the program to repay any borrowed funds. With the exception of years involving catastrophes, annual losses and receipts have generally evened out since 1978, but NFIP has been unable to repay the amounts borrowed in response to catastrophic events, such as Hurricane Katrina in 2005. This funding structure is one of the reasons the program was added to our High Risk List in 2006. Increasing amounts of future spending may be required or expected based both on recent trends and events and on the government's response to those events. For example, weather-related events have cost the nation tens of billions of dollars in damages over the past decade. The exposure from weather-related events increases with changes in population density as well as increased frequency and severity of the events. This is one reason we added the federal government's fiscal exposure created by climate change to our 2013 High Risk List.
Through federal programs like flood and crop insurance, these events pose significant financial risks for the federal government.
20GAO, Federal Trust and Other Earmarked Funds: Answers to Frequently Asked Questions, GAO-01-199SP (Washington, D.C.: Jan. 2001).
21The Biggert-Waters Flood Insurance Reform Act of 2012, Pub. L. No. 112-141, requires that the subsidized premiums be eliminated and actuarially sound premiums phased in.
22GAO, GAO's High Risk Program, GAO-06-497T (Washington, D.C.: March 15, 2006).
23GAO-13-283.
Given the variation in fiscal exposures, when making budget decisions, a uniform, across-the-board approach to make fiscal exposures more apparent may not be appropriate. Several factors need to be taken into account in selecting an approach to better recognize fiscal exposures in the budget: the extent of the government's legal commitment; the length of time until the resulting payment is made; and the extent to which the magnitude of the exposure can be reasonably estimated. We previously recommended to OMB and Congress two general approaches: reporting supplemental information in budget documents to increase attention to fiscal exposures, and incorporating costs into primary budget data to allow for better comparisons among programs. Supplemental reporting involves including information about the financial state of programs in addition to that which is available in primary budget data. Improved supplemental reporting in budget documents on fiscal exposures would make information more accessible to policymakers without introducing additional complexity and uncertainty directly into the budget. With a supplemental reporting approach, the current basis of reporting primary budget data would not be changed. Instead, the supplemental information would be used along with budget data to identify important signals that could be used to monitor fiscal exposures.
For example, the Appendix Volume in the President's budget includes a balance sheet for some federal insurance programs that shows the program's net position (assets minus liabilities), providing valuable information about the program's effect on future budget resources. For the Federal Crop Insurance budget account, a supplemental table includes the cost of the premium subsidy provided by the government. In 2003, we recommended that OMB create an annual report on fiscal exposures providing a concise list and description of such exposures, cost estimates (where possible), and an assessment of methodologies and data used to produce cost estimates for such exposures. We further recommended that OMB report estimated costs of certain exposures as a new budget concept—"exposure level"—as a notational item in the Program and Financing schedule of the President's budget, though the recommendation was not implemented. In some cases, improving supplemental reporting in key budget documents may simply be a matter of expanding program analysis and making existing analytical work more readily available. For example, CBO regularly prepares 10-year budget projections for several programs in our review. While this time period is helpful for many programs, longer-term projections would be helpful for those programs—such as federal employee pensions and retiree health—where the time between the legal commitment and payments can extend over several decades. Further, significant changes in expected spending (or differences between actual and estimated spending) should prompt analysis, including consideration of the nature and source of the change. Identifying meaningful measures that would provide signals about the changing nature or magnitude of the exposure might offer another means to focus policymaker attention on fiscal exposures. For example, trust fund balances can serve as a signaling function for decision makers about underlying fiscal imbalances in covered programs.
A gap between the projected fund balance and expected spending can signal that the fund, either by design or because of changes in circumstances, is collecting insufficient monies to finance future payments. Also, the repeated use of borrowing authority could be an indicator that the magnitude of the exposure has changed. Specifically, NFIP was expected to borrow in "above-average claim" years and repay that borrowing in "below-average claim" years. In years with extraordinarily high claims resulting from catastrophes like Superstorm Sandy and Hurricane Katrina, NFIP has used the borrowing authority repeatedly. The program currently has an outstanding debt to the Treasury of $24 billion. The program's inability to cover claims in excess of premiums led to the passage of legislation in 2012 that altered the design of the program, including a key provision that NFIP raise premium rates to reflect true flood risk and make the program more financially stable. These signaling devices can provide policymakers with information regarding the full costs involved in budget decisions and enable those concerned about exposures to raise questions and challenges in the budget debate and to prompt action.
24GAO-01-199SP.
25Borrowing authority permits an agency to borrow money, usually from the Treasury, then obligate against amounts borrowed.
26Biggert-Waters Flood Insurance Reform Act of 2012, Pub. L. No. 112-141, §§ 100205, 100207.
A second potential approach to improving recognition of fiscal exposures is incorporating full costs into primary budget data. This method would allow for better comparisons among competing priorities and between different methods for achieving program goals. In certain cases, including credit programs and government employee pensions, accrual accounting methods are already used to some extent in budgeting.
For example, credit reform in 1990 permitted more accurate comparisons among direct loans, loan guarantees, and other types of tools for achieving a program's objectives. Some other countries, such as Canada and Iceland, also use accrual budgeting selectively to increase recognition of future cash requirements related to service provided during the year; officials from these countries generally said that accrual budgeting contributed to improved resource allocation and program management decisions in these specific areas. We have recommended that Congress expand the use of accrual budgeting to other budget program areas where it would enhance upfront control, such as insurance and environmental liabilities. While it has not been expanded to these areas, proposals addressing broader budget process reform have included accrual budgeting. As seen in the implementation of credit reform, including estimates of full costs in primary data may also lead to the development of better estimates of future spending, though it would require the development of methodologies appropriate to the nature of the exposure. Because the cost to the government varies according to the specific program's design and characteristics, different types of cost estimates could be incorporated into primary budget data in order to better recognize the government's fiscal exposure. For example: Normal cost: For retirement benefit programs, such as pension or retiree health, the normal cost is the actuarial present value of the benefits to be paid in the future that are attributable to employees' current year of service. As such, it is one measure of an accrual cost for a particular year.
27GAO-08-206. Other countries we reviewed—Australia, New Zealand, the Netherlands, and the United Kingdom—use accrual budgeting more extensively to support broader efforts to improve the efficiency and performance of the public sector.
However, none of these other countries used accrual budgeting for social insurance programs. Risk-assumed cost estimates: For insurance programs, key information is whether premiums will be sufficient to pay for covered losses under existing policies. The portion of risk assumed by the government that is not charged to the beneficiary—the "missing premium" or subsidy cost—is essentially the difference between some measure of the full premium and the actual premium charged to the insured. This cost measure has been used since 1991 for credit programs. All estimates of future spending introduce some degree of additional uncertainty into the budget, and the ease of implementation differs. Some measures may already be used widely in other forms of reporting, whereas others are relatively new concepts for federal budget reporting and may involve developing new models and technical skills. Despite any implementation challenges, approximate estimates of the full cost to government may be preferable to some current measures that are incomplete or potentially misleading. Further, a requirement to produce estimates for budget reporting may help improve the quality of estimates by drawing more attention to them. Although using estimates may introduce uncertainty in primary budget data, it would result in earlier cost recognition in the budget. This would help reinforce up-front controls in the budget process.
28The risk assumed can be difficult to measure for many federal activities, but is most clearly identifiable in federal insurance programs.
Improved recognition of the wide range of fiscal exposures facing the federal government would be an important first step toward enhancing control and oversight over federal resources and can aid in monitoring the financial condition of programs over the longer term. Incorporating measures of the full cost into primary budget data would provide enhanced control over future spending.
This can both improve the nation's fiscal condition and enhance the budgetary flexibility for responding to unexpected or emerging challenges. We provided a draft of this report to the Office of Management and Budget and the agencies responsible for administering the programs we reviewed. Those agencies are the Risk Management Agency (RMA) in the Department of Agriculture; the Federal Housing Finance Agency; Department of Defense (DOD); Department of Homeland Security (DHS); Department of Veterans Affairs (VA); the Office of Personnel Management; and the Pension Benefit Guaranty Corporation (PBGC). RMA, DOD, PBGC, and the Federal Emergency Management Agency in DHS provided technical comments that were incorporated in the report as appropriate. The Chief of Staff of VA provided comments that are discussed below and reprinted in appendix II. VA generally agreed with our findings and stated that current budget reporting provides adequate information for determining the resources necessary to pay benefits for current veterans and survivors. While we agree that the budget records the outlays associated with benefits paid in a given year, we maintain that current budget reporting provides incomplete information about the costs incurred for benefits earned during the year that will be paid in the future. We are sending this report to relevant agencies and congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or irvings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Farming has always been vulnerable to risks from natural causes such as drought, excessive moisture, hail, wind, frost, insects, and disease.
Farmers are also exposed to financial losses from price risk. The federal government helps mitigate the impact of these risks on farm income through the federal crop insurance program. Crop insurance is sold and serviced by 17 approved private companies under agreements with the federal government. Under the provisions of these agreements, the private companies both bear a percentage of any loss and reap a percentage of any gain associated with the policies over the course of the year. The federal government subsidizes the premiums paid by farmers and acts as primary reinsurer for the private companies that underwrite the policies. The federal government also makes payments to insurance companies that are intended to cover administrative expenses for selling and servicing crop insurance policies. Federal crop insurance is not intended to be self-financing through premiums; a permanent, indefinite appropriation covers any payments that are needed to cover losses and other expenses. The government's legal commitment to pay crop insurance policyholder claims when losses occur makes the program an explicit exposure. The amount of federal spending resulting from this exposure depends on the extent of losses incurred by farmers. The federal crop insurance program's costs increased sharply in recent years. An indicator of the government's exposure is the increase in appropriations drawn from the general fund. Figure 3 shows outlays and resources available to the program, the combination of the general revenue funds, and collections (primarily premiums) received. Outlays, which reflect claims payments and other associated costs, doubled from $3 billion in 2001 to $6 billion in 2008, and then doubled again in 2009 to $12 billion. Premiums received do not fully cover the program's costs; general revenue funds make up more than half of the program's available resources and are used in part to provide premium and administrative expense subsidies.
Therefore, the full cost of the crop insurance subsidies and administrative costs is reflected in the budget. Anticipating the future budget exposure presented by crop insurance is complicated by the potential impact of climate change and changes in program design. Our March 2007 report assessing the financial risks to the Federal Crop Insurance Corporation (FCIC) found that its exposure to weather-related losses had grown substantially. At that time, we found that little had been done to develop the kind of information needed to understand FCIC's long-term exposure to climate change and that FCIC had not analyzed the potential impacts of an increase in the frequency or the severity of weather-related events on its operations. Since then, a study examining the potential impact of climate change on the federal crop insurance program was prepared for the Risk Management Agency (RMA), which administers the program. According to RMA officials, the overall impact of climate change is not clear, given the uncertainty of various climate change scenarios and the potential adaptive responses by growers and the crop insurance program. Reflecting these uncertainties and the significance of this issue, we included a discussion of the crop insurance program in the climate-change-related high-risk designation in our 2013 High Risk List. The Federal Emergency Management Agency (FEMA) within the Department of Homeland Security operates the National Flood Insurance Program (NFIP) to provide flood insurance to residential and commercial property owners. Flood insurance is funded primarily through premium collections. The premiums were designed to permit the program to cover losses and expenses in an "average historical loss year," but not to cover high-loss years. Instead, the program has statutory authority to borrow from the Treasury in such years with the expectation that low-loss years would allow the program to repay any borrowed funds.
Since 1978, annual losses have generally evened out with receipts in normal years, but borrowing authority has been accessed and increased in recent years in response to catastrophic events such as Hurricane Katrina in 2005. NFIP presents a range of exposures to the federal government. The government's legal commitment to pay flood insurance policyholder claims when losses occur makes the program an explicit exposure. The amount of federal spending resulting from this exposure depends, in part, on the frequency of weather-related events and their severity. If total claims exceed amounts available from premium collections, NFIP may access available borrowing authority to cover excess claim amounts. To the extent there is an expectation that the federal government would cover claims exceeding the amount that NFIP has been authorized to borrow from the Treasury, NFIP represents an implicit exposure. NFIP received increased borrowing authority in 2013 to cover loss claims related to Superstorm Sandy. Including the $16.8 billion NFIP borrowed to cover claims primarily from Katrina, the program's reported total outstanding debt had increased to $24 billion as of March 31, 2013. One measure of the government's exposure is the extent to which claims exceed available resources. One indicator of the exposure is NFIP's net position—the residual difference between the program's total assets and liabilities. Figure 4 also shows how the program is reflected in primary budget data. NFIP is reflected on a cash basis: premium collections are recorded as receipts in the year they are received and claims payments are recorded as outlays in the year they are made. The balance sheet for NFIP is also included in the budget as supplementary information. However, the budget has not reflected the subsidy cost—or "missing premium"—embedded in NFIP's program design.
The subsidy can be measured as the portion of risk assumed by the government that is not charged to the beneficiary. NFIP was first added to our High Risk List in 2006 because of concerns about financial solvency. The program was highlighted again in 2013 when we added the fiscal exposure of climate change to the High Risk List.
GAO-13-283.
AECOM, The Impact of Climate Change and Population Growth on the National Flood Insurance Program Through 2100 (June 2013).
6The Biggert-Waters Flood Insurance Reform Act of 2012, Pub. L. No. 112-141.
Congress recently repealed section 4005(c) of the Employee Retirement Income Security Act, which provided authority for the PBGC's federal line of credit up to $100 million. See the Moving Ahead for Progress in the 21st Century Act (MAP-21), Pub. L. No. 112-141, § 40234(a). One measure of the magnitude of the government's fiscal exposure is PBGC's net position, which represents the residual difference between PBGC's total assets and liabilities. As figure 5 shows, PBGC's net position generally declined from a $7.8 billion surplus in 2001 to a $34 billion deficit in fiscal year 2012. However, PBGC estimated its financial risk from potential termination of underfunded plans sponsored by financially weak firms at almost $300 billion. Although PBGC's annual receipts currently exceed its outlays, its overall financial position is weaker than would be indicated by these annual receipts and outlays. PBGC is an example of a program for which cash-based budgeting provides potentially misleading information. Premiums are shown as receipts when they come into the Treasury and payments are shown as outlays when they are made; the time lag between these events means that an increase in PBGC premiums would appear in the budget as an increase in revenues. The budget does not reflect the "missing premium"—the portion of a full risk-based premium not charged to the insured, which could be a signal of the expected cost of the program.
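The arithmetic behind the "missing premium" can be sketched with hypothetical figures. Everything below is an illustrative assumption—the claim probability, claim size, administrative load, and charged premium are invented for the example and do not reflect PBGC's or NFIP's actual rate-setting:

```python
# Illustrative sketch (hypothetical figures): the "missing premium" is the
# portion of a full risk-based premium that is not charged to the insured.

def risk_based_premium(claim_probability, average_claim, admin_load=0.10):
    """Expected annual loss plus an administrative load (both assumed)."""
    expected_loss = claim_probability * average_claim
    return expected_loss * (1 + admin_load)

full_premium = risk_based_premium(claim_probability=0.02, average_claim=250_000)
charged_premium = 3_000          # statutorily set rate in this example
missing_premium = full_premium - charged_premium

print(f"Full risk-based premium: ${full_premium:,.0f}")
print(f"Premium actually charged: ${charged_premium:,.0f}")
print(f"Missing premium (implicit subsidy): ${missing_premium:,.0f}")
```

Under cash-based budgeting, only the $3,000 charged premium appears as a receipt; the subsidy embedded in the gap between the full and charged premium is never recorded when the coverage is extended.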
In 2013, we reported that plan terminations and insolvencies threaten PBGC’s ability to pay pension guarantees for retirees. In the event that the agency were to exhaust all of its assets and become insolvent, the agency would only have premium revenue to rely on to make its benefit payments. For further discussion, see GAO, Private Pensions: Timely Action Needed to Address Impending Multiemployer Plan Insolvencies, GAO-13-240, (Washington, D.C.: Mar. 28, 2013). 11GAO, Pension Benefit Guaranty Corporation: Redesigned Premium Structure Could Better Align Rates With Risk From Plan Sponsors, GAO-13-58, (Washington, D.C.: Nov. 7, 2012) and Pension Benefit Guaranty Corporation, Excellence in Customer Service: Annual Report 2012, (Washington, D.C.: Nov.14, 2012). 12GAO-13-58. Congress established Fannie Mae and Freddie Mac as government- sponsored enterprises (GSEs) in the housing finance market to support the supply of mortgage loans; securities issued by these GSEs were not backed by the full faith and credit of the government. In 2008, in response to the financial crisis, Treasury entered into Senior Preferred Stock Purchase Agreements (the agreements) with Fannie Mae and Freddie Mac to preserve the assets and mitigate systemic risks that contributed to market instability. Under the agreements, Treasury would purchase these GSEs’ senior preferred stock and make funds available on a quarterly basis, to be recovered by redemption of the stock or by other means. While the initial funding commitment for each enterprise was capped at $100 billion, Treasury increased the cap to $200 billion per GSE in May 2009 to maintain confidence in these GSEs. In 2012, the caps were replaced with a formulaic cap allowing these GSEs to make quarterly draws based upon their net position, or if the liabilities of either GSE, individually, exceed its respective assets. The purchase agreements with Fannie Mae and Freddie Mac illustrate how an exposure can change over time. 
Prior to 2008, these GSEs represented an implicit fiscal exposure to the government because the securities they issued were explicitly not guaranteed by the full faith and credit of the U.S. government. The 2008 stock purchase agreements, while temporary, created a new explicit exposure for the federal government to provide immediate financial support to Fannie Mae and Freddie Mac. At the end of any quarter in which either Fannie Mae’s or Freddie Mac’s balance sheet reflects that total liabilities exceed total assets, the GSEs have 15 business days to request funds under the terms of the agreements. Treasury then has 60 days to provide the funds, as necessary, up to the maximum amount of the funding commitment. The federal government is not obligated to provide additional assistance beyond the scope of the agreements, but the government’s response may influence expectations of future support. This expectation represents an implicit exposure. One measure of the magnitude of the exposure to the government is the $9 billion liability reported in the 2012 Financial Report of the United States Government (Financial Report), reflecting Treasury’s best estimate at the time of likely draws over the remaining duration of the agreements. The liability was significantly reduced in 2012 (see figure 6). This reduction was due in part to a revision to the agreements that is expected to reduce the amount of future draws and in part to the improved housing market, which contributed positively to the GSEs’ financial results. Another possible measure of the exposure is the remaining draw authority available to these GSEs, which was about $258 billion as of January 1, 2013. This represents the maximum amount of future spending under the current agreements.
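The quarterly draw mechanics described above can be sketched as follows. This is a hypothetical illustration of the cap logic, not Treasury’s actual accounting; the function name and dollar amounts are assumptions:

```python
# Hypothetical sketch of a quarterly draw under the agreements: a GSE whose
# liabilities exceed its assets at quarter-end may request a draw covering
# the shortfall, limited by the remaining funding commitment.

def quarterly_draw(assets, liabilities, remaining_authority):
    shortfall = max(0.0, liabilities - assets)
    return min(shortfall, remaining_authority)

# Illustrative figures in billions of dollars.
assert quarterly_draw(assets=3200.0, liabilities=3205.0,
                      remaining_authority=258.0) == 5.0  # draw covers deficit
assert quarterly_draw(assets=3200.0, liabilities=3150.0,
                      remaining_authority=258.0) == 0.0  # positive net worth
```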
In considering the exposure under either measure, any amounts received related to the federal government’s investment in the GSEs’ stock—which had a reported fair value of $109.3 billion in the 2012 Financial Report—would reduce the federal government’s exposure. Figure 6 also shows the Treasury payments to these GSEs and the draw authority balance at the end of the year as recorded in primary budget data. The Administration considers Fannie Mae and Freddie Mac to be outside of the budget; therefore, payments to them are recorded as outlays. Since 2008, Fannie Mae and Freddie Mac have drawn a total of $187.5 billion, and draws have decreased each year—from a high of $95.6 billion in fiscal year 2009 to $18.5 billion in fiscal year 2012. The draw authority balance remaining at the end of each fiscal year is reflected as an unobligated balance. Under the terms of the agreements, the cost of the government’s commitment is offset by dividend payments from these GSEs to Treasury, which are recorded in the federal budget. The future structures of these two GSEs and the roles they will serve in the mortgage market must still be determined and will affect future decisions about budget treatment.

Pub. L. No. 101-508. Under the Federal Credit Reform Act of 1990, the credit subsidy cost of direct loans and loan guarantees is the net present value of the estimated long-term cost of such programs to the government at the time the credit is provided, less administrative expenses. The act was intended to improve disclosures about the risks associated with government direct loan and guarantee programs and assist Congress in making budget decisions about such programs.
Most federal civilian employees are covered by one of two pension plans, depending largely on when they began their federal service: those hired before 1984 are covered by the Civil Service Retirement System (CSRS), while most employees who entered federal service after 1983 are covered by the Federal Employees Retirement System (FERS). FERS has a small defined benefit portion—the FERS annuity—which supplements Social Security, and a voluntary defined contribution portion, the Thrift Savings Plan (TSP). It is the defined benefit portion of FERS, together with CSRS, that constitutes the government’s exposure arising from the two civilian pension plans. Both agency and employee contributions toward CSRS and the FERS defined benefit annuity are paid to the Civil Service Retirement and Disability Fund (CSRDF), the trust fund in the budget dedicated to funding civilian pension benefits. The CSRDF also receives some payments from the Treasury’s general fund account (largely intended to amortize and pay interest on the previously accumulated CSRS liability), and the CSRDF is credited with interest (investment income) on the Treasury securities it holds. According to the 2012 Financial Report, other significant civilian pension plans include those of the Coast Guard, Foreign Service, Tennessee Valley Authority, U.S. Postal Service, and the Department of Health and Human Services. The exposure is expected to decline as more CSRS employees retire and the vast majority of the civilian workforce is covered by FERS. Agency contributions for FERS employees are based on annual calculations of the amount required to fully fund future benefits (normal cost) less the employee contribution rate. Employee contributions are calculated differently for FERS than for CSRS: FERS employees essentially pay the difference between the CSRS employee contribution rate of 7 percent and the Social Security employee payroll tax rate, currently 6.2 percent, for an employee contribution rate of 0.8 percent.
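The contribution arithmetic above can be sketched as follows. The 7 percent, 6.2 percent, and 0.8 percent rates come from the text; the normal-cost figure is a purely illustrative assumption:

```python
# FERS employee contribution: the difference between the CSRS employee
# rate and the Social Security employee payroll tax rate, per the text.
CSRS_EMPLOYEE_RATE = 7.0      # percent of pay
SOCIAL_SECURITY_RATE = 6.2    # percent of pay

fers_employee_rate = round(CSRS_EMPLOYEE_RATE - SOCIAL_SECURITY_RATE, 1)
assert fers_employee_rate == 0.8

# The agency pays the annually computed normal cost less the employee
# share; the 12.7 percent normal cost below is an illustrative assumption.
illustrative_normal_cost = 12.7
agency_rate = round(illustrative_normal_cost - fers_employee_rate, 1)
assert agency_rate == 11.9
```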
However, in 2012, the employee contribution was increased to 3.1 percent of pay for employees hired after December 31, 2012, effectively reducing the agency contribution. Benefit payments for both CSRS and the FERS defined benefit annuity are made from the CSRDF, which has permanent, indefinite budget authority to pay benefits. Agencies pre-fund employee pensions by transferring some of their budget authority to the CSRDF, where it remains available to pay pensions to retired workers. The Treasury credits the CSRDF with budget authority in the form of special-issue securities that are backed by the full faith and credit of the U.S. government and earn interest equal to the average rate on the Treasury’s outstanding long-term debt. These securities are then redeemed to make payments to retirees and survivors. Employee contributions to the CSRDF are recorded as receipts and count as revenue to the Treasury. Payments to retirees are recorded as outlays and affect the government-wide deficit. Agency contributions are made from agencies’ appropriations, so payments toward the normal cost are visible to the individual agency, but they are not reflected in the government-wide deficit because agency payments to the CSRDF are intragovernmental—that is, they are recorded as outlays by one agency and receipts by the trust fund. Treasury also makes annual payments to the CSRDF from the general fund to amortize the previously accumulated unfunded liability and to fund the difference between the CSRS employee and agency contributions and the amount that would be required to fully fund future CSRS benefits. Since no cash actually leaves the government, neither agency contributions nor Treasury payments to the CSRDF affect the government-wide deficit. While the long-term exposure to the federal budget arising from civilian pension benefits is large, it is on a path to decline over time.
This trend reflects both the transition from CSRS to FERS in the mid-1980s and the change to a defined contribution design and funding structure in FERS. Civilian annuitants are generally eligible to continue receiving subsidized health benefits like those they received over the course of their working years. The Federal Employees Health Benefits program, implemented in 1960, is operated through two revolving trust funds: the Employees Health Benefits Fund and the Retired Employees Health Benefits Fund. The two funds are reported jointly as the Federal Employees Health Benefits (FEHB) account in the federal budget. Under the FEHB program, the federal government and the employee or annuitant share the cost of the monthly premium. The government’s legal commitment to pay a share of the monthly premium for eligible retirees makes civilian post-retirement health benefits an explicit exposure. The magnitude of the exposure can be estimated by the accrued liability for civilian post-retirement health benefits, which at the end of fiscal year 2012 was estimated to be $328 billion. The liability is an estimate of the government’s future cost of providing post-retirement health benefits to current employees and retirees. Figure 8 shows the exposure generally grew through 2010 before declining in 2011 and 2012. As is the case with health care costs in general, the liability depends on many factors, including assumptions about utilization and health care costs far into the future, and as a result can be difficult to estimate. These factors in turn can vary with changes in covered services and participants’ choice of plans. After age 65, retirees’ FEHB benefits are coordinated with Medicare benefits. The government’s share for annuitants and current employees is 72 percent of the weighted average of the premiums for all participating plans, with a cap of 75 percent of the total premium.
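The premium-sharing rule noted above (the government pays 72 percent of the weighted average premium across participating plans, capped at 75 percent of any one plan’s total premium) can be sketched as follows; the monthly premium amounts are purely illustrative:

```python
# Sketch of the FEHB premium-sharing formula described in the text.
# Premium figures below are illustrative monthly amounts, not OPM data.

def government_share(plan_premium, weighted_average_premium):
    return min(0.72 * weighted_average_premium, 0.75 * plan_premium)

avg = 600.0  # illustrative weighted average monthly premium

# For a typical plan, the government pays 72% of the weighted average...
assert government_share(plan_premium=650.0, weighted_average_premium=avg) == 432.0
# ...but for an inexpensive plan, the 75%-of-premium cap binds instead.
assert government_share(plan_premium=500.0, weighted_average_premium=avg) == 375.0
```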
The FEHB Program is classified as a mandatory program and is funded through a permanent indefinite appropriation. The budget reflects payments made in the current year for both active employees and retirees. Although active and retired employees pay premiums, the premiums do not cover the full cost. Agencies make payments to the Employees Health Benefits Fund at OPM for their share of the FEHB premiums for current employees. However, with some exceptions, the budget does not reflect the estimated costs of future payments associated with the federal government’s active employees’ post-retirement health benefits. Federal health care spending as a whole has been growing faster than the economy and is expected to continue to do so. As such, it will be important to find ways to minimize costs while maintaining quality for civilian employees and retirees. The U.S. Patent and Trademark Office is an exception; since 2005 it has made accrual payments to the FEHB fund associated with its active employees’ post-retirement health benefits. In addition, since 2006 the U.S. Postal Service has been required to make scheduled prefunding contributions to the Postal Service Retiree Health Benefits Fund in the budget. However, the Postal Service did not make the required payment of $11.1 billion due in fiscal year 2012. See GAO, U.S. Postal Service: Proposed Health Plan Could Improve Financial Condition, but Impact on Medicare and Other Issues Should Be Weighed Before Approval, GAO-13-658 (Washington, D.C.: July 18, 2013) and U.S. Postal Service: Status, Financial Outlook, and Alternative Approaches to Fund Retiree Health Benefits, GAO-13-112 (Washington, D.C.: Dec. 4, 2012).

Members of the military are eligible for a defined benefit, noncontributory pension after 20 years of active service; active duty personnel become eligible for retirement by completing 20 years of service, regardless of age.
The military retirement system provides inflation-protected monthly compensation and other benefits after an active or reserve military career. The system does not provide for gradual vesting; service personnel who separate prior to completing the minimum 20 years of service generally receive no retirement benefits. Members of the reserves may retire after 20 qualifying years of creditable service, but reserve retired pay is not payable until age 60, with some exceptions. Since 2001, service members also have been eligible to participate in the federal Thrift Savings Plan, the defined contribution plan available to civilian employees, although generally without any matching contributions from the government. The military retirement system applies to members of the Army, Navy, Marine Corps, and Air Force. Most of the provisions also apply to retirement systems for members of the Coast Guard (administered by the Department of Homeland Security), officers of the Public Health Service (administered by the Department of Health and Human Services), and officers of the National Oceanic and Atmospheric Administration (administered by the Department of Commerce). Pub. L. No. 98-94 provided for accrual funding of the military retirement system and for the establishment of a Department of Defense Military Retirement Fund in 1985. Treasury general fund payments also cover concurrent receipt payments to those receiving both military retired pay and disability compensation paid by the Department of Veterans Affairs (VA). The government’s legal commitment to pay benefits to those retirees who reach 20 years of active service makes military pension benefits an explicit exposure. From a government-wide perspective, the magnitude of the exposure can be estimated by the accrued liability for military pensions, which at the end of fiscal year 2012 was estimated to be $1.5 trillion, up from $708 billion in 2001 (see figure 9). Concurrent receipt refers to the simultaneous receipt of military retired pay and VA disability compensation.
Prior to 2004, the law required that military retired pay be reduced dollar-for-dollar by the amount of any VA disability compensation received (i.e., an offset). The 2004 National Defense Authorization Act (Pub. L. No. 108-136) authorized concurrent receipt of both amounts without a required offset for certain military retirees. Figure 9 also shows the Military Retirement Fund’s outlays and receipts as reported in primary budget data. Outlays reflect benefit payments to current retirees, which totaled $49 billion in 2012, up from $34 billion in 2001. Receipts to the trust fund capture the intragovernmental contributions from the services to fund future benefits earned today, Treasury general fund payments, and investment income on the Treasury securities the fund holds. The Military Retirement Fund had assets of $376 billion as of September 30, 2012. Beginning in 1985, military pension costs have been partially visible to DOD, since it makes contributions to cover the costs as they accrue, but the normal cost is not reflected in the unified budget deficit because the payments are intragovernmental—that is, they are recorded as outlays by one agency and receipts by the trust fund. Since no cash leaves the government, there is no effect on the government-wide deficit. DOD’s military compensation system, including pension benefits, is an important tool to attract and retain the number and quality of active duty servicemembers it needs to fulfill its mission. Comprehensive information about the total cost of compensation, including benefits earned today that will be paid in the future, is important for future decisions about military compensation in a constrained fiscal environment. TRICARE is DOD’s managed health care system for active duty and retired uniformed service members and their families. TRICARE consists of multiple plan options, a number of which cover active duty personnel, their dependents, and retirees under age 65.
Prior to 2001, TRICARE beneficiaries would lose their TRICARE coverage when they reached age 65, and Medicare—the federal health insurance program that provides medical benefits to elderly and disabled Americans—would become their primary health insurer. However, in 2001, the Congress expanded TRICARE by authorizing the continued provision of TRICARE benefits after age 65. The program is known as TRICARE for Life (TFL). TFL provides supplementary health care coverage for TRICARE beneficiaries who are eligible for Medicare (generally those aged 65 or older) and pays for many services that Medicare only partially covers. Non-Medicare-eligible military retirees are not eligible for TFL and are covered by one of the other TRICARE programs. The uniformed services include the U.S. Army, U.S. Air Force, U.S. Navy, U.S. Marine Corps, U.S. Coast Guard, the Commissioned Corps of the Public Health Service, and the Commissioned Corps of the National Oceanic and Atmospheric Administration. DOD provides health care benefits to its non-Medicare-eligible beneficiary population through several TRICARE options. The National Defense Authorization Act for Fiscal Year 2001, Pub. L. No. 106-398, established the Department of Defense Medicare-Eligible Retiree Health Care Fund, administered by the Secretary of the Treasury; the fund was started in reaction to rapidly rising health care costs. While TRICARE beneficiaries over age 65 do not have to pay for their TFL coverage, they must be eligible for Medicare Part A and elect to carry Medicare Part B. Retired TRICARE beneficiaries are required to pay premiums for Medicare Part B, which covers certain physician, outpatient hospital, laboratory, and other services. TFL covers out-of-pocket costs incurred by beneficiaries for care beyond the amount paid for under Medicare.
The Medicare-Eligible Retiree Health Care Fund (MERHCF) is financed through annual transfers from DOD for the future health care benefits earned by active military personnel in that year (i.e., normal cost contributions); payments from the Treasury general fund toward the previously accumulated unfunded liability for past service; and investment income from the Treasury securities the fund holds. The government’s legal commitment to provide post-retirement medical benefits to eligible retirees makes the benefits an explicit exposure. From a government-wide perspective, the magnitude of the exposure can be estimated by the accrued liability for military post-retirement health benefits, which at the end of fiscal year 2012 was estimated to be $833 billion. The retiree health liability for military personnel, shown in figure 10 as the accrued liability, represents the estimated total cost of benefits earned to date for both non-Medicare-eligible and Medicare-eligible retirees, as well as a portion of future benefits for those in active military service. Figure 10 also shows the balance of the MERHCF, which had assets of $176 billion at the end of fiscal year 2012. Beginning in 2003, current costs for Medicare-eligible military retirement benefits have been visible to DOD, since it makes contributions to cover the costs for those benefits as they are earned by current servicemembers, and the uniformed services reflect these normal cost contributions in their budgets. The Treasury also deposits funds toward the unfunded liability. Since no cash actually leaves the government from these contributions to the MERHCF, there is no effect on the government-wide cash deficit. Assets accumulating in the MERHCF are used only to pay benefits for Medicare-eligible retirees, and there is an effect on the government-wide deficit as benefits are paid from the MERHCF.
In contrast, the cost of pre-Medicare-eligible post-retirement health benefits is not reflected in the uniformed services’ budget data as these benefits are earned. Rather, the cost of pre-Medicare-eligible health care is paid for on a cash basis from DOD’s annual Operations and Maintenance appropriation. We have highlighted a range of long-standing issues surrounding DOD’s Military Health System. As health care consumes an increasingly larger portion of the defense budget, DOD leadership has recognized the need to reduce duplication and overhead to operate the most efficient health system possible. In 2012, we reported that DOD had identified initiatives aimed at slowing its rising health care costs, but that its ability to implement and monitor these initiatives and achieve related cost savings was limited. See GAO, Defense Health Care: Applying Key Management Practices Should Help Achieve Efficiencies within the Military Health System, GAO-12-224 (Washington, D.C.: Apr. 12, 2012).

The federal government provides benefits to eligible veterans and their survivors to compensate for the loss of potential earnings due to service-connected disability or death. Entitlement to compensation depends on the veteran’s disabilities having been incurred in, or aggravated during, active military service; death while on duty; or death resulting from service-connected disabilities. These benefits can be in place of (or in combination with) DOD military retired pay. The government’s legal commitment to provide compensation to eligible veterans makes veterans compensation payments an explicit exposure. The magnitude of the exposure can be estimated by the accrued liability, which at the end of fiscal year 2012 was estimated to be $1.8 trillion.
This measure reflects the present value of expected future payments to current veterans already receiving compensation payments, to veterans who are not currently receiving compensation but will in the future, and to a portion of those in active military service assumed by VA to become eligible for compensation in the future. Figure 11 illustrates the growth in this exposure, which more than doubled between 2001 and 2012—from $692 billion to $1.8 trillion—and represents the fastest rate of growth among the compensation programs we examined in this report. The annual growth of the accrued liability, which averaged about $97 billion from fiscal years 2001 to 2012, reflects increases in the number of veterans as a result of wars and other conflicts, the aging of the veteran population, and changes in the benefits and services provided to veterans. Burial benefits are also provided and include a burial and plot or interment allowance payable for a veteran who, at the time of death, is qualified to receive compensation or pension, or whose death occurred in a VA facility. Prior to 2004, the law required that military retired pay be reduced dollar-for-dollar by the amount of any VA disability compensation received (i.e., an offset). The 2004 National Defense Authorization Act (Pub. L. No. 108-136) authorized concurrent receipt, or the simultaneous receipt of military retired pay and VA disability compensation, for certain military retirees. The Veterans Compensation and Pensions account annually receives no-year funds through regular appropriations. Figure 11 also shows the program’s annual outlays as reported in primary budget data. Outlays, which increased from $21 billion in fiscal year 2001 to $55 billion in fiscal year 2012, reflect payments made to current veterans. However, the budget does not reflect the estimated costs of future payments earned by current service.
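The accrued liability above is a present value of expected future payments. A minimal sketch of that discounting idea, with purely illustrative cash flows and discount rate, not VA’s actuarial inputs:

```python
# Present value: expected future payments discounted back to today.
# Cash flows (billions of dollars) and the 4% rate are illustrative.

def present_value(payments, rate):
    """Discount a list of expected annual payments (year 1, 2, ...) to today."""
    return sum(p / (1 + rate) ** t for t, p in enumerate(payments, start=1))

# Three illustrative $50B annual payments discounted at 4 percent.
pv = present_value([50.0, 50.0, 50.0], rate=0.04)
assert round(pv, 1) == 138.8  # worth less than the $150B of nominal payments
```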
The VA’s Veterans Compensation and Pensions account funds one of the largest federal disability programs. In the years ahead, enrollment in the VA’s disability compensation program could increase given the conflicts in Iraq and Afghanistan and as more Vietnam veterans—a significant portion of the total veteran population—further age into disability-prone years. These trends directly affect the extent and magnitude of the government’s fiscal exposure arising from veterans compensation. Further, since 2003, veterans compensation and other federal disability programs have been on our High Risk List, due in part to challenges agencies face in keeping their criteria for evaluating disability and determining compensation consistent with advances in medicine, technology, and changes in the labor market and society. See GAO, VA Disability Compensation: Action Needed to Address Hurdles Facing Program Modernization, GAO-12-846 (Washington, D.C.: Sept. 10, 2012), and GAO-13-283. In addition to the contacts named above, Melissa Wolf (Assistant Director), Margaret McKenna Adams, Dean Campbell, Darryl Chang, Jeremy Choi, Robert F. Dacey, Felicia Lopez, Donna Miller, Susan Offutt, Frank Todisco, and Katherine Wulff made key contributions to this report.

The federal government's long-term fiscal imbalances are driven on the spending side by the effects of an aging population and rising health care costs on Social Security and major federal health programs. However, GAO identified a variety of other fiscal exposures--responsibilities, programs, and activities that may legally commit or create the expectation for future federal spending--that vary as to source, extent of the government's legal commitment, and magnitude. A more complete understanding of these other fiscal exposures can help policymakers anticipate changes in future spending and enhance control and oversight over federal resources. GAO was asked to provide information on risks facing the federal budget.
This report (1) examines selected programs that create a fiscal exposure, including the extent and estimated magnitude of the government's legal commitment; and (2) assesses how fiscal exposures could be better recognized in the budget. Based on its review of budget and financial data, GAO selected nine programs, including federal employee benefit programs, insurance programs, and the stock purchase agreements with Fannie Mae and Freddie Mac, and drew upon previous work to discuss potential approaches for improving budgetary attention to fiscal exposures. Fiscal exposures may be explicit in that the federal government is legally required to pay for the commitment; alternatively, they may be implicit in that the exposure arises from expectations based on current policy or past practices. The nine programs GAO examined illustrate the range of federal fiscal exposures (see figure) and how they can change over time. Also, some programs may have elements of both explicit and implicit exposure. Federal insurance programs, for example, fall across the spectrum: if an event occurs, some payment is legally required--an explicit exposure. However, there may be an expectation that the government will provide assistance beyond the amount legally required--that is an implicit exposure. Prior to 2008, securities issued by Fannie Mae or Freddie Mac were explicitly not backed by the U.S. government. However, in response to the financial crisis, the government's agreement to provide temporary assistance to cover their losses up to a set amount created a new explicit exposure. The amount of future spending arising from federal fiscal exposures varies in the degree to which it is known and can be measured. For some exposures GAO found that the budget provided incomplete information or potentially misleading signals regarding the full cost of the commitments made today.
A uniform across-the-board approach to make fiscal exposures more apparent when making budget decisions may not be appropriate given their varying characteristics. Several factors need to be taken into account in selecting an approach to better recognize fiscal exposures in the budget: the extent of the government's legal commitment; the length of time until the resulting payment is made; and the extent to which the magnitude of the exposure can be reasonably estimated. Expanding the availability and use of supplemental information, including measures that can signal significant changes in the magnitude of fiscal exposures, would be an important first step to enhancing oversight over federal resources and can aid in monitoring the financial condition of programs over the longer term. Incorporating measures of the full cost into primary budget data would provide enhanced control over future spending, which can help both improve the nation's fiscal condition and enhance budgetary flexibility. GAO is not making new recommendations, but this analysis provides additional support for past recommendations to improve budget recognition of fiscal exposures by, for example, expanding the availability and use of information on expected future spending arising from commitments made today.
Trade adjustment assistance programs provide federal assistance to dislocated workers, firms, and communities. Economic adjustment assistance programs are also available for distressed communities, regardless of what has caused the adverse economic condition. The TAA and NAFTA-TAA programs assist U.S. workers displaced by foreign trade and increased imports. The current TAA program was created by the Trade Expansion Act of 1962 (P.L. 87-794). It was substantially modified by the Trade Act of 1974 (P.L. 93-618) and the North American Free Trade Agreement Implementation Act of 1993 (P.L. 103-182). The TAA program covers workers who lose their jobs because of imports from any country, while the NAFTA-TAA program covers only workers who have lost their jobs because of increased imports from or shift of production to Mexico or Canada. These programs provide benefits such as trade readjustment allowances (extended income support beyond normal unemployment insurance benefits), services such as job training, and allowances for job search and relocation. The Department of Labor administers both programs and makes determinations regarding worker group eligibility. Groups of workers or their representatives can petition the Department of Labor for certification of eligibility to apply for services or benefits under the program. The Department then conducts an investigation to determine whether increased imports or a shift in production to Canada or Mexico have contributed to their loss of employment. Once a TAA or NAFTA-TAA petition is approved, covered workers must meet several tests regarding the timing of their layoff and their length of employment with the trade-impacted firm. Workers can be certified as eligible for both programs but can claim benefits from only one. The states play a major role by providing program services and benefits, such as job training and reemployment services.
The TAA and NAFTA-TAA programs together received about $407 million in fiscal year 2001 funding. Generally, TAA and NAFTA-TAA income assistance for a dislocated worker is equal to the weekly benefits of the state’s unemployment insurance program and may be paid for up to 52 weeks after the initial 26 weeks (30 weeks in Massachusetts and Washington State) of unemployment insurance benefits have been exhausted. Thus, eligible dislocated workers may receive up to 78 weeks (18 months) of cash payments if enrolled in approved training. Dislocated workers also are eligible for up to 104 weeks (2 years) of training. Therefore, workers do not necessarily receive income assistance during their entire period of training. The process by which workers receive assistance is often triggered when a company gives 60 days’ notice of plant closure or layoffs. Generally, a state Rapid Response Team, composed of employment service officials, meets with plant managers to obtain information about the prospective layoff or closure and the profile of the affected workers. If appropriate, the Rapid Response Team will suggest that the company apply for TAA or NAFTA-TAA certification, so that workers can receive these benefits after separation. The Rapid Response Team generally returns to the plant after the layoff or closure announcement to give workers information about available services. In some states, training providers may join the team. After separation, the dislocated workers can receive job placement assistance, and if suitable employment is not found, they can enroll in training. TAA and NAFTA-TAA provide extended income support and pay for training, within certain time limits and restrictions. Figure 1 illustrates the timeline for receipt of TAA and NAFTA-TAA benefits. The federal government has also established programs to assist trade-impacted firms and communities suffering job losses due to changing trade patterns.
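The benefit timeline above reduces to simple week counts, sketched here with the figures from the text (the 30-week unemployment insurance states are not modeled):

```python
# Sketch of the TAA/NAFTA-TAA benefit timeline described above, in weeks.
UI_WEEKS = 26          # regular state unemployment insurance benefits
TRA_WEEKS = 52         # trade readjustment allowances after UI is exhausted
TRAINING_WEEKS = 104   # maximum approved training (2 years)

income_support_weeks = UI_WEEKS + TRA_WEEKS
assert income_support_weeks == 78  # up to 18 months of cash payments

# Training can outlast income support, so a worker using the full training
# entitlement may receive no income assistance for the final stretch.
uncovered_training_weeks = TRAINING_WEEKS - income_support_weeks
assert uncovered_training_weeks == 26
```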
The TAA program for firms, established in 1962 and administered by the Department of Commerce’s Economic Development Administration (EDA), provides assistance to firms that can demonstrate that increases in imports have contributed importantly to layoffs and declines in sales or production. The TAA for firms program was funded at $10.5 million in fiscal year 2000. The Community Adjustment and Investment Program was established as a result of the 1993 North American Free Trade Agreement Implementation Act. Under the program, loan guarantees, loans, and grants are provided to businesses and grantees in eligible counties to help stimulate private sector employment and growth. An interagency committee chaired by the Department of the Treasury administers this program. Loan guarantees to local businesses have accounted for the preponderance of financing commitments to date. The program was established with an initial capitalization of $22.5 million and has received $20 million in additional appropriations to support and expand program activities. The Congress did not appropriate any additional funds for the program in fiscal year 2001. The federal government also offers assistance programs for any distressed community, regardless of whether the economic problems are trade related, through EDA. EDA’s mission is to help generate jobs; retain existing jobs; and stimulate industrial, technological, and commercial growth in economically distressed areas. EDA assistance is generally available to rural and urban areas of the nation experiencing high unemployment, low income, and severe economic distress. In addition to EDA assistance, a variety of other federal programs provide assistance to economically distressed areas. The six communities we selected for our case studies varied in size, demographics, and location, but they all had experienced major trade-related industry and job losses in the mid- to late-1990s.
The firms that these communities lost represent a cross section of the types of industries that have been affected by increased trade, as shown in figure 2. Although many layoffs in these communities occurred gradually as companies downsized to remain competitive with offshore producers, each community experienced at least one major plant closure that had an immediate impact on the local workforce and economy. For example, Tultex, an apparel manufacturer and one of the largest employers in Martinsville, Va., declared bankruptcy in December 1999 without prior notice and immediately closed all operations, leaving more than 2,000 workers without jobs. Likewise, in December 1996, Coushatta, La., lost 500 jobs when the Sunbeam small appliance plant, the city’s only manufacturing concern, closed as part of a corporate restructuring that moved many production operations offshore. Table 1 shows the total number of workers certified under the TAA program when threatened by job loss in these communities, as well as the numbers certified in major layoffs. Trade-related plant closures and mass layoffs had serious economic impacts on these communities. In the short term, their unemployment rates rose dramatically. For example, as a result of the Tultex closure in Martinsville, the city’s unemployment rate went from about 9 percent to almost 20 percent in 1999. Similarly, in Coushatta, the closing of the Sunbeam manufacturing plant in 1996 caused the city’s unemployment rate to rise to almost 24 percent in 1997. Where the layoffs were particularly severe, the communities were not prepared to deal with the workers’ immediate needs, and local social service agencies were overwhelmed with requests for assistance. In Martinsville, many displaced Tultex workers were not eligible for assistance, such as food stamps, because they owned vehicles and other assets and thus could not meet the programs’ eligibility requirements.
These workers, who had also lost medical coverage as a result of the Tultex bankruptcy, were forced to find employment or seek assistance from charitable organizations such as the Salvation Army. Plant closures and layoffs also impacted local government revenues and hurt local businesses. Government officials in some communities said that they lost significant business tax revenues when companies abandoned their plants. For example, according to Martinsville officials, as a result of its bankruptcy, Tultex defaulted on more than $1 million in property taxes owed in 1999, and the city will continue to lose tax revenue from the property. In addition, Martinsville was forced to raise water and sewer service rates to compensate for the $1.4 million Tultex had paid annually for these services. The loss of income suffered by displaced workers also affected local businesses. Coushatta officials said that Sunbeam’s $10 million annual payroll represented about one-fifth of Red River Parish’s gross income. Such losses of income had a negative impact on retail sales in trade-impacted communities. Businesses that supplied or subcontracted for plants that closed also felt the impact. For example, in Martinsville, a plant that generated steam for the Tultex factory was forced to close after Tultex went bankrupt. Many dislocated workers found new jobs in their area, but most were paid lower wages, according to community officials. In Owosso, Mich., community officials said that many workers who lost their jobs in the trade-impacted automobile accessory industry eventually found lower-paying service sector jobs. In Washington, N.C., some workers who were displaced when Hamilton Beach/Proctor-Silex, Inc. moved its small appliance operation to Mexico found work in the local furniture-making industry, but at significantly lower wages. Similarly, in Coushatta, many former Sunbeam workers who were hired at a nearby compressor plant that had recently opened were paid less.
In El Paso, the city had been successful in attracting some new manufacturing businesses, but many displaced apparel workers were not qualified for the jobs and either found employment in the service sector or remained unemployed. Available but incomplete Department of Labor data indicate that, nationally, only 61.5 percent of dislocated workers who responded and entered new employment reported that their new jobs paid at least 80 percent of their old job’s wages. Some communities feared that the loss of relatively well-paying but low-skilled jobs could lead to a decline in the standard of living for a large segment of the population. Food processing jobs in Watsonville, California, and apparel manufacturing jobs in El Paso, Texas, have been considered a means of upward mobility for recent immigrants with limited English skills. Workers who had these jobs were paid union-scale wages and received fringe benefits, which provided an opportunity to buy homes and send their children to college. Officials in Watsonville and Washington said that, until recently, young people could count on a factory job after high school where they might stay for most or all of their working lives. However, workers who have lost manufacturing jobs in these communities have limited prospects for obtaining new jobs with similar wages and benefits, since the jobs now available require higher skills or more education, according to community officials. Communities we visited were also concerned about the threat of additional trade-related layoffs and plant closures. Washington and Martinsville community leaders said that they expect their textile and apparel industries to continue to decline because of increased foreign competition. Martinsville leaders also fear that the furniture industry, another large employer in their community, will begin to feel the impact of increased furniture imports.
Watsonville leaders also expected that the city would continue to lose food processing jobs as more companies shift operations to Mexico. The communities we visited had experienced serious adverse impacts from trade-related layoffs. The primary source of trade adjustment assistance they received came from Labor’s TAA and NAFTA-TAA programs for dislocated workers. TAA and NAFTA-TAA are widely available and can provide substantial assistance to trade-impacted dislocated workers through extended income support and training benefits. However, local program administrators told us that these programs have structural problems that hinder the effective delivery of services. The states also varied in how they implemented training benefits. Furthermore, it is difficult to evaluate the efficacy of the varied training or job placement approaches used across communities or states, because Labor’s outcome data for these programs are incomplete. The trade adjustment assistance that our case study communities received primarily came through the TAA and NAFTA-TAA programs for trade-impacted dislocated workers. These are entitlement programs, available to dislocated workers whose eligibility had been certified by Labor. TAA and NAFTA-TAA provide substantial assistance, primarily by supplying extended income support after unemployment insurance is exhausted, as well as training benefits. TAA and NAFTA-TAA program data show that the largest benefit delivered to displaced workers was in the form of extended income support, primarily for partial wage replacement while in training. As shown in table 2, the two programs paid a total of just over $66 million in income support to individuals in these communities over 6 years. Of this amount, almost $44 million was for “basic” allowances, or payments made in the 26 weeks after unemployment benefits are exhausted.
About $22 million was for “additional” allowances made after basic allowances are exhausted, if the dislocated worker is in training or has a training waiver. Thus about 78 percent of TAA and NAFTA-TAA assistance that went to workers in the case study communities was used for income support. In these communities, training courses, which can last up to 104 weeks, cost $19 million over the 6-year period, which was about 22 percent of total funding. Forty-four percent of all program recipients in these communities enrolled in training. Total payments for income support and training were $85 million. Among the communities, El Paso had by far the highest number of recipients—8,581—and the highest total payments—at almost $70 million. Watsonville, with $390,000 in assistance payments for 27 recipients, received the lowest amount of income support and had only 48 individuals enrolled in training. The effectiveness of the TAA and NAFTA-TAA programs is hampered by a number of problems in the way the programs have been structured, according to state program administrators and program participants. In the communities we visited, we consistently heard concerns expressed about problems related to inconsistency between the length of income support and training benefits, which resulted in hardships and increased numbers of dropouts; funding problems resulting in training delays; maintaining two separate trade-dislocated worker programs; and programmatic requirements that hinder efficient delivery of services and benefits to workers. Program administrators, training providers, and workers in training consistently said that the TAA and NAFTA-TAA programs needed to close the gap between extended income support payments, which are provided for up to 18 months, and training, which is provided for up to 24 months.
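The figures cited in this section can be reproduced with simple arithmetic. The following is an illustrative sketch, not program data; all inputs are the rounded dollar amounts and time limits stated in the text.

```python
# Illustrative check of figures cited above, using the rounded amounts
# reported for the six case study communities (fiscal years 1995-2000).

basic_allowances = 44_000_000       # income support in the 26 weeks after UI ends
additional_allowances = 22_000_000  # income support while in training, after basic ends
training_costs = 19_000_000         # training courses

income_support = basic_allowances + additional_allowances
total = income_support + training_costs
print(f"Total payments: ${total / 1e6:.0f} million")          # $85 million
print(f"Income support share: {income_support / total:.0%}")  # 78%
print(f"Training share: {training_costs / total:.0%}")        # 22%

# The income support / training mismatch described above:
income_support_months = 18  # 26 weeks of UI plus 52 weeks of extended support
training_months = 24        # maximum approved training (104 weeks)
print(f"Months of training with no income support: "
      f"{training_months - income_support_months}")           # 6
```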
Although there are mixed views and little data on the outcomes associated with shorter and longer training programs, as discussed below, the gap in income support is believed to create difficulties for workers in 2-year training programs: when income support payments stop, dislocated workers generally drop out of training because they cannot afford to remain in classes without financial assistance. The local administrator for the TAA and NAFTA-TAA programs in Washington, N.C., said that he and his staff advised workers to enroll in a course of study that would take no more than 18 months to complete, unless they had other sources of income. Dislocated workers we spoke with, most of whom were currently enrolled in training, frequently referred to workers who had to drop out of training due to financial constraints. Family responsibilities and the need to make mortgage or car loan payments had made it impossible for them to subsist without income support. Some workers we interviewed said they could afford to continue because they had a spouse who was working, but others said that they chose training that lasted 18 months or less. This choice precluded them from pursuing a 2-year Associate of Arts degree program, which could result in higher earnings or better skills, or any 2-year course of study involving initial remedial courses. Another problem with program structure cited by local program administrators was the lack of a stable funding stream for training benefits. Although training is a key part of a worker’s benefits, some states had difficulties providing consistent funding for training due to budgetary problems in the Department of Labor. The amount of funding the Department of Labor can provide is governed by legislatively set caps on training funds, with an annual limit of $80 million for TAA and $30 million for NAFTA-TAA.
According to the Trade Act Coordinator in North Carolina, funding the trade programs has been an “administrative nightmare,” and state funds frequently must be used. Generally, federal funding is provided to states quarterly and is based on prior expenditures. Because TAA and NAFTA-TAA certifications fluctuate, in some cases, states may not have received sufficient funding to cover workers enrolled during a quarter. In addition, state and local officials reported that insufficient federal funds are available for the programs toward the end of the fiscal year (Department of Labor officials said these problems primarily occur in the first and last quarters of the fiscal year). High levels of certifications from unanticipated layoffs and plant closures have resulted in states—Texas and North Carolina, for example—with large numbers of workers enrolled in training at the same time that program officials were informed that no additional federal funds remained. As we noted in our recent report, although Labor has issued formal guidance that states should not stop enrolling workers in program services and benefits when funding is temporarily unavailable, agency officials report that few states have done so. In some cases, when federal training funds are depleted, states use Workforce Investment Act (WIA) funds or other state monies. According to officials from case study communities, dislocated workers were frequently enrolled in training under WIA and then shifted to TAA or NAFTA-TAA after Labor approved their petitions and certified them as eligible for benefits. A basic structural problem arises from maintaining separate TAA and NAFTA-TAA programs, resulting in inefficiency, problems in administration, and confusion, according to state program administrators and program participants. Officials in every state administrative office and case study community we visited stated that the certification and training enrollment procedures of the two programs are different.
They claimed this hinders effective program administration—particularly because many workers are certified for benefits under both programs and must select one under which to take benefits. Officials said standardized requirements would make the program easier to administer. This reaffirms what state administrators from the 20 states with the largest TAA and NAFTA-TAA programs told us in 2000. Maintaining two separate trade programs, with differing timeframes and provisions for training waivers (available under TAA but not NAFTA-TAA), is confusing to both program administrators and dislocated workers. Dislocated workers in El Paso and Owosso stated that the explanations of the differences that program officials provided before workers had to choose between programs were inadequate and did not sufficiently answer their questions. Officials in every state administrative office and case study community we visited thus consistently supported the consolidation of the TAA and NAFTA-TAA programs. These officials believed that consolidation would simplify program administration and rules and would be more efficient. The Department of Labor agreed with this position in its letter of October 5, 2000, commenting on our evaluation of these programs, stating that it supported measures to harmonize the requirements of the two programs. Other program requirements can also impede dislocated workers from successfully completing training, according to state program administrators and program participants. For example, to obtain income support assistance, the NAFTA-TAA program generally requires that the dislocated worker enroll in training by the last day of the 16th week following their layoff or by the last day of the 6th week after publication of the certification, whichever is later.
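The enrollment deadline rule just described can be expressed as a small date calculation. This is an illustrative sketch only: the dates are hypothetical, and treating “the last day of the Nth week” as exactly N calendar weeks after the triggering event is a simplification of the statutory rule.

```python
from datetime import date, timedelta

def nafta_taa_enrollment_deadline(layoff_date, certification_date):
    """Approximate latest training-enrollment date under the NAFTA-TAA rule
    described above: the later of (a) 16 weeks after the layoff and
    (b) 6 weeks after publication of the certification."""
    return max(layoff_date + timedelta(weeks=16),
               certification_date + timedelta(weeks=6))

# Hypothetical example: layoff on March 1, 2001; certification published June 15, 2001.
deadline = nafta_taa_enrollment_deadline(date(2001, 3, 1), date(2001, 6, 15))
print(deadline)  # 2001-07-27: here the certification-based date is the later one
```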
State program administrators said this requirement can limit training options for workers seeking to study at community colleges because training courses may be semester-based and not begin within the enrollment deadline. As a result, according to program administrators, workers must sometimes enroll in less suitable courses to retain their eligibility for income support. The TAA and NAFTA-TAA programs also prohibit dislocated workers from receiving income support if there is a break in training exceeding 14 days. Program administrators we interviewed explained that community colleges generally have semester breaks lasting longer than 14 days, which means that dislocated workers cannot receive any financial assistance during that period. Other factors complicating service and benefit delivery include certification delays at the Department of Labor and federal paperwork requirements. As noted in our recent report, Department of Labor delays in certifying TAA and NAFTA-TAA petitions or state program administrative office delays in approving workers’ training plans can limit workers’ options. Dislocated workers we interviewed in El Paso said that acquiring approval for their training plans had taken months. Training providers in North Carolina and Texas told us federal administrative and paperwork requirements were cumbersome, rigid, and highly bureaucratic. In North Carolina, the Trade Act Coordinator at the state Employment Security Commission must approve any request for a dislocated worker to change a community college class. Department of Labor officials said that nearly all states have centralized approval of workers’ training plans because local officials have less experience with TAA and NAFTA-TAA regulations. A number of challenges influenced implementation of TAA and NAFTA-TAA training benefits in our case study communities.
One issue is related to the profile of the dislocated workers, a significant percentage of whom were in their 40s, had not finished high school, and needed remedial courses before they could start a degree or certificate program. Many in El Paso, Tex., and Watsonville, Calif., also had limited English proficiency. Given this dislocated worker profile, there was a debate in most communities about whether workers should enroll in 2-year degree programs or take shorter certificate programs and return to the workforce as quickly as possible. These issues were particularly challenging in El Paso, which had to contend with overwhelming numbers of dislocated workers, many of whom had low educational levels and limited English proficiency. The profile of dislocated workers that emerged from our visits and discussions with program administrators, training providers, and small groups of dislocated workers is consistent with the Labor Department’s available national data on TAA and NAFTA-TAA dislocated workers. The Department’s data on these programs suggest that, nationwide, almost two-thirds of the participants were women, the average age was 43, and education levels were generally low. Twenty-five percent had less than a high school education when laid off. According to available data and discussions with program administrators in El Paso and Watsonville, many program participants in their communities also had limited English proficiency. Officials in these two communities told us that the majority of these workers were Hispanic, spoke little or no English, and many had not gone much further than elementary or middle school. In Owosso, Mich., most dislocated workers were high school graduates. However, in Coushatta, La.; Washington and Chocowinity, N.C.; and Martinsville, Va., significant numbers of the dislocated workers had not finished high school, according to local officials. Many dislocated workers had been with their employers for a considerable number of years.
We interviewed some dislocated workers in El Paso who had worked for Levi-Strauss—one for almost 30 years. Similarly, many workers at the Sunbeam plant in Coushatta and the Hamilton Beach/Proctor-Silex, Inc., plant in Washington had worked there for more than 20 years when they were laid off. Program officials and training providers stated that these dislocated workers generally had an excellent work ethic and wanted to reenter the workforce as quickly as possible. However, many did not have a high school diploma or a General Equivalency Degree (GED), and few businesses would hire them without one or the other. Thus, many dislocated workers needed GED classes and sometimes English as a Second Language classes before entering occupational training and obtaining a job. As a result of these factors, moving these workers further on the educational or job skill continuum is a challenge. Moreover, even participants who did complete high school may have been in the workforce for 2 decades, making it difficult for them to reenter the educational system. Given these factors and the maximum 2 years of training available, earning an Associate of Arts degree would represent a considerable achievement yet may still leave these participants short of the skills required for the new economy jobs. While program and training officials in most communities we visited were debating the best method for retraining trade-dislocated workers, limited data on wages and employment outcomes make it difficult to assess which methods are most successful. Specifically, the debate centers on enrolling dislocated workers in 2-year degree programs to pursue an Associate in Arts, Science, or General Education, which may give them more options for a new, higher-skilled career, or enrolling them in short-term certificate programs that provide specific occupational skills training.
In El Paso, the County Judge and the director of a community-based training advocacy group told us that they believe the central problem facing these workers was underemployment and that training should ensure that dislocated workers have basic language skills and a 2-year degree so that they can get a better job with a living wage. They saw training as a way to help dislocated workers climb the employment ladder, rather than ending up in another low-skilled, low-wage job with little hope of advancement. The training advocacy group official said training should be viewed as an investment that would provide a good return to the local economy. Others pointed out that many dislocated workers need remedial courses to earn a GED, and in some cases they also need courses to gain proficiency in English, which would take up most, if not all, the allotted training time. They also pointed out that most dislocated workers need to complete training in 18 months, rather than the 24 months allowable, because income support ends at 18 months. They stated that it was important to ensure that dislocated workers could complete their training program and that shorter programs resulting in certification, such as those for certified nursing assistants or truck drivers, were more practical. Finally, an instructor we interviewed at El Paso Community College said it was best to take a flexible approach in meeting workers’ needs. Some workers could manage a 2-year degree program, but most needed a realistic assessment of what they could accomplish in 18 or 24 months. For most, she believed a shorter certificate course was the best option. Program data suggest that most dislocated workers in these communities are taking shorter training courses. While 11,945 workers enrolled in training during fiscal years 1995 through 2000, only 3,536 workers (30 percent) received additional allowances, which provide income support during the 12 to 18 months of program participation. (See table 2.) 
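The 30 percent figure cited above follows directly from the enrollment counts in table 2; the following is an illustrative check using those two numbers.

```python
# Illustrative check: share of workers enrolled in training in these
# communities who also received "additional" allowances (extended income
# support after basic allowances are exhausted), fiscal years 1995-2000.

enrolled_in_training = 11_945
received_additional = 3_536

share = received_additional / enrolled_in_training
print(f"Share receiving additional allowances: {share:.0%}")  # 30%
```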
According to program administrators and training providers, workers generally do not continue training after income support benefits have stopped. Determining how well various training alternatives assist dislocated workers is difficult, due to the lack of outcome data on wages and reemployment. Although the Labor Department instituted a system to measure performance in fiscal year 1999, the level of responses from workers who have left the TAA and NAFTA-TAA programs has been low. To address the low response rate, the Labor Department changed its reporting system in fiscal year 2001, requiring all states to match data on dislocated workers with state wage records to determine whether dislocated workers have reentered employment and at what wage. El Paso, Tex., faced a particular challenge in providing trade adjustment assistance training benefits to dislocated workers because of the large numbers involved. In 1994, nearly half of El Paso’s 50,000 manufacturing jobs were in the apparel and textile industry. According to Texas Workforce Commission data, from January 1994 to February 2001, a total of 17,069 workers in El Paso were certified by the Labor Department for the TAA or NAFTA-TAA programs, with the majority of these jobs in the apparel and textile industry. Trade adjustment assistance program administrators in El Paso said that the sheer number of dislocated workers overwhelmed the system. In addition to the other types of assistance provided, about half of these workers enrolled in training during fiscal years 1995 through 2000. There were not enough case managers to handle the inflow, which averaged 475 cases per case manager in fiscal year 1998 and 325 in fiscal year 1999. All the dislocated workers we interviewed in El Paso were critical of the service they received, saying they were given inconsistent information, little respect by case managers, and no vocational counseling. 
El Paso’s training providers did not have enough seats for all the dislocated workers at the peak of the layoffs, according to an El Paso Community College official, and the college was not prepared for the bilingual training most workers required. These dislocated workers eventually ran out of trade adjustment benefits but were no more employable than they had been before entering these programs, according to local program officials. In June 1998, the Labor Department awarded El Paso a $45 million grant to assist 4,500 workers displaced between January 1, 1994, and December 31, 1998. This grant was used for the El Paso Proactive Reemployment Project (PREP), which was to provide retraining, readjustment services, and income support for workers who had not yet received occupational training under their initial TAA and NAFTA-TAA benefits. However, some officials said that PREP instead extended income support without requiring sufficient accountability for workers’ progress. The result, according to the El Paso Community College official, was that the college was basically “warehousing dislocated workers” who needed to attend classes to obtain income support. Case study communities face fundamental challenges in restructuring their economies, while the available adjustment assistance is limited, targeted, and short-term. Human capital challenges appear formidable, particularly helping workers in their mid-40s with a high school education or less to find employment in the new economy. The physical infrastructure needed to renew economic growth is a related obstacle. While each community we visited experienced adverse economic impacts as a result of trade-related layoffs, the severity of these impacts and the communities’ responses varied. Martinsville, Va.; El Paso, Tex.; and Watsonville, Calif., have embarked on economic adjustment strategies aimed at diversifying their economies and attracting jobs to replace those that were lost.
Alternatively, in Coushatta, La., and Owosso, Mich., where most trade-displaced workers have found similar but lower-paying jobs in the area, community leaders have not settled on an economic adjustment strategy, although they recognize that in the long run such strategies may be necessary. As the communities we visited work to recover from the trade-related job losses they sustained, many face fundamental challenges that will make it difficult to attract new businesses. These challenges encompass the need to develop both their human capital and physical infrastructure in order to complete the economic adjustment of their communities. One challenge consistently cited by government and civic leaders in the communities was the issue of human capital. They said that they needed to improve local educational systems, which often had high school dropout rates much higher than the national average. In several communities, local leaders said that school facilities and curricula needed to be improved to better prepare students for high-skilled jobs and to develop a more attractive environment for companies that they would like to recruit. In some communities, local officials said that they were caught in a difficult situation in which local residents who graduate from college leave for better jobs elsewhere. At the same time, they were hampered in recruiting firms that needed a college-educated workforce, in part, because they had low numbers of such workers in their area. Table 3 illustrates another aspect of the human capital challenge facing these communities. The table compares trade-dislocated workers nationwide to the total U.S. workforce. Compared with the total workforce, trade-dislocated workers include a higher percentage of women and had lower educational levels and lower average wages in the jobs they held before dislocation. While 42 percent of the nation’s total workforce has a high school education or less, 80 percent of trade-dislocated workers fall into this category.
Among trade-dislocated workers, the average age was 43, and 59 percent were over the age of 40. These data suggest, and community leaders we interviewed confirmed, that trade-impacted workers tend to be less mobile and face difficulties reentering a workforce that increasingly requires more skills and training. Another challenge faced by the communities was the lack of access to major transportation arteries and inferior transportation infrastructure. Officials in three of the communities, Washington, Martinsville, and Coushatta, believe that their economic growth was hampered because their communities are not located near interstate highways. There are no four-lane highways in Coushatta or Red River Parish. While Virginia and North Carolina are considering interstate extensions that would run near Martinsville and Washington, both projects are still in the early proposal phase and, even if approved, are years away. Local land-use policies pose additional challenges to economic development and growth in Watsonville and Owosso. Watsonville is surrounded by prime agricultural land, and Santa Cruz County has imposed stringent restrictions on the conversion of this land to other uses, including manufacturing and housing. Watsonville officials said that only a limited amount of land zoned for manufacturing is currently available in the area. To expand its manufacturing base to provide employment opportunities for displaced workers, Watsonville officials believe the community must annex agricultural land, which local officials said is rarely approved. Owosso also has limited land available for new industries within its city limits. Independently governed townships, comprised of residential neighborhoods that surround the city, want to limit industrial expansion. Owosso officials said that there has been limited collaboration on economic development among area jurisdictions. 
Recently, however, a local development corporation has initiated an effort to promote cooperation among business and civic leaders in Owosso and surrounding areas. These communities are struggling with difficult choices needed to rebuild their economic base and retool to better compete in the national and global economies. Watsonville, Martinsville, and El Paso have what could be termed “middle path” strategies that aim for economic diversification while preparing their workforce for the high-skilled, well-paying jobs that they hope to attract. However, these communities realize that they still need jobs suitable for low-skilled displaced workers who will not qualify for new economy jobs. The other three communities have yet to come to grips with which economic adjustment course to pursue. El Paso received a $275,000 EDA grant to develop an economic adjustment strategic plan, which was adopted by the city in December 1999. The strategy encompasses a 10-year horizon and is linked to three principles: the value of human capital, access to Mexican and Latin American markets, and the radical transformation of the economy through technology. El Paso’s strategic plan recommends following a middle path between recruiting lower-wage industries to employ displaced workers and recruiting those that offer higher-wage employment. The plan’s authors argue that recruiting only lower-wage industries that could employ the current group of displaced workers would expose the city to companies concerned entirely with low-cost labor and risk further business closures if they relocate to Mexico. On the other hand, offering incentives only to higher-wage industries would create few opportunities for currently displaced workers and further strain the community’s social infrastructure. The strategy emphasizes the importance of making workforce development programs employer-driven and linking these efforts to businesses. 
In Martinsville/Henry County, the Patrick Henry Development Council, which is the local economic planning board, received a $350,000 grant in 2001 from the U.S. Department of Housing and Urban Development to develop an economic development strategy. The strategy seeks to diversify the area’s economic base by recruiting both high-paying technology and heavy manufacturing companies. The strategy also focuses on human capital, and the council contracted for a worker profile survey to determine what skills the local workforce needs to develop to attract high-paying companies and to assess the local educational system’s ability to produce that workforce. Martinsville/Henry County is also part of an EDA-designated Economic Development District and received a $60,000 grant in 2000 to prepare a regional strategy that focuses on mitigating adverse trade impacts. Both strategies propose and prioritize projects, such as industrial parks and business incubators, qualifying for state and federal assistance. Watsonville officials concluded that efforts to train many dislocated food processing workers have had limited success. Their economic adjustment strategy thus has two goals: (1) to attract labor-intensive manufacturing jobs for which these displaced workers can qualify and (2) to improve job opportunities for the city’s youth by providing training in computer and other skills required by high-wage employers. In addition, the county and community college are collaborating on a project to support the creation and retention of quality jobs. There is also a business incubator to facilitate new business start-ups. Owosso city officials have done little economic adjustment planning, although local business leaders have taken steps to attract new jobs. Because of Owosso’s proximity to the industrial centers of Lansing and Flint, where automotive industry jobs are available, the city has not developed an economic adjustment strategy.
However, the private sector in the Owosso area has taken some steps to address job loss. Business and community leaders have begun work on a plan that looks to the area’s future economic development, part of which is to develop an industrial park. Coushatta and Washington have taken some action to improve their local economies. Coushatta officials said that the town, which has only 2,299 residents, has limited resources for development and has not developed a comprehensive economic plan. The town, however, has spent $1 million to renovate the city-owned Sunbeam building and leased it to a pillow and mattress manufacturer that employs about 200 workers, including many former Sunbeam employees. The town has also contracted with a consultant to help attract new companies. In Washington, local officials said the Beaufort County Planning Commission would begin developing an economic adjustment strategy when it filled its Executive Director position. In the meantime, the Commission is considering acquiring a building to serve as a business incubator. Officials from the communities we visited generally reported that they had received modest assistance from federal and state governments, mostly federal loan guarantees. Table 4 shows that the communities received a total of $59.5 million in economic adjustment funding from federal assistance programs from fiscal year 1995 to the present. Reflecting the varied sizes of the communities and the impacts they experienced, this assistance ranged from $413,000 for Coushatta, La., to $44.5 million for El Paso, Tex. In addition, the largest source of adjustment funding for these communities—$42.3 million—was provided by Community Adjustment and Investment Program (CAIP) guaranteed business loans, most of which were made in El Paso ($38.7 million). The second largest source of funding was from EDA, which provided $10.5 million to the communities.
Each community received some economic development assistance from its state government. The most common forms of state assistance were business tax abatements and refunds available to new and expanding businesses in economically distressed areas. A Martinsville city official said that, since 1996, when the city was designated a State Enterprise Zone, all businesses that located or expanded in the city have taken advantage of these incentives. Some communities also received state economic development grants and loans. In Coushatta, the Louisiana State government helped fund improvements to the former Sunbeam building to make it more attractive to potential manufacturing tenants. Likewise in Martinsville, after the Tultex closure in 1999, the state of Virginia loaned the city $945,000 to construct a shell building in its industrial park and provided a $250,000 grant to convert Tultex’s former headquarters building into a business incubator. The communities received limited assistance from the two federal programs designed to mitigate the adverse economic impacts of trade: the Trade Adjustment Assistance program for firms and CAIP. The program for firms provides consulting services to trade-impacted firms to make them more competitive. According to Department of Commerce records, only one firm in the six communities, an El Paso wood cabinet company, received assistance from the program. While the program helped the company to develop an adjustment plan, the company did not implement it and did not receive funding, according to program records. Three of our case study communities received assistance from CAIP. Since 1997, businesses in El Paso have applied for and received about $38.7 million in guaranteed CAIP loans, more than any other community in the country.
CAIP has also made two direct loans in El Paso, one in May 1999, for $1 million to the El Paso Workforce Collaborative to renovate a former Levi-Strauss factory to be used as a Workforce Development Center and a Business Resource Center. The second loan was made in March 2001, for $180,000 to La Mujer Obrera, an advocacy group for dislocated women workers, for equipment and working capital for a restaurant that is also a training facility. El Paso also received two CAIP grants of $450,000 aimed at improving workers’ skills, one to a plastic injection molding training and contract center, and the second to retrain workers for health field jobs. In Watsonville, CAIP has guaranteed three loans totaling $2.6 million for plant nursery businesses. A business in Martinsville obtained two CAIP guaranteed loans totaling $600,000. Near Coushatta, a portable building business received a CAIP guarantee for a $413,000 loan. Owosso was ineligible for the CAIP program because of its low unemployment rate, and while Washington was eligible, no businesses there received CAIP-guaranteed loans. Officials in most communities we visited said that CAIP assistance is not sufficient to spur economic recovery. We noted in our recent report that CAIP adds marginal benefits to trade-impacted communities because it guarantees loans that would likely have been made under existing Small Business Administration programs. Other federal programs, although not targeted specifically at trade-impacted communities, offer assistance to economically distressed areas. For example, EDA provides grants to communities in economic decline to upgrade or expand their economic infrastructure and to design and implement economic adjustment strategies. Most of our case study communities received some economic adjustment assistance from EDA and other federal agencies, but community leaders believed more was needed.
After the Tultex bankruptcy in December 1999, a team of federal officials from several agencies, working under the coordination of the President’s National Economic Council, went to Martinsville/Henry County to explain the federal assistance available and to assist the communities in filing grant applications. In 2000, the community was awarded about $800,000 in EDA and Department of Agriculture grants for a business incubator. In 2001, an $840,000 EDA grant for a water line for a new plant was approved. In addition, the West Piedmont Planning District received a $60,000 EDA grant to prepare its economic adjustment strategy. El Paso was awarded about $2.6 million in EDA economic adjustment assistance from 1995 to the present. This amount included a $1.4 million grant to aid in converting the former Levi-Strauss plant and a $1.2 million grant to La Mujer Obrera, an advocacy group for dislocated women workers. Watsonville and El Paso, cities that historically have had high percentages of low-income residents, have had sections declared federal Enterprise Communities on the basis of their high poverty rates and low per capita income levels. Enterprise Communities received $3 million in block grants through this program, which is administered by the Departments of Agriculture or Housing and Urban Development. Businesses in these communities are eligible for tax-exempt bond financing to build or expand facilities. Enterprise Communities’ applications for other competitive federal economic and community development grants are given special rankings by the agencies that administer the grants. Watsonville, which was designated an Enterprise Community in 1995, devoted much of its grant, provided by the Department of Agriculture, to fund youth training programs. It has used other federal grants and loans to finance a downtown business incubator, two motels, and the expansion of a community college branch campus.
El Paso, which also received its Enterprise Community designation in 1995, is using its funds for human capital efforts such as job training and has offered tax-exempt bond financing as an incentive to attract and encourage business expansion. El Paso was also designated an Empowerment Zone by the Department of Housing and Urban Development in 1999, based on criteria similar to those used by the Enterprise Community program. As an Empowerment Zone, the city thus far has received $19 million. The funds are being used to promote projects similar to those under the Enterprise Community program. Officials in the communities believe that they are limited in their ability to obtain federal economic adjustment assistance and cite a number of reasons. Some officials said that, without a central source of information on available economic adjustment programs, they are not always aware of those for which their communities might qualify. For example, officials at the West Piedmont Planning District, which includes Martinsville, said they are familiar with EDA and Department of Agriculture Rural Development programs but have limited knowledge of other federal programs. Officials also cited the lack of financial resources to meet the federal grant matching requirements. El Paso officials said they primarily have sought assistance from programs with no or low matching requirements. Some officials described the grant application process as time-consuming, technical, and expensive. Officials in Owosso, Coushatta, and Washington said that their communities lacked the personnel and expertise necessary to secure federal grants. Local officials believe that the scope of programs targeted at trade-impacted areas is too limited to make a difference in their communities. The six case study communities in our study pose a particularly severe test for the trade adjustment assistance programs since we used criteria that were designed to identify hard hit communities. 
As a result, these experiences may not be typical of communities affected by trade-related layoffs. Nevertheless, the lessons learned by these communities may be applicable to other hard hit communities, as well as to other communities where the impact of trade-related layoffs was not so severe. These lessons may also be relevant for communities where technology or other forces have led to significant job losses. These communities face long-term challenges in improving the job skills and human capital of dislocated workers and developing the physical infrastructure needed to attract new businesses. Those involved in worker adjustment assistance programs in the communities pointed to the need for more flexible training programs linked to the employment needs of local businesses. Community leaders working on economic adjustment efforts had a harder time drawing lessons from their experiences; they faced difficult choices and found few off-the-shelf answers. One of the fundamental challenges facing trade-impacted communities is helping dislocated workers—generally older workers with a high school degree or less—adjust to an increasingly globalized economy that requires different skills than were needed when these individuals entered the workforce. Trade adjustment assistance program administrators and training providers in the communities said that program rules regarding income support benefits limit their flexibility in addressing dislocated workers’ training needs. Although administrators believed that limiting income support to 18 months presented financial hardships that discouraged workers from completing 2-year training programs, available data indicate that most workers leave training after 1 year. However, even if these programs were more flexible, these relatively short-term training programs may not bridge the gap between these workers’ current skills and the skills they need to enter the new economy workforce.
Our discussions with training providers and workers indicated that enormous sacrifices are necessary for dislocated workers—many of whom have been out of school for 20 years or more and must balance family and other responsibilities—to be successful in an educational system that has become significantly more challenging. Still, additional funding does not mean that these challenges are easily addressed, as indicated by El Paso’s experience with a $45 million supplemental grant from the Department of Labor. Local officials learned that the necessary training infrastructure must be in place to meet dislocated workers’ needs. According to community leaders, supplemental grant funds were used to place many workers in training programs that had not been evaluated. Further, insufficient bilingual training designed for adults was available. With no clear sense that training would improve their job prospects, some workers stayed in training to continue receiving income support benefits. Eventually, grant managers began offering workers monetary incentives to leave training and take available low-skilled jobs. One of the lessons that emerged from our discussions is the central role of education and training, whether providing basic language skills to laid-off workers, strengthening the links between business and educational institutions, or emphasizing education for the next generation of workers. In Martinsville, the local planning authority, in conjunction with businesses, has contracted for a workforce survey that will identify the human capital needed to attract and retain businesses. The survey results will serve as the basis for developing training courses at the local community college. In Owosso, the local training provider, a private college, has local business leaders sit on an advisory board to ensure that the college’s curriculum includes needed job skills training.
In El Paso, a training provider that specializes in training displaced workers has worked with local businesses to develop an internship program to give students work experience. Such efforts to define local needs, establish priorities, and link education and training with jobs are a common theme among our trade-impacted communities. Finally, one of the lessons that appeared in our discussions with many community leaders was that helping the dislocated workers is the immediate challenge but does not lead to—and may even detract from—the efforts to address the longer-term structural problems. Watsonville, Martinsville, and El Paso appear to have chosen the path of economic diversification and are emphasizing the importance of education for the generations now entering the workforce, since it is often much easier for them to acquire the skills that are necessary for higher-paying jobs in the new economy. However, leaders in these communities acknowledge that they also need jobs suitable for low-skilled displaced workers who will not qualify for new economy jobs. Balancing these competing demands will determine how successfully these communities adjust to changing national and global economic conditions. We received written comments on a draft of this report from the Departments of Commerce and the Treasury. The Department of Commerce said that the report fairly and accurately describes EDA assistance programs. The Department of the Treasury said that the report makes an important contribution to understanding the issues that a community faces when an abrupt change takes place in its economy as a result of job dislocations attributable to changing trade patterns. The comments we received and our evaluation of them are contained in appendixes VIII and IX. In addition, the Department of Labor provided technical comments, which we incorporated in the report as appropriate.
The Department of Commerce said that our characterization of the difficulties of Martinsville, Virginia, in obtaining economic adjustment assistance did not fully reflect EDA’s eligibility or competitive selection criteria for its Public Works and Economic Adjustment programs. We have revised the report to reflect EDA’s clarifications. The Department of the Treasury, while not disagreeing with our characterization of CAIP funding as limited, stated that more CAIP financing for communities was available if the communities requested it. We clarified the report by stating that businesses and potential grantees must apply for CAIP financing. CAIP does not provide funding directly to eligible counties, but rather offers access to competitively awarded grants or enhanced access to credit through loans and loan guarantees. The Treasury also disagreed with our statement that CAIP loan guarantees made in partnership with the Small Business Administration would likely have been made anyway without CAIP’s participation. We based these statements on our recent evaluation of CAIP. Our position, which remains unchanged, is that outcome measures and a monitoring system are needed to demonstrate the benefits CAIP has brought to communities. We are sending copies of this report to appropriate congressional committees. We are also sending copies of this report to the Secretary of Commerce, the Secretary of Labor, and the Secretary of the Treasury. Copies also will be made available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-4128. Additional contacts and staff acknowledgments are listed in appendix X. Located in the center of the Pajaro Valley in Santa Cruz County, Watsonville is on the central California coast, 48 miles from San José. Latinos comprise 75 percent of the population, most of whom are recent immigrants who came to work in the area’s agricultural sector.
The second largest city in the county, Watsonville has an unemployment rate that is twice that of the county. Table 5 provides more details on the community’s characteristics. Watsonville’s economy has been severely impacted by increased imports of frozen vegetables and natural disasters over the last two decades, according to local officials. Imports of low-cost vegetables from Mexico, Guatemala, and other countries have resulted in the reduction of Watsonville’s frozen vegetables industry. Thus, the industry’s decline was already well under way before the North American Free Trade Agreement (NAFTA) was enacted in 1994. According to the University of California, Los Angeles, North American Integration and Development Center, 510 permanent and 440 seasonal processing workers lost their jobs between 1983 and 1994. This was followed in 1996 with the closure of Dean Foods plants and the loss of 600 jobs. Natural disasters have also hurt the community, which is still recovering from the Loma Prieta earthquake of 1989. Further, in 1995, the community experienced a major flood and suffered agricultural losses. The closures of the frozen vegetable plants have had a major impact on Watsonville. Since the mid-1980s, Watsonville has lost more than 5,000 frozen food and cannery jobs, many of which were mid-level union positions that paid about $8 an hour and provided fringe benefits. For the community’s many migrant farm workers, these jobs provided an opportunity for upward mobility but were replaced with poorly paid seasonal or part-time work. The unemployment rate in Watsonville for the 10 years from 1990 through 1999 was in the double digits, averaging 16.5 percent. In 1993, the unemployment rate peaked at 20.8 percent; it declined to a low of 11.9 percent in 2000, when the annual averages for the United States and Santa Cruz County were 4 percent and 5.6 percent, respectively.
Nevertheless, the food processing and production industry remains the largest employer in the area, providing employment for more than 18,000 workers. However, since 1994, the number of jobs in food processing has declined at an average annual rate of 6.8 percent. Many residents and leaders believe that unless the city can reduce its dependence on low-skilled, low-income, seasonal jobs, the community will see further economic decline. Watsonville has not experienced a large, trade-related layoff of workers since 1996, and the results of efforts to assist earlier trade-impacted workers are limited. Several local officials said that one of the main problems with the Trade Adjustment Assistance (TAA) program was that the allowance of 2 years of training might not be long enough when dealing with workers who speak only Spanish or are illiterate in their native language. They said that considering these individuals’ remedial education needs, it would probably take 3 to 5 years to train them for new jobs. For workers who had enrolled in a local training school or community college, local officials had no data on wages earned after graduation and did not know the number of workers who had found jobs in areas of their training. Rather than relocate, however, most dislocated workers have chosen to remain in the Watsonville area. Some share homes with multiple families and have found employment in plant nurseries, farms, or small businesses. To help reemploy these people, city officials said that they have focused on attracting assembly jobs that are appropriate to the local population’s skill level. They recruited a company that makes bike and snowmobile parts, which opened in 2001. The city of Watsonville has received a wide range of federal, state, and county economic adjustment assistance to help implement its strategic plan. Federal agencies have worked to deliver a variety of assistance to Watsonville.
The Department of Agriculture has assisted the city with loans and grants through local partnerships with financial organizations and local economic organizations. It has also provided loans for several business projects, such as the Red Roof Inn and Holiday Inn Express hotels, a small business incubator building with office and retail space (see fig. 3), and a warehouse facility. The city was also awarded almost $200,000 from the Department of Agriculture and $500,000 from the Department of Health and Human Services to help the El Pajaro Community Development Corporation with development of the business incubator building. The El Pajaro Development Corporation is one of several local corporations working with federal agencies to improve Watsonville’s economy. Watsonville was included in the first round of cities the Department of Agriculture designated as Federal Enterprise Communities in January 1995. This designation will result in about $3 million in federal grant funds over the next 10 years and will improve the city’s ability to qualify for additional federal program funds. In its Enterprise Community application, Watsonville officials listed unemployment and the school dropout rate as their most severe community problems. Local officials also acknowledged that most training efforts are targeted at the younger population, conceding that they have largely given up on training older workers now unable to work in the local agricultural industry, primarily strawberry production. Watsonville also has participated in the Community Adjustment and Investment Program (CAIP). Under this program, Watsonville received about $2.6 million in additional guaranteed loans in fiscal years 1998 through 2000 to support several plant nursery business projects. Watsonville has also received more than $6 million in grants from the Economic Development Administration’s (EDA) Public Works and Economic Development Program.
The funds can be used to support (1) new building construction, (2) business incubators, (3) industrial parks, (4) roads and streets, and (5) water and sewer systems. Watsonville applied for the EDA program under the Santa Cruz County Economic Adjustment Strategy plan. The projects are funded with federal, state, and local agency funds. Some of the projects EDA has approved include a new Cabrillo College satellite campus (see fig. 3), a youth training center, and a parking garage in a downtown commercial building. Watsonville’s designation as one of 39 state Enterprise Zones located throughout California went into effect in 1997 and will be active until 2012. As a state Enterprise Zone, the city’s industrial area will receive a wide array of incentives to retain, expand, and attract businesses and to diversify the community’s economic base. However, local land use decisions are frequently contentious and sometimes challenged by farmers and environmental groups in the area. According to Watsonville officials, Santa Cruz County has an ordinance prohibiting the conversion of agricultural land to industrial uses or housing developments. The county ordinance is used to encourage new development efforts to locate in urban areas to protect agricultural land and natural resources in the rural areas. Also, environmental groups oppose development in the area because of the potential effects on the Monterey Peninsula and Bay. The town of Coushatta in Red River Parish in northwest Louisiana is approximately 50 miles southwest of Shreveport (parishes are comparable to counties). Red River Parish has a small industrial base and no multilane highways or public port facilities. Table 6 presents a demographic and economic profile of Coushatta. In late 1996, the Sunbeam-Oster Corporation, a manufacturer of small appliances such as toasters and irons, announced the closure of its Coushatta, Louisiana, plant after 31 years of operation (see fig. 4).
The closure, effective December 31, 1996, was part of a nationwide downsizing by Sunbeam that cut more than 6,000 jobs due to increased imports. With the closure of the Sunbeam plant, a major employer in Coushatta, the unemployment rate for Red River Parish rose to 23.7 percent in 1997. A total of approximately 520 workers lost their jobs, many with more than 20 years of service. According to local officials, most workers affected by the closure were white, female, and approximately 40 years of age. The closure of the Sunbeam plant also meant the loss of a $10 million annual payroll, a one-fifth decrease in Red River Parish’s gross income, and the erosion of its tax base. Coushatta’s unemployment rate was 9.1 percent in 2000. The workers who were laid off from the Sunbeam plant received a severance package and health insurance coverage for 26 weeks. Some dislocated workers have since reentered the workforce and are employed at the same wage or higher. Some found jobs in the service sector, including retail stores or restaurants, while others found jobs in manufacturing plants in nearby towns. For instance, soon after Sunbeam closed, another company opened a plant in a nearby town and hired 80 dislocated workers. Company officials said that more dislocated workers would have been hired if they had applied. Because the company requires a high school diploma or General Equivalency Degree (GED), it offers a computer-based training program to aid interested applicants in preparing for the GED, as well as other training geared to levels from third grade through the first year of college. Company officials said they have a strong incentive to train workers well. Local officials also said that the local education system is very poor, so some companies prefer to conduct on-the-job training for workers. As noted in table 6, in 1990, about 52 percent of people 25 years and over had graduated from high school.
The TAA and NAFTA-TAA programs made one-on-one vocational counseling and career assessments available to participants. After the Sunbeam plant closure, about 130 workers attended various classroom training programs during 1996 to 1997. Eleven of the former Sunbeam workers began on-the-job training, and 54 received Certificate of Continuing Eligibility forms, which can be redeemed for training for 2 years. The local training provider is a state technical college that provides vocational training in certificate and diploma programs and offers an associate degree program. Local program administrators told us that there is a clear trend that dislocated workers will drop out of training as soon as they can find a job rather than waiting to complete training at this school, which they said is not well regarded and has a low job placement rate. Another training provider, a private institute located more than 50 miles from the impacted community, also trained some of the dislocated workers (see fig. 4). This school offered training in tractor-trailer driving and clerical skills and reported a 96-percent job placement rate. School officials attribute their success to the students’ commitment, stating that dislocated workers who commute to attend the institute are serious about wanting to be reemployed. After June 1997, active enrollment in training programs declined because those who had completed their training programs had found employment. Enrollment continued to fluctuate as more of the Sunbeam workers’ income support benefits ended. Some dislocated workers interested in on-the-job training were referred to companies that had active on-the-job training contracts. According to officials, most of the Sunbeam workforce requested referrals to a new company that opened in Natchitoches in early 1997.
Economic and community development needs in Coushatta are met by the Coordinating and Development Corporation, a nonprofit entity that provides specialized services to northwest Louisiana’s parishes, municipalities, other industrial and economic development groups, and public organizations. The Coordinating and Development Corporation also provides assistance to individuals, especially dislocated workers. With the Louisiana Department of Labor, it conducted local workshops on job relocation, job skills, interviews, and resumes. The corporation invited various training vendors to participate by presenting programs on the training options available to displaced workers. These programs were held throughout December 1996 and continued as needed over the next year. When the Sunbeam closure occurred, the Coordinating and Development Corporation provided emergency assistance by setting up an office for 2 years in the Coushatta courthouse, because its closest permanent office was 35 miles away. Coushatta does not have its own comprehensive economic development plan but is included in the Red River Parish plan. Although no single organization focuses on economic development in Coushatta, numerous officials and entities are involved in economic development efforts. For example, the Coordinating and Development Corporation prepared a needs assessment for northwest Louisiana for the year 2000 that identified barriers to economic development for each parish; among those barriers, some new plants in the vicinity need water and sewer lines extended or expanded. In addition, following the Sunbeam plant closure in Coushatta, Louisiana Tech University prepared an estimated impact analysis for the corporation as a public service. The report estimated the probable economic impact of the plant’s closure on the economies of northwest Louisiana at almost $30 million.
It said that Red River Parish will bear over $16 million of the burden and will likely lose 35 “secondary” sector jobs, 175 trade and service sector jobs, and 81 government sector jobs. The Sunbeam facility was owned by the town of Coushatta and leased to Sunbeam. When the plant closed in 1996, the town took the building back and maintained it until another business was brought in to take over the facility. In 1999, a mattress and pillow factory opened at the Sunbeam site. Economic development officials and the Mayor were instrumental in getting the factory to locate in Coushatta. The company now employs approximately 200 workers from the Coushatta area. Many communities within Red River Parish—including Coushatta—are located within the state’s Enterprise Zones, which provide economic incentives to businesses that locate there, including tax credits and sales and use tax rebates to businesses hiring at least 35 percent of their new employees from targeted groups. In addition, a $2,500 tax credit is generated for each new job created. An additional $2,500 tax credit may be generated in the second year of employment, if the new employee is certified as removed from state financial assistance rolls. Owosso is located in Shiawassee County between two major cities, Lansing and Flint, and near major interstate highways connecting Canada to Mexico. About 20 percent of Owosso’s workforce works in manufacturing. Service sector growth has been concentrated in lower-paying jobs, particularly the retail sector, the second lowest paying sector in the state. Table 7 provides a profile of Owosso. Owosso lost several automobile parts manufacturers to Mexico in the mid-1990s. A total of 583 workers lost their jobs and were certified for TAA benefits. The owner of an airbag and seatcover manufacturing company closed his Owosso plant and laid off 362 workers because his firm was unable to compete with Mexican firms.
He also closed other manufacturing facilities in Michigan, laying off a total of 2,000 workers. He believed that NAFTA-related trade was a contributing factor to the closures. Civic and business leaders we interviewed said that the jobs lost did not have a major impact on Owosso’s economy because they were replaced by retail and food service jobs. However, a civic leader we spoke to was concerned with the decline in wage rates in Owosso. For example, according to this official, someone who once made $12 to $14 per hour in the automotive parts industry now may be earning $6 to $8 per hour. People wanting better wages must commute to Lansing or Flint, Mich. Owosso has a satellite office of Career Alliance Inc., headquartered in Flint, Mich., which is designated by the state of Michigan to administer the TAA, NAFTA-TAA, and other dislocated worker programs in the area. The state’s Rapid Response Team informs dislocated workers that Career Alliance Inc. representatives are available to help them with job counseling, assessment, placement, and supportive services. In addition, Career Alliance staff visit companies where layoffs are expected and brief workers about the types of assistance they offer. Figure 5 provides photos of downtown Owosso and the sign for the One Stop Career Center that served the dislocated workers. A local, private nonprofit college in Owosso also provides training services for the TAA and NAFTA-TAA programs. When they are notified that a plant intends to close, college personnel go to the plant to inform the workers about the occupational training programs they operate at the college and help them prepare resumes and applications for unemployment compensation. The college offers academic degree and certificate programs. A college official told us that the college’s primary goal is to make students employable.
The college has an advisory board composed of business leaders who help the college determine the training individuals need to obtain employment in the area. According to college officials, they place 100 percent of their students in jobs when they graduate, and about 87 percent obtain jobs in the field for which they were trained. In addition, a trucking company in nearby Corunna operates an on-the-job training program for the college to train dislocated workers to drive trucks. The 26-week training course provides students with a combination of classroom training at the college and hands-on training at the trucking company. The company owner said that trainees who completed this program have been successful in finding jobs. According to an Owosso civic leader, the community is enjoying a low unemployment rate and does not see a need to develop an economic development or job growth plan. Moreover, there have been difficulties in getting the local, township, and county officials together to develop joint and comprehensive approaches to economic development. He said that people do not organize at the local level to apply for assistance from the state and federal governments. In addition, no one entity at the local level identifies and seeks out financial assistance. An Owosso business leader told us that about 3 years ago, several business and government leaders joined together to plan and develop an industrial park in Owosso. It has taken 3 years to bring parties from neighboring Corunna and Owosso to the point that they can work on a project that would benefit them all. This group will be receiving financial assistance from the Michigan Economic Development Corporation to help fund water, sewer, and road connections to the park. Owosso’s community development director said that the town has taken advantage of federal and state grants and tax incentives to help stimulate economic growth in the community over the last 5 years.
For example, Owosso received federal Urban Development Action Grant funds to improve the city’s retail sector and a $3 million loan from a Department of Agriculture Natural Resource Conservation Service program to build a downtown hotel. Owosso also uses local and state tax incentives to attract new businesses. To help promote economic development, companies are offered tax abatements when they locate in Owosso or expand their workforce. The city also has a tax increment financing program, in which certain corporate taxes are used to upgrade the city’s road, sewer, and water systems. The communities of Washington and Chocowinity are located on the Pamlico River in Beaufort County in eastern North Carolina (see fig. 6). Washington, the largest town in the county, has a population of just over 10,000. Chocowinity, its immediate neighbor, has a population of less than 1,000. The unemployment rate for Beaufort County is 7.6 percent. Between 1997 and 1999, 3,880 workers were certified for TAA or NAFTA-TAA benefits due to layoffs and plant closures at several companies, including Singer Furniture in Chocowinity (see fig. 6) and Hamilton Beach/Proctor-Silex, Inc., in Washington. About 85 percent of dislocated workers from these companies lived in Beaufort County. Table 8 provides more details on the community characteristics. Between 1997 and 1999, Beaufort County experienced massive trade-related layoffs of workers. The closure of two manufacturing companies resulted in the displacement of more than 1,500 workers, many of whom were female, between 40 and 50 years of age, with minimal skills and low educational levels. The community’s unemployment rate rose immediately following these layoffs and has since declined. Community officials said that many workers have obtained employment, generally at a lower wage in service sector businesses such as retail stores or restaurants.
Economically, Beaufort County ranks lower than most North Carolina counties, with a lower per capita income, lower educational levels, higher unemployment, and higher poverty rates. In the past few years, Beaufort County has experienced plant closures and layoffs in excess of 1,500 workers. Several manufacturing companies or their employees filed petitions with the U.S. Department of Labor and were certified for NAFTA-TAA and TAA benefits following initial certification by the North Carolina Department of Commerce Employment Security Commission. Singer Furniture employees were the first to file a TAA petition in Beaufort County. Workers at Hamilton Beach/Proctor-Silex, Inc., followed shortly thereafter. Once the petition was approved, dislocated workers met with a group of community and program officials to discuss financial assistance and employment and training options. However, program officials in Beaufort had not dealt with the Department of Labor’s trade adjustment assistance programs prior to these layoffs, and initially they were unsure what benefits were available to workers or how to administer the programs. As a result of these problems and the length of time it takes for certification, many workers were provided benefits under other programs such as the Workforce Investment Act dislocated workers program. In Beaufort County, the local one-stop service center, or JobLink Career Center, located in Washington, provides counseling services, resume writing, needs assessment, training, labor market information, and employment opportunities. The JobLink Career Center houses and coordinates services provided by numerous agencies in North Carolina. JobLink officials strongly encourage dislocated workers to enroll in GED programs or adult basic skills training as soon as possible. Officials stated that approximately 22 percent of dislocated workers enroll in training; however, little data are available on the number of individuals who complete training.
Officials estimated that only about 25 percent of the dislocated workers who enter GED programs continue on to further training. One reason for this high dropout rate is that the local community college requires students to obtain their GED before entering any other type of training at the college. However, Beaufort County Community College does provide GED training and adult basic skills training free of charge prior to entering certificate or associate degree programs. Another reason for the high dropout rate is that income support benefit payments do not coincide with the period allowed for training, and dislocated workers drop out of training as soon as their income support benefits are exhausted. Numerous officials in Beaufort County stated that a major problem with TAA and NAFTA-TAA is that the program allows for 104 weeks of training, but financial benefits are provided for only 78 weeks. They said that this situation contributes significantly to dislocated workers dropping out of training before completing the course. Officials said that they strive to train dislocated workers for new occupations because, in their view, dislocated workers are no better off taking another job in industries threatened by increased trade, like textiles. However, opportunities to train for new occupations are limited because the community does not have any private training institutes, and Beaufort County Community College is the only training facility within a reasonable commute for county residents. The community college has developed courses to meet the needs of the dislocated workers for training to enter other occupations. However, many of the programs offered are for college students earning associate degrees and are semester based.
In addition, an estimated 25 percent of the dislocated workers from Hamilton Beach/Proctor-Silex, Inc., and more than 50 percent of the dislocated workers from Singer Furniture did not have high school degrees or GEDs; however, many of these individuals were reemployed immediately following the layoff at very low-paying, low-skilled jobs, according to local officials. Economic development is the most important issue for Beaufort County, according to county economic development officials, because the industrial base is changing. Community officials recognize the need to bring in other types of industries, since much of their business has been manufacturing, which is shifting due to changes in the global economy. Yet the area has few economic development efforts currently under way, although the county recently created the Beaufort County Economic Development Commission to begin addressing emerging needs. The commission does not yet have a strategic plan. Beaufort County has received state Industrial Recruitment Competition Funds, which are designed to provide incentives for companies to locate in economically distressed areas. Specifically, four companies in the county received a total of $400,000 in commitments that provided $1,000 for each job created. Beaufort County did not apply for any Community Development Block Grant funds over the last few years even though, as the Director of the state Department of Commerce’s Finance Center explained, Beaufort may have been eligible for them. He told us that these funds frequently come with “too many strings attached,” citing the stringent requirements that generally accompany these funds and the fact that community officials want more flexibility than the funds offer. In addition, the Community Development Block Grant application is about 40 pages long, extremely time consuming, and difficult to prepare. Generally, small communities such as Beaufort County hire a contractor to write the grant application.
Community officials said that Beaufort County applied for but was denied a CAIP grant. Officials also said they did not have sufficient staff resources to adequately develop the CAIP grant proposal. Officials stated that Beaufort County’s most pressing economic development needs are infrastructure and natural gas. Insufficient roadways to carry vehicles and trucks through Beaufort County, combined with the long distance from Interstate 95, contribute greatly to the slow economic development of the county. In addition, the county needs to build additional gas lines for industries before they will move into the area, according to local officials. One avenue of potential economic development is that the communities of Washington and Chocowinity are becoming a popular area for retirees from northern states. Officials explained that both towns are located along the Pamlico River, which has drawn a great number of boaters. The community has built a major retirement community with large and expensive homes, a marina, a golf course, and a restaurant. As a result, the community is now also building a supermarket and drugstore to meet the needs of these new residents. Located at the western tip of Texas, El Paso borders New Mexico and the Mexican state of Chihuahua. It is the fourth largest city in Texas (see fig. 7), with a population of 563,662, 77 percent of whom are Hispanic or Latino. Ciudad Juarez, El Paso’s sister city across the border, has a population of about 1.2 million. In 2000, their combined population was an estimated 1.9 million. Additional community characteristics are shown in table 9. In 1994, nearly half of El Paso’s 50,000 manufacturing jobs were in the apparel and textile industry. Since January 1994, 17,069 workers in El Paso have been certified as dislocated by NAFTA, with the majority having lost jobs in the apparel and textile industry.
Jobs were also lost in other sectors, such as electronics assembly and plastic injection molding, which, like apparel, involved labor-intensive, low-skilled jobs. El Paso’s proximity to Mexico further accentuated this nationwide trend. As a result, El Paso has the unfortunate distinction of having experienced the greatest number of NAFTA-related job losses in the United States. While El Paso has experienced a net increase in jobs since 1994, these new jobs have required skill levels and language abilities beyond the capacity of most dislocated workers in El Paso, who were Hispanic, female, single heads of household, over the age of 40, with less than a high school education and limited English proficiency. Most had worked for years in the apparel industry, earning relatively good wages and benefits at companies like Levi-Strauss (see fig. 7). When their factories closed and relocated abroad, they could not get similar jobs to replace them. At the same time, they were not qualified for the new jobs being created in El Paso, most of which required high school or postsecondary degrees and English proficiency. One dislocated worker told us that he had been earning $13.65 an hour plus benefits at Levi-Strauss and it was difficult to find another job at that pay rate with his limited education and English. In this regard, El Paso faced its greatest challenge in assisting its dislocated workers: it was not just a matter of training dislocated workers for new occupational skills. Many workers first had to attain basic English proficiency and make up for many years of missed education to earn a GED before they could start to learn a new occupational skill. By the end of 1996, more than 7,000 dislocated workers had filed for TAA or NAFTA-TAA benefits. 
At the same time, concern was growing that many dislocated workers who were coming to the end of their TAA or NAFTA-TAA benefits were still trying to learn English and earn their GED and had not received any occupational training that would help them get another job. These concerns escalated in 1997 and 1998 as the layoffs and plant closures continued, becoming “traumatic” as Levi-Strauss closed four El Paso plants in 1997, displacing 1,959 workers, and another two plants with 796 workers in 1999. At the same time, Sun Apparel, American Garment, and other apparel firms also closed plants in El Paso. The Chief Executive Officer of the Upper Rio Grande Workforce Development Board said that in prior years, garment workers had always been able to get a job in another plant. By the summer of 1998, he said that there was virtually nowhere to go for a new apparel job. In April 1997, the Texas Workforce Commission instituted the El Paso Re-employment Pilot Project to address the needs of massive numbers of dislocated workers. It served 450 dislocated workers who had been identified as needing intensive case management, job development, bilingual vocational training, intensive work-based language training, and needs-related payments (extended income support) as elements of a program that would increase the likelihood of employment for these workers. The project was designed to serve workers who had exhausted their trade benefits but had never entered vocational skills training. It also identified the need to bring employers into the project. However, with 72 trade-certified closings between January 1, 1994, and March 1, 1998, affecting 8,173 workers, it soon became clear that a far larger effort than the Texas state Pilot Project would be needed. In March 1998, the Upper Rio Grande Workforce Development Board submitted a grant application for a Department of Labor National Reserve Account Grant.
It requested 2-year funding in the amount of $55.5 million to provide administration, retraining, readjustment, supportive services, and needs-related payments to 3,500 eligible trade-dislocated workers in El Paso. The El Paso Proactive Reemployment Project, as finally approved, provided $45 million over 3 years for 4,500 dislocated workers. Workers who had been laid off between January 1, 1994, and December 31, 1998, and who had not yet received occupational skills training were eligible. The other component of the community’s response to the overwhelming numbers of dislocated workers was to develop the former Levi-Strauss Lomaland plant as a comprehensive One-Stop Workforce Preparedness Center and a One-Stop Capital Shop. The Greater El Paso Chamber of Commerce took the lead in this physical infrastructure project, which had broad community support and participation. The Chamber Foundation purchased the facility from Levi-Strauss in 1998 and converted it into the One-Stop Center. The Chamber also received a $1 million direct loan from the Community Adjustment and Investment Program and a $1.4 million grant from the Economic Development Administration to rehabilitate the building. In addition, Levi-Strauss gave the Chamber a $250,000 grant to fund its Workforce Development Division, which would oversee the implementation of the strategic plan for the center. The results of the efforts in El Paso to assist the trade-dislocated workers were mixed at best. Local officials said the trade adjustment assistance programs were overwhelmed by the large numbers of workers who continually entered the system. According to these officials, the Texas Workforce Commission did not have enough case managers to handle the inflow, and when they hired new case managers, they were not experienced enough to handle their heavy caseloads. Another problem was that all training had to be approved by commission headquarters in Austin, causing delays.
In 1997, the commission finally delegated approval authority to the commission’s local El Paso Trade Unit when it became logistically impossible to refer every case to Austin for approval. El Paso’s Trade Unit is the only one to have this approval authority, and the waiting time for approval of training declined from 12 weeks to 4 to 6 weeks. Commission officials said the current wait is about 3 weeks, a delay they attributed to the flow of funding. Officials said that the funding stream available to El Paso could not keep up with their needs and that they frequently ran out of training funds. Perhaps the greatest criticism of assistance efforts we heard related to the ineffectiveness of the training. According to program officials, El Paso’s training efforts were hindered by the lack of sufficient training infrastructure to meet the needs of its displaced workers. Many of these workers had low educational levels and little English proficiency. They needed bilingual training designed for adults. One private training institute we visited, cited as a model bilingual training provider by Texas Workforce Commission officials, used a training method of intensive English as a Second Language classes and GED classes in Spanish in the mornings, and occupational skills training in the afternoon. The occupational training started out in Spanish and shifted gradually to English as students’ proficiency increased. However, there were few such bilingual training programs in El Paso, and many dislocated workers languished in English as a Second Language and English-language GED courses without making enough progress to move on to occupational training courses, according to local program officials. El Paso received a $275,000 grant from EDA in 1998 to have an economic adjustment strategic plan developed by an economic consulting company. The plan, published in December 1999, confirmed that El Paso was following the national trend in making a transition to a service economy.
The plan focused on encouraging job growth in the near term in demand-related industries such as retail trade, healthcare, services, and construction, all industries that could potentially employ displaced workers. Over the longer term, the plan focused on the need for skilled workers and access to technology, especially information technology, as the principal components of adding value to the city’s economy. The city also received other assistance from EDA. EDA had previously awarded a grant of $500,000 for a revolving loan fund that had lain dormant. Working with a newly elected mayor, city officials reactivated this grant and obtained matching funds of $167,000 from El Paso County. The revolving loan fund has provided six loans to small businesses, helping to create or retain 45 jobs, according to city economic development officials. The EDA grant of $1.4 million to the Greater El Paso Chamber of Commerce Foundation to rehabilitate the former Levi-Strauss Lomaland plant, mentioned above, was the centerpiece of EDA’s economic adjustment assistance efforts in El Paso. The EDA Regional Director stated that this project was a successful model of a public-private partnership that addressed important economic adjustment needs. EDA, CAIP, the Greater El Paso Chamber of Commerce, and Levi-Strauss all participated in this effort. In addition, EDA awarded a grant of $1.2 million to a local advocacy group for dislocated women workers, La Mujer Obrera, that had initiated its own community development corporation, based on a community self-help model. The group obtained the needed matching funds from the city of El Paso and the Rural Development Finance Corporation to acquire and rehabilitate the building. CAIP also provided a $180,000 direct loan to this group for equipment and working capital for a Mexican restaurant they started in part of the building.
The group also plans a business incubator, a Mexican market, and other initiatives designed to create training opportunities and new jobs for displaced workers. In addition, the Hispanic Chamber of Commerce received a grant for $750,000 over 5 years from the Small Business Administration for a Women’s Small Business Border Center, which will be located at the group’s facility. CAIP has been very active in El Paso. Besides the two direct loans already mentioned, since 1997 it has provided more than $38.7 million in loan guarantees in El Paso through its partnership with Small Business Administration loan guarantee programs. Of this amount, CAIP provided 145 loan guarantees with a gross loan amount of $36.6 million under the Small Business Administration 7(a) program and 4 loan guarantees under the Small Business Administration 504 program valued at $2.1 million. El Paso also received two CAIP grants of $450,000 for two private training programs to train dislocated workers in locally needed occupations and place them in private sector jobs. The city of Martinsville, which lies in Henry County, is located in southwestern Virginia near the North Carolina state line. In Virginia, cities and counties are separate governmental entities. The economies of Martinsville and Henry County are highly dependent on the manufacturing sector, which mostly offers low-skill jobs in the textile and furniture industries. Table 10 presents a demographic and economic profile of Martinsville and Henry County. The manufacturing sector in Martinsville and Henry County has been in decline in recent years due in large part to increased foreign competition, which has resulted in a large number of job losses. Since 1993, Martinsville and Henry County have lost more than 6,000 jobs, the majority of which were in manufacturing.
Most of these job losses were trade-related, as indicated by the fact that more than 3,500 of the laid-off workers were certified as eligible for TAA or NAFTA-TAA benefits. The manufacturing decline in the Martinsville and Henry County economy culminated in December 1999 when one of the area’s largest employers, the Tultex Corporation, unexpectedly went bankrupt and closed its operations. Tultex, which manufactured knit goods, employed over 1,700 workers in Martinsville, all of whom lost their jobs. Most workers were given only a few days’ notice. None of the workers received severance packages, and most lost their health benefits. The Virginia Employment Commission sent its Rapid Response Team to assist the Tultex workers, making sure they were quickly enrolled for unemployment insurance and informed of available benefits (see fig. 8). The Tultex and other plant closings and layoffs have had a tremendous impact on the local economy. Unemployment in Martinsville went from 9.3 percent before the layoffs to 19.7 percent immediately afterward. Henry County also experienced a spike in unemployment, although not as severe as in Martinsville. In addition to massive job losses, the Tultex closure also significantly impacted Martinsville’s finances. For example, according to a city official, as a result of the Tultex bankruptcy, Martinsville lost $1.1 million in tax revenues in 1999. According to a local real estate agent, the housing market also declined, because homeowners have left the area for new jobs, and there are few buyers for these homes. In addition, local community leaders said that decreased incomes have had a negative effect on retail sales in the area. Local businesses that supplied Tultex and other closed plants also suffered. We found that many trade-impacted workers did not enroll in training.
According to our analysis of Virginia Employment Commission data, less than 20 percent of workers certified for TAA and NAFTA-TAA benefits in the Martinsville/Henry County area during 1999 and 2000 had enrolled in training. Virginia Employment Commission officials said that some workers decided to forgo training to search for new jobs to support their families. A number of trade-impacted workers who enrolled in a training program did not complete it. Most workers took classes at Patrick Henry Community College in Martinsville, which offers 1- and 2-year programs (see fig. 8). According to officials at the Virginia Employment Commission and the community college, some workers who enrolled in 2-year programs were forced to drop out when their extended income support benefits ended after 18 months. Other workers, many of whom had never completed high school, were required to take remedial classes before entering occupational training. Several of these workers could not complete remedial classes and 1-year occupational training before their income support benefits ran out. To give employers an incentive to hire unskilled workers eligible for trade adjustment assistance who do not participate in classroom training, the Virginia Employment Commission offers an on-the-job training program. Under this program, employers are reimbursed for half of a worker’s wages for a training period of up to 26 weeks. Employers eligible for the program must agree to employ the worker for at least 26 weeks after the training period has been completed. Virginia Employment Commission officials told us that few workers have chosen such on-the-job training. One reason they cite for low participation is that many of these positions are in the furniture and textile industries, where long-term job security is a concern. Virginia Employment Commission officials and several community leaders told us that there are few jobs in the area for workers who complete training. 
In some cases, workers who trained for traditional jobs such as bookkeeping could not find work, because all existing jobs were filled. In other instances, workers received training in high-tech occupations that do not yet exist in Martinsville and Henry County. Workers we interviewed who are currently enrolled in training said that they were resigned to the fact that they may have to seek jobs in their chosen professions in larger cities such as Greensboro, North Carolina, which is about 50 miles from Martinsville. Martinsville and Henry County have undertaken a number of economic development efforts to help the community recover from the massive layoffs of recent years. As part of an Economic Development District designated by the Economic Development Administration, the community is required to prepare an annual comprehensive economic development strategy. It also received extensive attention from the National Economic Council, which coordinated visits to the area by federal officials. The Economic Development District also has received a $60,000 grant from EDA to prepare a regional economic adjustment strategy that focuses on the adverse impacts of trade. Both strategies propose and prioritize projects, such as industrial parks and business incubators, that qualify for state and federal economic assistance. Since 1999, Martinsville and Henry County have received $1.7 million in EDA and Department of Agriculture grants. The communities also have been awarded $1.6 million in state grants and loans, and they qualify for special state tax incentives for businesses to locate or expand in the area. In addition, the local economic planning board, the Patrick Henry Development Council, has developed a strategy to promote economic development in Martinsville/Henry County that centers on recruiting and retaining industries, attracting new capital investment, and increasing tax revenues.
A number of efforts to implement plans for economic development are ongoing in Martinsville and Henry County. The Patrick Henry Development Council recently contracted with two consulting firms to develop a profile of the local workforce. The profile is intended to promote the workforce skills currently available in the area to prospective new businesses and to point out skill gaps that need to be filled to bring in new businesses offering stable, well-paying jobs. The council also has an ongoing campaign to attract new businesses, including running an ad in the Wall Street Journal and distributing a CD-ROM that promotes the area. The council has financed these projects through a $350,000 grant it received from the Department of Housing and Urban Development. Martinsville and Henry County have seen a net increase in jobs in recent years, according to the Economic Development District, but local officials are quick to point out that many new jobs are low skilled and low wage. Since 1993, Martinsville and Henry County have lost 6,364 jobs, mostly in the textile and furniture industries. During the same period, 7,043 new jobs were announced for a net gain of 679. The Chairman and Ranking Member of the Senate Committee on Finance asked us to follow up on our prior evaluations of federal trade adjustment assistance programs with case studies focused on the experiences of trade-impacted communities. Specifically, we examined (1) the impact of trade-related layoffs on these communities, (2) the experiences of these communities with dislocated worker assistance, (3) their experiences with economic adjustment assistance, and (4) the lessons learned from these communities’ experiences. To address all of these objectives, we conducted case studies in six communities. Between January and March 2001, we visited Watsonville, California; Coushatta, Louisiana; Owosso, Michigan; Washington and Chocowinity, North Carolina; El Paso, Texas; and Martinsville and Henry County, Virginia. 
We chose these locations on the basis of criteria designed to identify communities hardest hit by trade-related layoffs. First, we identified the total number of workers certified for trade adjustment assistance for fiscal years 1994 to 1999. We then analyzed the top industry sectors and divided cases by community, state, economic sector, and number of certified workers from fiscal years 1994 to 1999. We then added the number of workers certified across communities for these fiscal years; this yielded a list of about 300 communities that had more than 500 workers certified to receive TAA benefits. Next, we obtained 1999 population data for these communities from the Bureau of the Census Web site and calculated the percentage of individuals potentially affected by trade-related layoffs. Then, considering this factor as well as region, industry sector, and presence of federal program activity such as CAIP, we pared the list to 48 communities. After that, we obtained city and county unemployment data for fiscal years 1994 to 1999 and October 2000 from the Bureau of Labor Statistics Web site in order to determine whether the trade-related layoffs had influenced local unemployment levels and the current local unemployment level. We then ranked the communities on three dimensions: (1) the current unemployment rate, (2) the percentage of the local population that was covered by a Department of Labor certification, and (3) the change in the peak unemployment rate following a trade-related layoff from 1994 to the present. We called state and local officials to verify the nature and extent of trade-related job losses in these communities and the types of assistance that had been used, that is, training, community assistance, or grants provided by Commerce’s Economic Development Administration, CAIP, and the Department of Labor. We dropped one community from our list because its employer had recalled the trade-certified workers. 
When we visited each community, we interviewed local government officials; community leaders; training providers; and, if available, workers receiving benefits from trade adjustment assistance programs. During our visit to Watsonville, we also interviewed Department of Agriculture field officials regarding Agriculture’s economic assistance in the area. In each of these communities, we also obtained, when available, information on worker training programs, economic planning documents, and documentation regarding state and federal economic assistance. To further address the first three objectives, we met with federal and state officials who administer trade adjustment assistance programs for workers and economic assistance programs for communities and reviewed reports on worker and economic adjustment assistance. In Washington, D.C., we met with officials from several agencies, including the Department of Labor, which administers the TAA and NAFTA-TAA programs; the Department of the Treasury, which is the lead agency administering CAIP; and the Economic Development Administration, which administers the Trade Adjustment Assistance program for firms and provides economic assistance to distressed communities. We also discussed economic adjustment efforts in communities affected by military base closures with staff from the Office of Economic Adjustment in the Department of Defense to determine lessons learned that could be applied to our case study communities. In addition, we met with the Director of CAIP’s Los Angeles office to discuss CAIP assistance to potential case study sites. We also interviewed a number of state officials to discuss their administration of the TAA and NAFTA-TAA programs, as well as the state economic assistance available to case study communities. 
We reviewed our prior reports on trade adjustment assistance to workers and economic assistance to communities, as well as reports by other organizations on the impact of trade on workers and communities. In addressing the first three objectives, we also obtained and analyzed data from several sources. For the first objective on the impact of trade-related layoffs on the case study communities, we first obtained and analyzed Department of Labor data on the number of workers certified for TAA benefits in the six communities from 1995 through 2000. These data represent only workers potentially displaced for trade-related reasons, not actual jobs lost. However, they are the best indicator available of the potential effect of trade on workers. As an indicator of the impact of trade-related layoffs on the communities, we examined Bureau of Labor Statistics unemployment data for the periods before and after trade-related layoffs in the communities. For the second objective on the experiences of the case study communities with dislocated worker assistance, we obtained and analyzed information from two other Department of Labor databases. To determine the number of recipients and costs of trade readjustment allowance payments and training under TAA and NAFTA-TAA in the six communities, we obtained and analyzed Department of Labor data on services provided to participants under the programs for fiscal years 1995 to 2000. We also obtained and analyzed participant outcome data collected by Labor for the six communities and nationwide for 1999 and 2000, focusing on demographic characteristics, wages, training, and reemployment. For the third objective on the communities’ experiences with economic adjustment assistance, we obtained and analyzed information from EDA, the Department of Agriculture, and Department of Housing and Urban Development on the amount and types of assistance that the agencies provided to each community. 
We did the same with information from the Treasury on CAIP assistance to the communities. We conducted our work from November 2000 through June 2001 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Department of Commerce’s letter dated August 17, 2001. 1. The statement in the draft report on page 29 said that this grant was pending as of May 2001, which was consistent with the information provided to us by EDA. We have updated the text to reflect that it has since been awarded. 2. Based on clarifications provided by EDA, we have dropped this statement from the text. The following are GAO’s comments on the Department of the Treasury’s letter dated August 16, 2001. 1. The Department of the Treasury agreed with our characterization of CAIP funding as limited, but pointed out that more CAIP financing for communities was potentially available, if the communities requested it. We clarified the report by adding a statement that businesses and potential grantees must apply for CAIP financing. CAIP does not provide funding directly to eligible counties, but rather offers access to competitively awarded grants or enhanced access to credit through loans and loan guarantees. 2. We revised the text to add that the restaurant was also being used as a training facility. 3. The Treasury disagreed with statements in the draft report, based on our recent evaluation of CAIP, that CAIP loan guarantees made in partnership with the Small Business Administration would likely have been made anyway without CAIP’s participation. Our position, which remains unchanged, is that outcome measures and a monitoring system are needed to demonstrate the benefits CAIP has brought to communities. In addition to the person named above, Leyla Kazaz, Ed Laughlin, Chris Shine, Larry Thomas, Bill Hansbury, Bob DeRoy, Kathleen Joyce, and Lynn Cothern made key contributions to this report. 
This report reviews trade adjustment assistance and other assistance programs, such as the North American Free Trade Agreement Transitional Adjustment Assistance (NAFTA-TAA) program, to determine if they have helped distressed communities deal with the adverse impacts of trade. GAO conducted case studies in six such trade-impacted communities, all of which experienced major trade-related plant closures and layoffs in the mid- to late-1990s. Two communities lost a large percentage of local jobs in sudden plant closures and experienced economic crises. The other communities experienced rolling layoffs or a series of smaller plant closures that dislocated as many or more workers but did so gradually. Experiences in the communities GAO visited indicate that Trade Adjustment Assistance (TAA) and NAFTA-TAA assistance to dislocated workers, although substantial, could be implemented more effectively. Program administrators and training providers in each community said that the programs have structural problems that impede effective service delivery. One factor that influenced the implementation of training benefits in many communities is that a significant percentage of dislocated workers needed to earn a high school equivalency degree or take remedial courses before they could even start a training program. Case study communities' experience with economic adjustment showed that the assistance available to them was limited and that there are no easy answers to community recovery, even when funds are available. These communities had relied on low-skilled manufacturing jobs, which are disappearing, and now face the difficult task of diversifying their economies while addressing fundamental human capital issues. These communities' experiences with efforts to assist dislocated workers and adjust to changing economic conditions offer several lessons. 
Program administrators and training providers said that bureaucratic rigidities in dislocated worker programs limited their flexibility in addressing dislocated workers' diverse training needs. Also, local officials believe that dislocated worker training programs are more effective and job placements much higher when strong links exist between training and local business needs.
State is the lead agency responsible for implementing American foreign policy and representing the United States abroad. It staffs approximately 268 embassies, consulates, and other posts with over 8,000 Foreign Service positions overseas. Roughly two-thirds of these posts are in locations that qualify for a special salary differential to compensate officers for the harsh living conditions experienced there. The differential ranges from 5 to 35 percent of basic pay and is determined by a number of factors including extraordinarily difficult living conditions, excessive physical hardship, or notably unhealthy conditions affecting at least a majority of employees stationed at such a post. Figure 1 shows the distribution of overseas posts and positions by hardship differential. In general, tours of duty are 2 years in the United States and at 20 percent and 25 percent hardship posts. Tours at other posts are generally 3 years, although a number of posts in locations too dangerous for some family members to accompany an officer carry 1-year tours. FSOs serving abroad fall into two broad categories: generalists and specialists. FSO generalists help formulate and implement the foreign policy of the United States and are grouped into five career tracks: management, consular, economic, political, and public diplomacy. FSO specialists provide support services at overseas posts worldwide or in Washington, D.C., and are grouped into seven major categories: administration, construction engineering, information technology, international information and English language programs, medical and health, office management, and security. State requires its FSOs to be available for service anywhere in the world, and reserves the ability to direct officers to any of its posts overseas or to its Washington headquarters. However, directed assignments are rare. 
The process of assigning FSOs to their positions typically begins when staff receive a list of upcoming vacancies for which they may compete. Staff then submit a list of positions for which they want to be considered, or “bids,” to the Office of Career Development and Assignments (HR/CDA) and consult with their career development officer. The process varies depending on an officer’s grade and functional specialty: Entry-level officers’ assignments are directed by the Entry-Level Division of HR/CDA with little input from the posts or bureaus. Mid-level officers consult with bureaus and overseas posts to market themselves for their desired positions; subsequently, HR/CDA convenes panels to finalize the assignments. Senior-level officers are selected for their positions by the Director General, following approval of policy-level positions by a special committee. As with mid-level officers, HR/CDA convenes a panel to finalize the assignments. In recent years, State has taken a series of measures to address gaps and reallocate staff to emerging priority nations. In 2002, State implemented the Diplomatic Readiness Initiative (DRI) to address staffing and training gaps that, according to the department, endangered U.S. diplomatic readiness. Through the DRI—a 3-year, $197 million program—State hired 1,069 new foreign and civil service employees above attrition. However, as we previously reported, most of this increase was absorbed by the demand for personnel in Iraq and Afghanistan. In 2006, State introduced the Global Repositioning Program, which reallocated existing positions to emerging high-priority countries in the Middle East, Asia, and Africa. The primary focus of this program was to move political, economic, and public diplomacy officers from places like Washington and Europe to countries of increasing strategic importance such as China and India. 
Despite some progress since we last reported in 2006, State has continued to face staffing and experience gaps at hardship posts that may compromise its diplomatic readiness. Several factors contribute to gaps at hardship posts, including State’s overall staff shortage, which is compounded by the significant personnel demands of Iraq and Afghanistan, and a mid-level staffing deficit that has been reduced, but not eliminated. Moreover, State continues to experience difficulty in attracting officers to hardship posts and its assignment system does not explicitly address the experience gap at these posts. Staffing and experience gaps at hardship posts can diminish diplomatic readiness in a variety of ways, according to current and former State officials, including by reducing reporting coverage, weakening institutional knowledge, and increasing the supervisory burden on senior staff. State continues to face staffing and experience gaps at hardship posts, including many of significant strategic importance to the United States. First, State has faced difficulty in filling critical positions at hardship posts. In its FY 2007 Annual Performance Report, State identified staffing of critical positions—designated positions at the posts of greatest hardship (those with hardship differentials of at least 25 percent)—as a key priority, noting that such positions are often on the forefront of U.S. policy interests. As such, State established a target for fiscal year 2007 of filling 90 percent of such critical positions with qualified bidders by the end of the assignments cycle. However, State reported filling 75 percent of its critical positions, thereby missing its target. State further noted that it would be unable to fill more than 75 percent of critical positions until its resource needs were met. Subsequently, the department lowered its target to 75 percent for fiscal year 2008, which it reported it met. 
In addition to staffing gaps specific to critical positions, State faces its highest rate of vacancies at the posts of greatest hardship. As of September 2008, State had a 17 percent average vacancy rate at the posts of greatest hardship—nearly double the average rate of 9 percent at posts with no hardship differentials. Vacancies at posts we visited during our review included a mid-level public affairs position in Jeddah, Saudi Arabia, that was vacant as of September 2008 and, at the time of our March 2009 visit, was not expected to be filled until June 2009. Similarly, a section chief in Lagos, Nigeria, stated that prior to his arrival at post in August 2008, his position had been vacant for nearly a year. Although there were few vacancies in Shenyang, China, at the time of our visit, nearly one-quarter of the staffed positions had been vacant for 4 months or more before their current incumbents arrived. Beyond higher position vacancy rates, posts of greatest hardship face experience gaps due to a higher rate of staff filling positions above their own grades (see table 1). As of September 2008, about 34 percent of mid-level generalist positions at posts of greatest hardship were filled by officers in upstretch assignments—15 percentage points higher than the upstretch rate for comparable positions at posts with no or low differentials. Furthermore, as of the same date, 25 of 34 (over 70 percent) of all overseas generalists working two grades above their rank were located at hardship posts. At posts we visited during our review, we observed numerous officers working in positions above their rank. For example, in Abuja, Nigeria, more than 4 in every 10 positions were staffed by officers in upstretch assignments, including several employees working in positions two grades above their own. We also found multiple officers in upstretch assignments in Shenyang, including one mid-level consular position that officials stated has never been filled at grade. 
A number of factors lead to gaps at hardship posts, including: State’s overall staff shortage, which is compounded by the significant personnel demands of Iraq and Afghanistan; a persistent mid-level staffing deficit exacerbated by continued low bidding on hardship posts; and an assignment system that does not explicitly address the continuing experience gap at hardship posts. As of April 2009, State had about 1,650 vacant Foreign Service positions in total. Approximately 270 of these vacancies were due to State not having enough employees to fill all of its positions—a shortfall that has grown since our last report. Officers attending training or rotating from post to post without replacements to fill their positions accounted for most of the remaining 1,380 vacancies. As we reported in 2006, State implemented DRI with the intention of hiring enough new employees above attrition to allow staff time for critical job training—also referred to as a “training float”—and to respond to emerging crises. However, as we previously reported, this goal became quickly outdated largely due to staffing demands for Iraq and Afghanistan. In particular, due to the overall shortage of FSOs and the high priority of meeting Iraq and Afghanistan’s staffing needs, bureaus have had to identify nearly 670 positions to leave unfilled, or “frozen,” since 2005. As a result, State has generally been able to find candidates to fill positions in Iraq and Afghanistan—its top priority posts—but doing so has created gaps elsewhere, including at other hardship posts. For instance, positions that bureaus decided not to fill in the 2009 assignments cycle included several positions at hardship posts, such as an economic officer in Lagos, a management officer in Shenyang, and three or more positions each in Riyadh, Saudi Arabia; Mexico City, Mexico; and Moscow, Russia. 
State officials also noted that the pressing need to staff Iraq and Afghanistan has led officers serving elsewhere to interrupt or cancel their current tours and volunteer for service in those two countries, thereby leaving other posts with unexpected gaps. For example, a senior official stated that a key political/military officer position in Russia was vacant due to the incumbent volunteering for a year of service in Afghanistan. The senior official further stated that he anticipated it would be difficult to find a temporary replacement for the unexpected vacancy. Similarly, officials in the Bureau of East Asian and Pacific Affairs told us that an officer who received nearly a year of language training in Vietnamese canceled her tour in Vietnam to serve in Iraq. Although State recently received a significant increase in resources and has requested more, the extent to which this influx will allow the department to eliminate vacancies is unclear. State received funding for about 140 additional Foreign Service positions in fiscal year 2008. Subsequently, in fiscal year 2009, State received about 720 additional Foreign Service positions that, according to the department, largely allowed it to fill vacancies created by personnel serving in Iraq and Afghanistan and increases in language training. The department has requested nearly 740 additional Foreign Service positions for fiscal year 2010 that, according to State’s 2010 Congressional Budget Justification, will allow it to begin expanding its presence according to strategic priorities. However, given that about 1,650 positions were vacant as of April 2009, it is unclear if the approximately 1,600 positions received or requested will enable State to both eliminate vacancies and expand its operations as stated. 
While new resources may enable State to partially address vacancies and the department has reduced its mid-level deficit since 2006, the remaining shortage of mid-level officers represents a continuing experience gap. As of December 2008, State had 85 fewer mid-level generalist officers than positions (see table 2)—an improvement on the deficit of 316 that we previously reported. However, as of the same date, State faced a 28 percent greater deficit at the FS-02 level than it did in 2006, with mid-level positions in the public diplomacy and consular cones continuing to experience the largest shortages of staff overall. According to a senior State official, the department will continue to face a deficit at the FS-02 level until 2012. The official told us that the department plans to manage this experience gap by assigning officers in the FS-03 grade to stretch positions. However, as we discuss later in this report, positions filled by officers in upstretch assignments can compromise diplomatic readiness. State has also accelerated promotions of FS-03 officers to address the experience gap. For instance, State’s Five-Year Workforce Plan for Fiscal Years 2008 through 2012 projects that it will take about 8 years for officers hired in 2008 to be promoted to the FS-02 level. By contrast, officers promoted to the FS-02 level in 2003 had an average time-in-service of 10.7 years. However, according to State, additional acceleration of promotions is unlikely given the potential risks associated with promoting officers with insufficient experience. Although hardship posts have experienced an increase in bidding since we last reported, they continue to have difficulty attracting bids from experienced officers. Figure 2 shows the average number of bids on FS-02, FS-03, and FS-04 positions at overseas posts by differential rate for the 2008 summer assignments cycle. 
Since our 2006 report, the median average of all bids on hardship posts has increased by about 20 percent (from 5 to 6). The increase has been more pronounced for posts of greatest hardship, which received a median average of 4.5 bids per post in 2008—about 40 percent higher than the median average of 3.2 bids we previously reported. However, hardship posts continue to have difficulty attracting bids from experienced officers. Specifically, positions at hardship posts received a median average of 4 bids from at-grade officers, including a median average of 2.7 at-grade bids for positions at the posts of greatest hardship. By contrast, posts with no or low hardship differentials received a median average of over 9 at-grade bids. Furthermore, as of September 2008, hardship posts comprised over 90 percent (62 of 67) of posts that State classified as historically difficult to staff for 2009. Low bidding on hardship posts exacerbates State’s staffing deficits— particularly its shortage of mid-level consular and public diplomacy officers. Figure 3 shows the average number of bids per generalist career track for each hardship differential in the summer 2008 assignments cycle. While all generalist career tracks received about 3 to 4 times fewer bids at the posts of greatest hardship than at posts with no differentials in 2008, consular and public diplomacy positions received among the fewest bids on average—3.6 and 4.3, respectively. Given that State faces its largest staff shortages in mid-level consular and public diplomacy positions, low bidding for such positions at hardship posts increases the difficulty of filling them. State has taken steps in recent years to prioritize staffing of hardship posts. For example, in the 2007 assignments cycle, State assigned staff to hardship positions it considered critical—including in Iraq and Afghanistan—prior to assigning staff to positions elsewhere. 
Similarly, in the 2008 assignments cycle, State assigned staff to the posts of greatest hardship before assigning staff elsewhere. However, as we noted earlier in this report, hardship posts face a higher rate of upstretch assignments than posts with no or low differentials—an experience gap that State’s assignment system does not explicitly address. For example, while State’s instructions to bidders for the 2007 and 2008 assignments cycles did emphasize the staffing of hardship positions, the instructions did not differentiate between filling the positions with at-grade officers and filling them with officers below the positions’ grades. Although State’s instructions to bidders clearly state that employees bidding on stretch assignments compete against at-grade bidders, the low number of at-grade bids on hardship positions limits the likelihood that such positions will be filled by at-grade officers. Furthermore, in the assignments cycles for 2007 through 2009, State consistently permitted upstretch assignments to hardship posts 1 to 3 months prior to permitting upstretch assignments to posts with low or no hardship differentials, which may have encouraged officers with less experience to bid on hardship posts. According to State, upstretch assignments can be career-enhancing in some cases; however, the experience gap they represent—particularly at the mid-levels—can compromise diplomatic readiness. Current and former State officials, including recently retired ambassadors and former directors general who participated in a GAO expert roundtable, staff currently posted overseas, and officials in Washington told us that staffing gaps at hardship posts diminish diplomatic readiness in a variety of ways. According to these officials, gaps can lead to decreased reporting coverage, loss of institutional knowledge, and increased supervisory requirements for senior staff, which take time away from other critical diplomatic responsibilities. 
Senior management at selected posts had concerns that vacant positions caused an increased workload on officers at posts, which may detract from important functions. For example, the economic officer position in Lagos, whose responsibility is solely focused on energy, oil, and natural gas, was not filled in the 2009 cycle. The incumbent explained that, following his departure, his reporting responsibilities will be split up between officers in Abuja and Lagos. He said this division of responsibilities would diminish the position’s focus on the oil industry and potentially lead to the loss of important contacts within both the government ministries and the oil industry. A 2008 Office of Inspector General (OIG) inspection of Freetown, Sierra Leone, noted concern over the effect of a sudden vacancy when the embassy’s sole political/economic officer cut his tour short to serve in Iraq. This vacancy deprived the embassy of its only reporting officer and the resulting transition period caused officials in Washington to be dissatisfied with economic reporting on issues such as the diamond industry and its impact on political instability, money laundering, drug smuggling, and, perhaps, terrorism. Similarly, an official told us that a political/military officer position in Russia was vacant because of the departure of the incumbent for a tour in Afghanistan, and the position’s portfolio of responsibilities was divided among other officers in the embassy. According to the official, this vacancy slowed negotiation of an agreement with Russia regarding military transit to Afghanistan. Another potentially adverse effect of staffing gaps is that important post-level duties, such as reporting and staff development, may suffer from inexperience when entry-level officers are staffed to mid-level positions. 
While officials at post said that some officers in stretch positions perform well, others told us that the inexperience of entry-level officers serving in mid-level capacities can have a negative impact. For example, the economic section chief at one post we visited stated that reporting produced by an entry-level officer in a mid-level position lacked the necessary analytical rigor. The political section chief at the same post noted that a mid-level position responsible for reporting on terrorism was staffed by an officer serving two grades above his current grade level with no previous reporting experience. A 2008 OIG inspection of N’Djamena, Chad, found that difficulties attracting staff with the requisite skills and experience contributed to deviations from standard operating procedures. Another consequence of staffing gaps is that senior-level staff at posts with no experienced mid-level officers are diverted from key responsibilities by the need to supervise inexperienced entry-level staff. In 2006, we found that senior staff at several posts spent more time on operational matters and less time on overall planning, policy, and coordination than should be the case. On our recent visits, we found that there are still inexperienced officers taking on mid-level responsibilities and that these officers require more supervision and guidance from senior post leadership than more experienced mid-level officers would require; as a result, the senior officers have less time to perform high-level planning and policy implementation. According to officials we met with, inexperienced officers sometimes perform essential tasks such as adjudicating visas, identifying political trends, and assisting American citizens abroad; therefore, they often require guidance on how to carry out such activities. When senior-level officials must serve as the only source of guidance, post officials explained, they have less ability to plan and coordinate policy. 
For example, the ambassador to Nigeria told us that spending time helping officers in stretch positions is a burden and interferes with policy planning and implementation. The consular chief in Shenyang told us he spends too much time helping entry-level officers adjudicate visas and, therefore, less time managing the section. A 2008 OIG inspection of N’Djamena, Chad, reported that the entire front office was involved in mentoring entry-level officers and that this was an unfair burden on the ambassador and deputy chief of mission, given the challenging nature of the post. In addition to gaps in established positions, some State officials at overseas posts told us that there are not enough authorized positions to manage the heavy workload at some posts. These officials stated that even if the department had enough people to fill all current positions, it would still need additional positions and officers to fill them because the current workload outweighs the workforce. For example, a senior official at one post told us that her embassy did not have enough authorized management positions to support the rapid increase in staff for all government agencies located there. As a result, the ambassador placed a moratorium on the addition of any new staff from any agency until the embassy received more management officer positions. The official explained that the moratorium has prevented some agencies from adding staff to implement important programs related to health, education, and counternarcotics efforts. During our expert roundtable of former ambassadors to hardship posts, a former director general said that one of his former posts had so many visitors that four officers had to work primarily on visits rather than on their other responsibilities. In addition, according to the ambassador to Liberia, the embassy in Monrovia lacks adequate staff positions to meet its goals.
She said it is not uncommon for one section to work 20 hours of overtime in one week. The ambassador listed four new positions that she believes should be authorized but, according to her, are unlikely to be added in the next few years. The State OIG also commented on the need for reasonable growth in Monrovia in a 2008 mission inspection. A 2009 OIG inspection of Nouakchott, Mauritania, noted concern that without another political officer in the embassy, the post would not have the depth needed to adequately cover the rapidly evolving political situation and achieve department goals in the country. Similarly, officials in Jeddah, Saudi Arabia, noted that the creation and filling of a political/economic section chief position, as they have requested in their Mission Strategic Plan, would alleviate the current need for entry-level officers to report directly to the consul general. State uses a range of incentives to staff hardship posts, but their effectiveness remains unclear because the department has not evaluated them. Incentives to serve in hardship posts range from monetary benefits to changes in service and bidding requirements. In 2006, we recommended that State evaluate the effectiveness of its incentive programs for hardship post assignments, but the department has not yet done so systematically. Further, recent legislation will raise the cost of existing incentives, heightening the need for State to fully evaluate them to ensure resources are effectively targeted and not wasted. State has created a wide range of measures and financial and nonfinancial incentives to encourage mid-level officers to seek assignments to—and remain at—hardship posts around the world. These have included some measures designed for all hardship posts, as well as others tailored specifically to fill positions in Iraq and Afghanistan, posts State has declared to be the highest priority.
In addition to hardship and danger pay, incentives to bid on—and remain in—hardship posts, particularly those considered historically difficult to staff, include:

- The opportunity to include upstretch jobs on core bid list. Mid-level officers may include bids for upstretch positions in their “core bid” list, provided that the position is at a hardship post or the officer is serving at a hardship post when the bid list is due. State generally requires employees to maintain a list of six “core bids” on positions at their grade level. State often offers upstretch assignments as a reward for strong performance and as a career-enhancing opportunity.

- Eligibility to receive student loan repayments. Officers who accept assignments to posts with at least a 20 percent hardship differential or any danger pay allowance may be offered student loan repayments as a recruitment or retention incentive.

- Extra pay to extend tour in certain posts. Employees who accept a 3-year assignment at certain historically difficult to staff posts qualifying for the Service Need Differential (SND) program are eligible to receive an additional hardship differential over and above existing hardship differentials, equal to 15 percent of the employee’s basic compensation.

- One year of service at unaccompanied or certain difficult to staff posts. State has established a 1-year tour of duty at posts considered too dangerous for some family members to accompany an officer, in recognition of the difficulty of serving at such posts. Additionally, employees may negotiate shorter tours to historically difficult to staff posts, provided it is in the interest of the service.

- Consideration for promotion. State instructs the selection boards who recommend employees for promotion to “...weigh positively creditable and exemplary performance at hardship and danger posts…” However, the instructions only identify Iraq and Afghanistan by name.
State has taken special measures to fill positions in Iraq and Afghanistan, including assigning officers to these two posts before assigning them to other posts. Incentives for officers to serve in Iraq and Afghanistan include:

- Priority consideration for onward assignments. State has instituted a program whereby a Foreign Service employee may be selected for his/her assignment for 2010 at the same time as he/she is selected for a 2009 Iraq assignment.

- The option to serve in Iraq or Afghanistan on detail and extend current assignment. State allows officers to serve in Iraq or Afghanistan on detail from Washington or their current post of assignment, which provides financial and other benefits. For example, officers serving on detail from Washington, D.C., retain locality pay. Moreover, according to State officials, officers who leave their families at their current post of assignment to serve on detail avoid the disruption of moving their families and may extend their tour at their current post of assignment from 3 years to 4 years, which may be particularly attractive for officers with school-age children as it enables more educational continuity.

- Favorable consideration for promotion. State’s selection boards that recommend employees for promotion are expected to look favorably on service in Iraq and Afghanistan. In particular, State instructs the boards to “particularly credit performance in Provincial Reconstruction Teams and other regional operations in Iraq, which the President and Secretary of State have determined to be of the highest priority.”

In addition to incentives, State has rules requiring certain employees to bid on positions at hardship posts. These Fair Share rules require designated FSOs to bid on a minimum of three posts with a 15 percent or higher differential pay incentive in two geographic areas. Table 3 lists the various incentives and requirements across posts, based on hardship differential.
Although State offers a range of incentives, it does not routinely track or report on their total cost. In response to our request for cost information, State queried its payroll system and estimated that it spent about $83 million on hardship pay, $30 million on danger pay, and about $3 million on SND in fiscal year 2008. The cost information indicates that the amount spent on financial incentives has increased in recent years. According to the State OIG, in fiscal year 2005, the department spent about $65 million on hardship pay, $16 million on danger pay, and $3 million on SND. Separately, State reports the amount spent on student loan repayments to the Office of Personnel Management (OPM) as part of that office’s statutory requirement to report annually to the Congress on agencies’ use of student loan repayments. According to our analysis of data from OPM’s report for 2007, State repaid about $2.5 million of student loans to FSOs in that year. Although not all incentives cost money, they may present other tradeoffs. First, State officials report that the 1-year tour of duty to Iraq has been a useful recruitment tool. However, these and other officials told us that the 1-year tour length makes it difficult for FSOs to form the relationships with their counterparts in other governments necessary for the conduct of U.S. diplomacy. For example, a State official told us of a recent instance where the U.S. government needed information on a Middle Eastern country’s relationship with another nation in the region. However, none of the four political officers at the U.S. embassy in the country had sufficient contacts with the host government to obtain the information required. Consequently, the U.S. embassy needed to ask State headquarters to obtain the information from the host government by way of that country’s embassy in the United States, resulting in delayed reporting of the information. 
A former director general told us that 1-year tours result in a loss of institutional knowledge and program continuity. Second, the opportunity to bid on stretch assignments is an incentive because such assignments may be career-enhancing. However, as noted earlier in this report, senior officials may need to supervise and guide officers in stretch positions more than officers in positions at their current grade levels. State has not systematically evaluated the effectiveness of its incentive programs, despite recommendations to do so. Agency officials cited the difficulty of evaluating the impact of any single incentive because of the numerous factors involved, but State has not taken advantage of available tools to evaluate incentive programs. State has not generated sufficient data to evaluate the impact of the favorable consideration for promotion and the SND program in attracting employees to bid on, or remain in, hardship post assignments. State also did not comply with a congressional mandate to evaluate recent increases in hardship and danger pay. State’s efforts to evaluate hardship incentives remain insufficient. We previously reported that State created a number of incentives to address the growing number of vacancies at hardship posts to achieve its goal of having the right people in the right place with the right skills. However, in 2006, we reported that State had not measured the effectiveness of hardship incentives, and we recommended that State systematically evaluate the effectiveness of such measures, establish specific indicators of progress, and adjust the use of the incentives based on this analysis. State responded to this recommendation by adding a question on the impact of incentives to its biennial employee quality of life survey, but this step does not fully respond to our recommendation for three reasons. First, the survey’s incentive question is not specific enough.
State included the question “How important was each of the following in your decision to bid on overseas positions during the last assignment cycle in which you submitted bids?” in its most recent Quality of Life at Work survey. The question then listed 11 items, some of which are incentives (e.g., hardship pay), while others are generic aspects of overseas assignments (e.g., security). While the survey provides some limited information, the survey question does not ask about the influence of the incentives on officers’ willingness to bid on—and remain in—hardship post assignments. Further, by mixing incentives with other aspects of hardship post assignments, the question dilutes the focus on the incentives. Moreover, the list of incentives included is incomplete. For example, it does not ask employees about the extent to which the opportunity to include upstretch jobs on their core bid list or the favorable promotion consideration by selection boards affects their decisions to bid on hardship post assignments. Excluding some incentives from the survey hampers State’s ability to evaluate the effectiveness of programs for hardship post assignments individually and collectively. Second, the overall survey design has limitations: it prevents State officials from disaggregating responses by post, and it does not collect key demographic information. For example, the survey data do not allow State officials to determine which responses came from posts with no hardship differential, such as London, United Kingdom, and which came from posts of greatest hardship, such as Lagos, Nigeria. The survey also does not ask respondents for key demographic information, such as age and family status. The absence of this information makes it difficult to assess the effectiveness of the incentives as they apply to different posts. Further, the appeal of one incentive relative to another may differ based upon an officer’s personal circumstances.
Third, State did not establish specific indicators of progress against which to measure the survey responses over time. As previously noted, State tracks the percentage of critical positions filled with qualified bidders by the end of the assignments cycle. However, State has not attempted to link this information to the survey results, as suggested by government management standards. Because the survey’s incentive question is so vague, tracking it over time would not provide a useful indicator of progress for assessing the outcomes of State’s programs for hardship post assignments. State has not taken advantage of available tools to evaluate incentive programs for hardship post assignments. State officials maintain that external constraints make it challenging to evaluate the department’s incentive programs. They reported that, in their view, it is not possible to isolate the effectiveness of a single incentive because of the large number of factors staff consider when bidding on assignments. Specifically, the department cited the difficulty of capturing in a database the personal and family preferences and values that influence bid decisions. While this type of analysis poses challenges, statistical methods and procedures exist to help determine the extent of association between the key variables of interest while controlling for the effect of other measurable factors that could influence outcomes. Further, cost-effectiveness analysis—which attempts to systematically quantify the costs of alternatives and assumes that each alternative results in achieving the same benefits—can be an appropriate evaluation tool when dollar values cannot be ascribed to the benefits of a particular program.
While State has taken steps to improve its data collection effort, it does not collect sufficient information to determine whether the SND program or the instructions to selection boards to weigh service at hardship posts positively are having an impact on bidding on hardship posts. State has increased the amount of data it collects on the SND program since we last reported in 2006, but more information is needed to evaluate the program’s effectiveness. In 2006, we reported State was able to provide information on the number of officers who actually enrolled in the program, but was not able to provide information on the number of eligible officers who did not. Since we last reported on this issue, State has begun collecting data on which officers decline SND. However, State has not gathered the additional information necessary to measure the effectiveness of the program. According to a department official, State has considered the calculation of the worldwide rate at which officers extend their tours of duty to be a lower priority than other human resources initiatives. The State official said that it is not possible to evaluate the program’s effectiveness without this information. The manner in which State tracks employees serving in Iraq and Afghanistan makes it difficult to analyze the impact of the promotion consideration outlined in the instructions to selection boards. As previously noted, officers may serve in Iraq and Afghanistan on detail from Washington or another post of assignment; however, while they are on detail, State’s personnel database continues to reflect the officer’s current post of assignment. Furthermore, we reported in June 2009 that State does not have a mechanism for identifying and tracking its employees deployed to Iraq or Afghanistan and recommended the department establish policies and procedures to do so. 
The lack of readily available data on FSOs deployed to Iraq and Afghanistan may make it difficult to comply with a June 2009 congressional direction to State that it report on the promotion process at the department as it relates to any preferential consideration given for service in Iraq, Afghanistan, and Pakistan, as compared to other hardship posts. According to officials, State has not yet attempted to analyze the impact of the instructions to the selection boards on promotions. State has not complied with a congressional mandate to assess the effectiveness of increasing hardship and danger pay ceilings to recruit experienced officers to certain posts, hampering oversight of State’s use of the authority to increase such differentials. In December 2005, Congress passed legislation authorizing State to raise hardship differentials and danger pay allowances from 25 percent to 35 percent as a recruitment and retention incentive. The law required the department to (1) notify several congressional committees of the criteria to be used in adjusting the hardship and danger differentials and (2) study and report by 2007 on the effect of the increases in hardship differential and danger pay allowance ceilings in filling “hard to fill” positions. In response, State notified Congress in March 2006 that it would increase the threshold for posts to qualify for the 30 and 35 percent differentials and allowances under the present criteria it uses to calculate hardship and danger pay, rather than add new criteria. However, State officials confirmed that the department did not study the effect of these increased differentials and allowances on filling “hard to fill” positions and did not provide the required report to Congress. A State official said that, as of July 2009, the department had begun an effort to comply with the congressional mandate.
According to State’s comments on this report, the department expects to fulfill the mandate by October 2009. Despite the hardship and danger pay increases, these high-priority posts continue to have difficulties attracting bidders. Specifically, 17 of the 26 posts with either danger or hardship pay differentials above 25 percent were designated historically difficult to staff as of May 2008. The lack of an assessment of the effectiveness of the danger and hardship pay increases in filling positions at these posts, coupled with the continuing staffing challenges in these locations, makes it difficult to determine whether these resources are properly targeted. Several measures passed by Congress this year may raise the cost of hardship post incentives already in place and provide additional incentives. Legislation enacted in 2009 authorized fiscal year 2009 locality pay adjustments for members of the Foreign Service stationed overseas, comparable to the locality pay they would receive if their official duty station were in the District of Columbia, and appropriated $41 million for this purpose. According to a State official, the legislative change will result in an approximately 8 percent increase in basic pay for FSOs, beginning in August 2009. Locality pay is not itself an incentive for hardship post assignments. However, the resulting increase in basic pay will lead to an increase in hardship pay, danger pay, and SND, all of which are calculated as percentages of basic pay. Officials we interviewed, both at hardship posts and in Washington, D.C., cited the lack of locality pay as a deterrent to bidding on overseas positions. We have reported in the past that differences in the statutes governing domestic locality pay and differential pay for overseas service created a gap in compensation, which State officials, the American Foreign Service Association, and many officers have reported effectively penalizes overseas employees compared to employees based in Washington, D.C.
Congress also recently enacted legislation authorizing State to pay recruitment, relocation, and retention bonuses to all FSOs other than ambassadors and chiefs of mission who are on official duty in Iraq, Afghanistan, and Pakistan. Previously, Foreign Service generalists were not entitled to receive recruitment, relocation, and retention bonuses. As of the end of fiscal year 2008, there were about 340 Foreign Service generalist positions in Iraq, Afghanistan, and Pakistan. Further, State also plans to increase the number of FSOs in Afghanistan and Pakistan. The large—and growing—number of FSOs serving at these posts represents a potentially significant increase in recruitment, relocation, and retention bonus payments. The conduct of U.S. diplomacy compels State to assign staff to hardship posts where conditions are difficult and sometimes dangerous, but that nonetheless are at the forefront of U.S. foreign policy priorities. State has made progress since 2006 in reducing its deficit of mid-level officers and increasing the average number of bids at hardship posts. Despite these advances, State continues to face persistent staffing and experience gaps at such posts—especially at the mid-level—which can compromise its diplomatic readiness. The department has generally been able to fill its top priority posts in Iraq and Afghanistan, but key positions at other hardship posts remain vacant or are filled by officers who may lack the necessary experience to effectively perform their duties, potentially compromising State’s ability to advance U.S. international interests. Although State plans to address staffing gaps by hiring more officers, the department acknowledges it will take years for these new employees to gain the experience they need to be effective mid-level officers. The department plans to manage this experience gap in the near term by continuing to assign officers to positions above their current grade level. 
However, the frequent assignment of officers to stretch positions in hardship posts brings some risks, which will likely persist since State’s assignment system does not explicitly address the continuing experience gap at hardship posts as a priority consideration in making assignments. Furthermore, despite State’s continued difficulty attracting qualified staff to hardship posts, the department has not systematically evaluated the effectiveness of its incentives for hardship service. These incentives cost the department millions of dollars annually—an investment that will grow given recent legislative initiatives that raise FSO basic pay and expand the use of bonuses for recruitment, relocation, and retention. Without a full evaluation of State’s hardship incentives, the department cannot obtain valuable insights that could help guide resource decisions to ensure it is most efficiently and effectively addressing gaps at these important posts. To ensure that hardship posts are staffed commensurate with their stated level of strategic importance and resources are properly targeted, we recommend the Secretary of State take the following two actions:

- Take steps to minimize the experience gap at hardship posts by making the assignment of at-grade, mid-level officers to such posts an explicit priority consideration.

- Develop and implement a plan to evaluate incentives for hardship post assignments. Such a plan could include an analysis of how the hardship assignment incentive programs work individually and collectively to address the department’s difficulty in recruiting staff to accept—and remain in—positions at hardship posts.

State provided written comments on a draft of this report. The comments are reprinted in appendix IV. State generally agreed with the report’s findings, conclusions, and recommendations. For example, the department acknowledged that many hardship posts may face experience gaps.
State also provided us with a draft analysis of the impact of increased hardship and danger pay on staffing shortfalls and indicated that it plans to continue tracking employee attitudes toward hardship incentives through future surveys. While these are positive steps, they do not fully respond to our recommendation to implement a plan to evaluate hardship incentives. In addition, State provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of State and interested congressional committees. In addition, the report will be available at no charge on our Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-4268 or fordj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To assess the Department of State’s (State) progress in addressing staffing gaps at hardship posts since 2006 and the effect of any remaining gaps, we reviewed GAO and State Office of Inspector General (OIG) reports, as well as applicable legislation and budget documents; analyzed staffing, bidding, and position data; and interviewed officials in State’s Bureau of Human Resources, Bureau of Consular Affairs, and six regional bureaus regarding staffing issues. To determine State staff surplus/deficit figures, we analyzed State staffing data and compared the number of positions in each career track with the number of Foreign Service Officers (FSO) in each track. For example, if the total number of employees in the consular career track is 1,055 and the total number of consular positions is 1,866, the deficit in officers would be 811.
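The surplus/deficit comparison described above reduces to a subtraction per career track. The following is a minimal illustrative sketch, not the analysis code we actually used; the function name is our own, and the figures are the consular example cited in the text.

```python
# Minimal sketch of the per-career-track surplus/deficit comparison
# described in the methodology; figures are the consular example from
# the text, not a full staffing dataset.
def staffing_balance(officers, positions):
    """Positive values indicate a surplus of officers; negative, a deficit."""
    return officers - positions

# 1,055 consular officers measured against 1,866 consular positions.
print(staffing_balance(1055, 1866))  # -811, i.e., a deficit of 811 officers
```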
We analyzed bid data from the 2008 summer assignments cycle to determine the average number of bids per post, the median number of bids for each differential rate, and the average number of bids per generalist career track for each differential rate. In order to compare 2008 data with the 2005 data from our previous report and remain consistent, we used FS-04, FS-03, and FS-02 bid data. The bid data include the number of positions to be filled at each post and the number of bids received for each position. We used the bid data for the summer assignments cycle because, according to State officials, most employees are transferred during this cycle, compared to the winter cycle. Because State staffed Iraq through a separate assignments cycle in 2008 that involved a different bidding process than the regular summer assignments cycle, we did not include Iraq positions in our analysis. We used the following methodology to obtain our results:

- To obtain the average number of bids per post, we took the total number of bids received on all positions at each post and divided it by the total number of positions to be filled at the post. For example, in the 2008 summer assignments cycle, Lagos had 9 positions to be filled and received a total of 23 bids, resulting in an average of 2.6 bids for this post.

- To obtain the median number of bids at each differential rate, we arranged in ascending order the average number of bids for each post at the corresponding differential rate and used the middle average. For example, assuming there are 5 posts at the 25 percent differential rate and their average bids are 3, 5, 7, 9, and 16, the median of the average bids is 7.

- To obtain the average number of bids per generalist career track at each differential rate, we took the total number of bids received on all positions in each career track per differential and divided it by the total number of positions to be filled in the career track per differential.
For example, assuming there are 3 management positions at the 15 percent differential rate receiving a total of 12 bids, the average number of bids for management positions at 15 percent differential posts is 4.

We also analyzed data on all State Foreign Service positions as of the end of fiscal year 2008 to determine the vacancy rate for each post, the average vacancy rate for each differential rate, and the proportion of mid-level generalist positions filled by officers working above their grades for each differential rate. The position data include the number of positions at each post, the career track and grade of each position and, for positions that are staffed, the career track and grade of the incumbent. We used position data as of the end of the fiscal year because, according to State officials, most employees moving on to their next assignments have arrived at their new posts by that time. Due to limitations in the position data for Iraq, we did not include Iraq positions in our analysis. We used the following methodology to obtain our results:

- To obtain the vacancy rate for each post, we took the total number of vacant positions at each post and divided it by the total number of positions to be filled at the post. For example, assuming there are 10 total positions at a given post and 2 vacancies, the vacancy rate is 20 percent.

- To obtain the average vacancy rate for each differential rate, we took the sum of all vacancy rates for posts with a given differential and divided it by the total number of posts with that differential. For example, assuming there are 5 posts at the 25 percent differential rate and their vacancy rates are 10 percent, 12 percent, 15 percent, 17 percent, and 20 percent, the average vacancy rate is 14.8 percent.
- To obtain the proportion of mid-level generalist positions filled by officers working above their grades for each differential rate, we took the total number of generalist positions at the FS-03, FS-02, and FS-01 levels filled with officers in upstretch assignments for each differential and divided it by the total number of generalist positions at those levels with that differential. For example, assuming there are only 7 mid-level generalist positions at posts with a 20 percent differential and 2 are filled by officers in upstretches, the upstretch rate is 29 percent.

To assess the extent to which State has used incentives to address staffing gaps at hardship posts, we reviewed GAO and State OIG reports, as well as applicable legislative documents and guidance from the Office of Personnel Management (OPM) and the Office of Management and Budget; examined surveys conducted by State; analyzed State documents that outline incentives for hardship service, including those available to officers serving in Iraq and Afghanistan; collected data on participation in and funds expended on hardship incentives; and interviewed officials in State’s Bureau of Human Resources, Bureau of Administration, and six regional bureaus regarding State’s use of incentives. We obtained bidding data from State’s FSBID database and staffing and position data from State’s Global Employee Management System (GEMS) database. Since we have previously checked the reliability of both these databases, we asked whether State had made any major changes to them since our 2006 report. State indicated that it had not made major changes to either. We also tested the data for completeness and interviewed knowledgeable officials from the Office of Resource Management and Organizational Analysis and the Office of Career Development and Assignments (HR/CDA) concerning the reliability of the data.
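The bid, vacancy, and upstretch calculations described in the methodology above can be sketched as follows. This is an illustrative sketch, not the code we actually used for the analysis; all figures are the examples cited in the text, and the per-position bid counts for Lagos are hypothetical values chosen only to sum to the reported 23 bids.

```python
# Illustrative sketch of the methodology calculations described above.
# Figures come from the report's examples; the per-position bid counts
# for Lagos are hypothetical but sum to the reported 23 bids.
from statistics import median

def average_bids_per_post(bids_by_position):
    """Total bids on all positions at a post / number of positions."""
    return sum(bids_by_position) / len(bids_by_position)

# Lagos: 9 positions, 23 bids in total, an average of about 2.6 bids.
lagos_bids = [3, 2, 4, 1, 2, 5, 2, 1, 3]
print(round(average_bids_per_post(lagos_bids), 1))  # 2.6

# Median of per-post averages at one differential rate
# (example: averages of 3, 5, 7, 9, and 16 give a median of 7).
print(median([3, 5, 7, 9, 16]))  # 7

def vacancy_rate(vacant, total):
    """Share of a post's positions that are unfilled."""
    return vacant / total

# Example: 2 vacancies out of 10 positions, a 20 percent vacancy rate.
print(vacancy_rate(2, 10))  # 0.2

def average_vacancy_rate(rates):
    """Unweighted mean of per-post vacancy rates at one differential."""
    return sum(rates) / len(rates)

# Example: five posts at the 25 percent differential rate.
print(round(average_vacancy_rate([0.10, 0.12, 0.15, 0.17, 0.20]), 3))  # 0.148

def upstretch_rate(upstretch, total):
    """Share of mid-level generalist positions filled in upstretches."""
    return upstretch / total

# Example: 2 of 7 mid-level generalist positions, about 29 percent.
print(round(upstretch_rate(2, 7) * 100))  # 29
```

Note that the average vacancy rate is an unweighted mean of per-post rates, so a small post with few positions counts as much as a large one, which matches the methodology as described.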
Based on our analysis of the data and discussions with the officials, we determined the bidding and staffing data to be sufficiently reliable for our purposes. We also determined that the position data for all posts but Iraq were sufficiently reliable for the purposes of this engagement. Given the limitations associated with Iraq positions in the position data, we obtained a separate set of Iraq-specific position data from the Bureau of Near Eastern Affairs (NEA) to use to analyze staffing in Iraq. To assess the reliability of the Iraq position data provided by NEA, we asked State how the data are collected, entered, and checked. State indicated that the data are collected and maintained manually by authorized assignment personnel and constantly updated through coordination between NEA and human resources officials in Iraq, among others. Based on this assessment and our analysis of the data, we determined NEA’s Iraq position data to be sufficiently reliable for the purposes of this engagement. We conducted fieldwork in Lagos and Abuja, Nigeria; Shenyang, China; and Riyadh and Jeddah, Saudi Arabia, to study the impact of staffing gaps at selected hardship posts and State’s use of incentives for hardship service. In deciding where to conduct our fieldwork, we considered factors such as the historic difficulty of staffing a given post; the mix of incentives available; strategic importance; and recommendations from cognizant State officials. We selected the posts in Nigeria because of their historically low bidding, their 25 percent hardship differentials, and because each offers Service Need Differential (SND). We selected Shenyang because of the post’s 30 percent hardship differential, historically low bidding, and SND. We selected the posts in Saudi Arabia because, in addition to their historically low bidding and 20 percent hardship differentials, both were unaccompanied 1-year posts at the time of our review. 
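The rate calculations described in the methodology above all reduce to simple ratios. The sketch below restates them in Python; it is purely illustrative, the function names are ours, and the figures are the hypothetical examples given in the text, not actual State data.

```python
# Illustrative restatement of the methodology's rate computations.
# All numbers are the hypothetical examples from the text, not State data.

def average_bids(total_bids, positions):
    """Total bids received divided by the number of positions at a differential rate."""
    return total_bids / positions

def vacancy_rate(vacant, total):
    """Vacant positions divided by total positions to be filled at a post."""
    return vacant / total

def average_vacancy_rate(post_rates):
    """Mean of the per-post vacancy rates for posts sharing a differential rate."""
    return sum(post_rates) / len(post_rates)

def upstretch_rate(upstretch, total_midlevel):
    """Share of mid-level generalist positions filled by officers above grade."""
    return upstretch / total_midlevel

# Worked examples from the text:
assert average_bids(12, 3) == 4                            # 12 bids, 3 positions
assert vacancy_rate(2, 10) == 0.20                         # 2 vacancies, 10 positions
assert average_vacancy_rate([10, 12, 15, 17, 20]) == 14.8  # 5 posts at 25 percent
assert round(upstretch_rate(2, 7) * 100) == 29             # 2 of 7 positions
```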
In addition to our fieldwork, we conducted telephone interviews with senior officials in several additional hardship posts, including Bangladesh, Cambodia, Liberia, and Tajikistan. We also convened an expert roundtable of several retired senior State officials. The participants in the roundtable had all served as ambassadors to hardship posts in the last 10 years. Two participants were also former directors general. We conducted this performance audit from April 2008 through September 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 4 shows staffing surpluses and deficits by career track for Foreign Service generalists as of December 31, 2008. Table 5 lists posts that State designated as historically difficult to staff or eligible for Service Need Differential (SND) for the 2009 summer assignments cycle. The following is GAO’s comment on the Department of State’s letter dated September 2, 2009. While State’s analysis of hardship differential and danger pay increases and its request to OPM to include customized questions about hardship incentives in future surveys are positive steps, they do not fully respond to our recommendation to implement a plan to evaluate hardship incentives. State expects to fulfill the mandate to study and report on the effect of the increases in hardship differential and danger pay ceilings in filling “hard to fill” positions in October 2009. However, as noted earlier, State offers other incentives which it has not evaluated. Furthermore, we note that State’s last survey had several limitations.
For example, the survey lacked the requisite specificity, included an incomplete list of incentives, and did not collect key demographic information. Unless State addresses these issues, the survey’s utility as an evaluation tool will remain limited. Key contributors to this report include Anthony Moran, Assistant Director; Richard Gifford Howland; Aniruddha Dasgupta; Brian Hackney; Joseph Carney; Martin de Alteriis; Grace Lui; Michael Courts; Zina Merritt; Gloria Hernandez-Saunders; and John Brummet. Technical assistance was provided by Robert Alarapon, Gena Evans, and Thomas Zingale.

The Department of State (State) has designated about two-thirds of its 268 overseas posts as hardship posts. Staff working at such posts often encounter harsh conditions, including inadequate medical facilities and high crime. Many of these posts are vital to U.S. foreign policy objectives and need a full complement of staff with the right skills to carry out the department's priorities. As such, State offers staff at these posts a hardship differential--an additional adjustment to basic pay--to compensate officers for the conditions they encounter and as a recruitment and retention incentive. GAO was asked to assess (1) State's progress in addressing staffing gaps at hardship posts since 2006 and the effect of any remaining gaps, and (2) the extent to which State has used incentives to address staffing gaps at hardship posts. GAO analyzed State data; reviewed relevant documents; met with officials in Washington, D.C.; and conducted fieldwork in five hardship posts. Despite some progress in addressing staffing shortfalls since 2006, State's diplomatic readiness remains at risk due to persistent staffing and experience gaps at key hardship posts. Several factors contribute to these gaps. First, State continues to have fewer officers than positions, a shortage compounded by the personnel demands of Iraq and Afghanistan.
Second, while State has reduced its mid-level experience gap, the department does not anticipate eliminating this gap until 2012 and continues to face difficulties attracting experienced applicants to hardship posts--especially posts of greatest hardship. Third, although State's assignment system has prioritized the staffing of hardship posts, it does not explicitly address the continuing experience gap at such posts, many of which are strategically important, yet are often staffed with less experienced officers. Staffing and experience gaps can diminish diplomatic readiness in several ways, according to State officials. For example, gaps can lead to decreased reporting coverage, loss of institutional knowledge, and increased supervisory requirements for senior staff, detracting from other critical diplomatic responsibilities. State uses a range of incentives to staff hardship posts, but their effectiveness remains unclear due to a lack of evaluation. Incentives to serve in hardship posts range from monetary benefits to changes in service and bidding requirements, such as reduced tour lengths at posts where dangerous conditions prevent some family members from accompanying officers. In a 2006 report on staffing gaps, GAO recommended that State evaluate the effectiveness of its incentive programs for hardship post assignments. In response, State added a question about hardship incentives to a recent employee survey. However, the survey does not fully meet GAO's recommendation for several reasons, including that State did not include several incentives in the survey. State also did not comply with a legal requirement to assess the effectiveness of increasing danger and hardship pay in filling certain posts. 
Recent legislation increasing Foreign Service Officers' basic pay will increase the cost of existing incentives, thereby heightening the importance that State evaluate its incentives for hardship post assignments to ensure resources are effectively targeted and not wasted.
Reflecting national trends, VA and DOD prescription drug expenditures have increased substantially in recent years and at a much higher rate than their overall health care expenditures. (See figure 1.) In fiscal year 2000, VA purchased about $2.1 billion in pharmaceuticals—$256 million more than in fiscal year 1999—to provide 86 million prescriptions for veterans. In the same year, DOD purchased about $1.14 billion in pharmaceuticals—an increase of $174 million from fiscal year 1999—to provide 54 million military pharmacy and mail-order prescriptions for active duty and retired military service members and their families. Similarly, DOD’s TRICARE retail pharmacy program costs have skyrocketed—averaging 34 percent increases each year since 1995. A number of factors are likely to further drive up pharmaceutical spending, such as a decrease in private insurance pharmaceutical coverage for individuals eligible for VA or DOD benefits. This is particularly so for DOD—as of April 1, 2001, approximately 1.4 million retirees and their dependents received new retail and mail-order pharmacy benefits at an additional cost of about $800 million annually. Since 1997, VA and DOD have each adopted centralized formularies to help ensure that certain drugs are available at all veterans’ and military health care facilities as well as to control pharmacy benefit costs. VA’s national formulary currently lists about 1,100 drugs representing 254 classes, while DOD’s basic core formulary lists 175 drugs in 71 classes. Most of the drug classes in both VA national and DOD core formularies are open—that is, there are no restrictions on providers’ choice of which drug to prescribe for a patient. However, a few drug classes are closed or preferred, meaning that VA and DOD have varying restrictions on providers’ choice of drugs after determining that certain brand name drugs are therapeutic alternatives—that is, interchangeable in terms of efficacy, safety, and outcomes.
Having closed or preferred classes allows VA and DOD to competitively award requirements contracts for the lowest-priced drugs. In closed classes, VA and DOD providers must prescribe and pharmacies must dispense the contract drug, instead of therapeutic alternatives, to meet the terms of the contract and guarantee drug companies a high market share. Case-by-case exceptions are allowed, such as those for medical necessity. In preferred classes, VA and DOD providers and pharmacies are encouraged to use the preferred drug but may prescribe or dispense other drugs in the same class without obtaining an exception. Due to the complexity of the care issues and the need to garner clinical acceptance and support, VA and DOD can take as long as a year between the date their respective class reviews establish therapeutic interchangeability of competing brand name drugs and the date a contract is awarded. Generic drug contracts do not require drug class reviews—since competing products are already known to be chemically and therapeutically alike— and, therefore, take less effort and time—about 120 days. VA and military pharmacies use a number of purchasing vehicles to buy prescription drugs at substantial discounts from market prices. (See table 1.) For example, in 1999, about 81 percent of VA and DOD’s combined $2.4 billion in drug expenditures was for drugs bought through the federal supply schedule (FSS) for pharmaceuticals. The remaining expenditures were for purchases associated with the different requirements contracts VA and DOD have with drug manufacturers—each using leverage with manufacturers to achieve the lowest-priced product on its formulary. For nearly 2 decades, the Congress has urged VA and DOD to maximize federal dollars by sharing their health care resources. In May 1982, the Congress enacted the VA and DOD Health Resources Sharing and Emergency Operations Act (P.L. 
97-174), which generally encouraged the two departments to enter into agreements to share health care services in existing or newly built health care facilities. In 1996, the Congress began to specifically target cooperation in the purchasing and distributing of pharmaceuticals for the departments’ respective beneficiaries. A 1999 report by a congressional commission concluded that DOD and VA should combine their market power to get better pharmaceutical prices through joint contracts. More recently, the Veterans Millennium Health Care and Benefits Act (P.L. 106-117) required VA and DOD to submit a report on how joint pharmaceutical procurement can be enhanced and cost reductions realized by fiscal year 2004. In January 2001, VA and DOD submitted this report on efforts under way to maximize efficiencies in health care systems. Finally, the Veterans Benefits and Health Care Improvement Act of 2000 (P.L. 106-419) included a provision encouraging VA and DOD to increase to the maximum extent consistent with their respective missions their level of cooperation in the procurement and management of prescription drugs. VA and DOD have made important progress, especially this past year, to increase their joint pharmaceutical procurement activities. By May 2001, the departments expect to have more than doubled the number of joint procurement contracts entered into since our May 2000 testimony. And the departments estimate substantial cost avoidance from current and planned joint procurements—about $170 million per year. VA and DOD’s improved communication and collaboration on these efforts should further enhance their future performance. From October 1998 through April 2000, VA and DOD awarded joint contracts for 18 products, which accounted for about $62 million in combined drug expenditures in fiscal year 2000. (See table 4 in appendix I.) 
Although these drugs account for just 1.9 percent of the departments’ combined $3.2 billion drug spending in 2000, VA and DOD estimate these joint procurement discounts achieved sizeable cost avoidance—about $40 million in 2000. This is in addition to the significant cost avoidance the departments are already experiencing from their separate contracts. Last year, the departments began developing plans to merge these contracts as they expire and undertook other collaborative actions that will increase the number of joint procurements in the future. In May 2000, we testified that VA and DOD could significantly increase savings with expanded use of joint pharmaceutical procurement, especially for products in high-expenditure drug classes—a number of which we identified. Since that time, the departments have moved to more than double their joint pharmaceutical procurements and the expected financial benefits from these joint activities. As of January 2001, for example, VA and DOD have awarded an additional 12 joint contracts for commonly used generic drugs and are in the process of awarding another 14 joint contracts—including one for a brand name nonsedating antihistamine drug. (See tables 5 and 6 in appendix I.) In 1999, these drugs accounted for about $123 million of combined VA and DOD purchases. The departments estimate substantial discounts from these new joint procurements—an additional $30 million in drug purchasing cost avoidance each year for the 12 contracts already under way—and millions more should stem from the 14 solicitations that are under way. As of December 2000, VA and DOD had preliminarily reviewed the high-expenditure classes that we suggested could provide opportunities for additional joint procurements. As a result, they plan over the next few years to target for joint procurement 112 drugs—which accounted for about $400 million of their combined expenditures in 1999. (See table 2 for the major therapeutic areas and appendix II for details.)
Further, VA and DOD plan to propose more joint procurements after they complete their analysis of the suggested classes. Most of these planned procurements are for generic drugs, but some are for brand name drugs. For example, VA and DOD also recently completed class reviews on the therapeutic interchangeability of several brand name drugs used to treat sinus congestion and have found sufficient clinical basis to pursue one or two joint procurements. The departments estimate that discounts from joint procurements in the targeted classes will yield about $100 million in additional annual cost avoidance, although they have not yet estimated cost avoidance for some later year procurements. Also, DOD and VA agreed to merge 52 existing VA-only and DOD-only contracts as these contracts expire. (See table 7 in appendix I.) For example, the departments plan to merge their eight separate contracts for brand name drugs used to lower cholesterol, treat gastrointestinal problems, and control high blood pressure. These contracts yielded cost avoidance in excess of $184 million in fiscal year 2000, which will likely increase as the contracts are consolidated. But the departments have not yet estimated the consequent potential additional cost avoidance. While potential cost avoidance is difficult to estimate—especially given the high variability in drug market competition—it is likely that the more joint procurements VA and DOD enter into, the greater the financial benefits they will realize.
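The separate annual cost-avoidance estimates cited in the preceding paragraphs can be checked against the roughly $170 million total mentioned earlier. The sketch below is our own reconciliation, not an explicit breakdown the departments published; all figures (in millions of dollars) are the estimates quoted in this statement.

```python
# Rough reconciliation of VA/DOD annual cost-avoidance estimates ($ millions).
# Figures are the report's quoted estimates; the decomposition is our inference.
existing_joint = 40      # 18 joint contracts awarded October 1998 through April 2000
new_generic = 30         # 12 additional joint generic-drug contracts already under way
targeted_classes = 100   # planned procurements in the targeted high-expenditure classes

total = existing_joint + new_generic + targeted_classes
assert total == 170      # consistent with "about $170 million per year"

# Share of combined fiscal year 2000 drug spending covered by the first
# 18 joint contracts: $62 million of $3.2 billion.
share_pct = 62 / 3200 * 100
assert round(share_pct, 1) == 1.9
```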
But the geographical separation of the departments’ key pharmacy policy and acquisition staffs continued to hamper their day-to-day communications on joint drug activities and complicate their working relationships. Since May 2000, however, the departments have sought to remedy this. As a result, their key pharmacy officials now meet regularly at hub locations to discuss and further their joint procurement activities, and have developed a continually updated interagency report on their joint procurement activities. In August 2000, the DOD/VA Federal Pharmacy Executive Steering Committee began meeting regularly in San Antonio, Texas; Falls Church, Virginia; or Chicago, Illinois, to identify drugs or classes for joint contracting and discuss strategies based on ongoing clinical and formulary decisions. VA and DOD pharmacy policy and acquisition center staff also started holding frequent subcommittee meetings to focus on joint procurement issues. Similarly, in July 2000 VA and DOD acquisition center executives and managers began meeting regularly in Philadelphia or Chicago to review progress under a memorandum of agreement (MOA) to combine their buying power, reduce medical materiel costs, and eliminate contracting redundancies. In addition to implementing joint contracting decisions, the MOA provided that the departments also work together to cancel DOD’s distribution and pricing agreements (DAPA) with drug companies by converting them to VA’s FSS prices. Since the May 2000 hearing, a number of issues impeding progress on converting DAPA to FSS prices have been resolved. For example, a needed computer interface was established between the acquisition centers to expedite uploading FSS prices into DOD’s pharmaceutical ordering and purchasing system, and VA agreed to offset its normal surcharge on all FSS sales to military pharmacies. By January 2001, DOD was able to convert its DAPAs and now both agencies use the same FSS prices. 
As a result of last year’s progress on the MOA, DOD’s acquisition officials expect to reassign some of their employees to work on additional joint pharmaceutical contracting with VA. Finally, VA and DOD’s new Joint Contract Status report, which is maintained by VA’s pharmacy policy staff, details every drug and drug class with combined purchase potential. VA and DOD pharmacy policy and procurement staff use the report to monitor joint procurement progress and track results. The report is continually updated to list all current joint and separate (VA-only and DOD-only) contracts and those potential procurements that are dependent on clinical and formulary decisions. A November 2000 update to the report details about 140 unique drugs and drug classes, providing the contracting status for existing joint and separate contracts and, for many of these, estimates of award values and annual cost avoidance. The report also lists proposed and pending joint contracts—including several we had identified earlier as potential candidates—and the time frames for the various procurement stages. Projected time frames for when the departments’ separate national contracts can be merged are also included. Most VA and DOD joint procurements have been for low-cost generic drugs. While these drugs make up a larger share of the departments’ combined drug volume than brand name drugs, brand name drugs make up a far higher share of expenditures. For example, VA’s brand name drug purchases are 36 percent of volume but 91 percent of expenditures. Although garnering clinical support and provider acceptance on therapeutic interchangeability can make joint procurement of brand name drugs more complex and time-consuming, the cumulative financial benefit potential is far greater. Along with such inherent difficulties, VA and DOD cite such challenges as differences in their beneficiary populations and formularies that make it difficult for them to jointly procure brand name drugs.
However, some of these differences are diminishing, and the departments have already demonstrated in two cases that they can jointly procure brand name drugs and still meet their unique clinical and administrative needs. Also, the departments can take such actions as periodically seeking input from experts and providing annual reports to the Congress on their joint procurement activities to help enhance their efforts to address these challenges. According to VA and DOD officials, several aspects of their health care systems create challenges that limit their opportunities to jointly procure brand name drugs: VA and DOD officials cite differences in their patient populations—VA serves mostly older men, while DOD also serves younger men and women and children—as shown in figure 2. They said that the different populations result in dissimilar patterns of drug use and demand among their respective beneficiaries, resulting in fewer opportunities to combine drug requirements and solicit joint contracts. VA and DOD officials told us that differences in the scope of their national formularies also limit opportunities for joint drug procurements. VA’s national formulary currently lists about 1,100 drugs for inpatient and outpatient care representing 254 classes, while DOD’s basic core formulary lists 175 drugs for outpatient care in only 71 classes. Also, DOD’s military pharmacy formularies currently limit the drugs available to beneficiaries, whereas its TRICARE retail formularies are essentially unrestricted, so that virtually any drug can be obtained. DOD officials are concerned that joint contracts for particular brand name drugs would further restrict drug choice at military pharmacies, which, in turn, could cause beneficiaries to use the retail pharmacies for their drugs. This could drive up DOD’s overall pharmacy costs because its contractors’ drug costs are greater than the costs of its discounted military pharmacy drugs.
VA and DOD officials are also concerned that closing some classes would be clinically unacceptable for certain populations or individuals with certain conditions. For example, VA and DOD have been reluctant to seek joint contracts for orally inhaled corticosteroids (to treat asthma) because some DOD clinicians would not accept limiting drug choices in the oral inhaler class for clinical reasons, such as the special needs of children. Similarly, VA and DOD clinicians said they would not accept closing the selective serotonin reuptake inhibitor (SSRI) antidepressant class because they already have many patients maintained on one SSRI, and switching their SSRI drug therapy could have adverse treatment effects. Finally, DOD is concerned that its limited control of private provider prescribing practices could result in significant costs to educate and persuade these providers to prescribe drugs contracted under joint procurements. Unlike VA beneficiary prescriptions, which are all written by VA providers and dispensed by VA pharmacies, DOD beneficiary prescriptions are written by both military and private providers and dispensed by both military and retail pharmacies. In fiscal year 2000, about half of the 52 million prescriptions filled by military pharmacies were written by private providers, and TRICARE retail pharmacies filled 12 million prescriptions for DOD beneficiaries. Over the past decade, DOD’s patient profile and drug demands have become more similar to VA’s. DOD retirees now make up over 50 percent of DOD’s beneficiary population—a trend that is projected to continue—and account for most of DOD’s drug costs. In fiscal year 2000, close to 70 percent of military pharmacies’ drug costs were for retirees’ prescriptions. (See figure 3.) Further, DOD’s pharmacy benefits for 1.4 million retirees 65 and older expanded in April 2001, which will add an estimated $800 million per year to DOD’s pharmacy expenditures.
A significant portion of VA and DOD’s combined drug expenditures is already spent on drugs in classes used primarily to treat older patients. For example, in 1999, 8 of the top 10 high-dollar drug classes in each department were the same. (See table 3.) Most of the matching therapeutic classes are widely used to treat health conditions common to the elderly: high blood pressure, depression, ulcers, diabetes, and high cholesterol. As DOD’s older beneficiary population continues to increase, the use of drugs in these and similar classes and their related expenditures will increase as well. DOD and VA are expected to revise their formularies, which could increase the number of closed and preferred drug classes used in their health care systems. The larger their formularies, the greater the chance they will overlap and provide the two departments more opportunities to jointly procure brand name drugs. Recent legislation has prompted DOD to make plans to increase the number of drugs on its basic core formulary. In 1999, the Congress enacted legislation requiring DOD to establish a preferred drug formulary by October 2000, applicable to both military pharmacies and TRICARE retail and mail-order pharmacies. DOD missed this deadline and is developing regulations to implement this requirement later this year. The legislation also allows DOD to develop and implement a tiered retail and mail-order pharmacy copayment system that creates financial incentives for beneficiaries to use less costly formulary brand name and generic drugs. Once implemented, DOD beneficiaries would have full access to nonformulary brand name drugs but would be financially encouraged to choose less costly formulary brand name drugs available for free at military pharmacies or at lower out-of-pocket costs through mail-order or retail pharmacies. 
TRICARE contractor representatives told us that a uniform formulary—one that applies to both military and TRICARE pharmacies—and adequately tiered retail and mail-order pharmacy copayments are critically needed to help them and DOD better manage pharmacy benefit costs by steering use to less costly drugs. In addition, the Congress enacted legislation in 2000 requiring DOD to allow beneficiaries age 65 and older access to its retail and mail-order pharmacy benefits—in addition to their continued eligibility to use military pharmacies to obtain free medications. For the first time starting in April 2001, all beneficiaries are eligible for DOD’s comprehensive pharmacy benefits at the same copayment rates. However, DOD’s retail and mail-order pharmacies are comparatively more costly sources for the same drugs than its military pharmacies. (See figure 4.) An expanded basic core formulary would encourage all beneficiaries to obtain more of their prescriptions at the military pharmacies. VA is also revising its formulary management processes and will continue to change its formulary based, in part, on our earlier reviews and a study by the Institute of Medicine (IOM). The IOM study was done in response to congressional concerns that VA’s formulary may have been overly restrictive, with potentially negative effects on health care cost and quality. IOM’s study dispelled such concerns, concluding that VA was justified in creating its formulary and that well-managed formularies are a key part of modern health systems, with positive effects on cost and quality. IOM recommended in part that VA continue to prudently establish closed and preferred classes on its formulary and to use more contracts to carefully limit drug choices in more classes, based on quality and cost considerations. As VA’s and DOD’s formularies continue to evolve, the number of overlapping classes should increase, providing more candidates for joint brand name drug contracts.
VA and DOD have recently demonstrated in a few cases that, with flexible arrangements, they can procure brand name drugs at maximum discounts, while still allowing one or both departments to preserve drug choice. In August 2000, VA and DOD solicited bids for a joint procurement for one of two nonsedating antihistamines (NSA)—loratadine (Claritin) and fexofenadine (Allegra). To address a DOD concern and ensure that DOD beneficiaries would not have to change their current medications, the solicitation specifies that DOD beneficiaries already using an NSA would not have to switch if the departments jointly contracted for the other drug. Military pharmacies will only have to dispense the contracted drug for new patient prescriptions. For the nicotine patch class (for smoking cessation), VA and DOD have awarded a joint contract that requires only those VA and DOD facilities offering smoking cessation programs to use the contracted drug. Simply adding the contracted product to their formularies would have required VA and DOD facilities without such programs to stock the patches. The joint procurement allowed VA and DOD to realize an estimated $2.4 million in annual cost avoidance. For the angiotensin converting enzyme inhibitor (ACEI) and calcium channel blocker classes, DOD and VA have awarded contracts for preferred formulary drugs without closing the classes. While these contracts encourage providers to prescribe less costly contracted drugs for their patients, providers are free to prescribe noncontracted drugs without having to justify medical necessity. These contracts have resulted in an estimated $13 million in annual cost avoidance. For the luteinizing hormone-releasing hormone (LHRH) class of anticancer drugs, DOD negotiated a blanket purchase agreement (BPA) to receive the same price as VA’s contract price for Zoladex—a 33 percent discount off old prices.
In return, DOD has agreed to the preferential use of Zoladex to treat a subset of DOD’s population—adult prostate cancer patients. However, the BPA does not limit providers’ choice in prescribing LHRH drugs for women and children—a clinical concern that had caused DOD to avoid closing this class. DOD’s preferential use of Zoladex should achieve substantial cost avoidance. VA’s separate national contract on Zoladex—which closed the class on VA’s formulary—is achieving an estimated $22 million in annual cost avoidance. In addition, if VA and DOD determine that joint contracting for certain classes is not advantageous, they can use joint BPAs to achieve greater discounts without the more stringent use and time commitments required under a contract. For example, drugs under a joint BPA could be assigned preferential status on the departments’ formularies to encourage—but not require—providers to use the drugs. Competing drugs could also have equal status under multiple joint BPAs rather than closing a class. For example, VA negotiated discounts for SSRI antidepressants with three drug companies under individual BPAs. These BPAs were subsequently extended to DOD. Unlike contracts, BPAs do not require long-term commitments. VA, DOD, or the manufacturer can terminate BPAs with 30 days’ notice. While joint BPAs may not always realize the deep discounts provided under joint contracts, they could reduce costs nonetheless. DOD can work with its TRICARE managed care support contractors to encourage nonmilitary providers to prescribe the contracted drugs included in DOD’s developing uniform formulary as well as inform beneficiaries about the cost benefit to them. About half of the 52 million prescriptions dispensed by military pharmacies in fiscal year 2000 were written by nonmilitary providers treating DOD beneficiaries. 
DOD's TRICARE contractors have large, nationwide networks of providers; they also administer benefits and pay claims to non-network providers caring for DOD beneficiaries. Contractor representatives told us that they could disseminate key information about DOD's uniform formulary, once it is developed and implemented, on their provider Web sites and provide beneficiaries with formulary pocket cards to take along on their medical appointments. Patients would also be motivated to use drugs on the formulary because such use reduces their out-of-pocket costs. Other managed care pharmacy experts told us that these types of outreach efforts are a necessary and routine part of pharmacy benefit management. According to these experts, the additional administrative effort and cost to reach out to providers and beneficiaries will be more than offset by the financial benefits of less costly drug procurement and utilization.

In our view, periodic expert input and congressional review could help sustain the important progress VA and DOD have made to address the challenges they face in jointly procuring drugs. While various experts in managed care pharmacy—including several responsible for the IOM study—agreed that the differences in VA's and DOD's demographics and health systems are not insurmountable obstacles to joint procurements, they were generally sympathetic to the clinical and operating challenges ahead as the departments continue to expand their efforts. Also, several experts told us that the departments' efforts might be enhanced by periodically conferring with private managed care pharmacy experts in order to exchange information, experiences, and lessons learned that are relevant to the departments' joint procurement plans and efforts.

External reporting could also help bolster VA and DOD's efforts to enhance their joint procurement activities—a general finding we reported to the Congress in May 2000.
At that time, we recommended that the departments provide information to the Congress on their resource sharing activities—including initiatives such as joint purchasing of pharmaceuticals—to help the Congress and the departments weigh the advantages of such joint activities from a federal perspective rather than from each agency's standpoint. Moreover, as part of this reporting, VA and DOD could provide details on their ongoing and planned joint procurements relative to the departments' top-ranking drug classes by volume and expenditures. Also, they could report on the proportion that joint procurements represent of the departments' combined pharmaceutical expenditures and volume, including the annual cost avoidance due to joint procurements. Such reporting would help facilitate congressional oversight of the departments' efforts to increase their cooperation in the procurement and management of prescription drugs, which has been legislatively encouraged.

VA and DOD have also made important progress in their efforts to conduct a DOD consolidated mail outpatient pharmacy (CMOP) pilot for evaluating the merits and feasibility of using CMOP centers systemwide. In our May 2000 testimony, we suggested that DOD consider using VA's highly efficient CMOPs to reduce its dispensing costs. In January 2001, DOD determined that it is feasible to develop the necessary computer interface between military pharmacies and CMOP centers, but other pilot details—including time frames for its implementation—have not yet been developed. If funded and done promptly, the pilot would provide VA the lead time needed to plan for and begin building new CMOP facilities to accommodate DOD's workload in the event that DOD decides to use CMOPs systemwide.

In recent years, pharmacy officials have considered various options for moving DOD's 23 million per year refill prescription workload out of military pharmacies, including using VA's CMOP centers.
VA has realized significant financial and operating benefits by using its seven CMOP centers to handle its refill prescription workload instead of using VA hospitals. (See appendix III for a description of VA's CMOP operations.) In May 2000, we testified that DOD's use of VA's CMOP centers likewise could reduce drug dispensing costs and provide other operating benefits. DOD generally agreed with this proposition and with a proposed pilot test of CMOP use to develop information on potential cost avoidance and related matters. Also, by using CMOPs, DOD would likely achieve operating benefits similar to those realized by VA. For example, CMOP automated technologies have enabled each full-time CMOP employee to dispense between 50,000 and 100,000 prescriptions annually, compared with about 15,000 prescriptions dispensed by VA's pharmacy employees. Using CMOP centers to boost the efficiency of DOD's refill process might help offset the shortages of qualified pharmacists and other staff at its military pharmacies. DOD also expects that by freeing up its military pharmacists from the labor-intensive task of dispensing prescriptions, they would have more time to work with medical staff and patients toward safer, more effective drug use. CMOP centers also have the benefit of ensuring quality—with their bar-code technology, they have achieved a near error-free dispensing rate. Other potential benefits from using VA's CMOP centers include improved customer service. By reducing military pharmacies' refill workload, pharmacists would have more time to fill initial prescriptions and thus reduce customer waiting times. Beneficiaries have the convenience of receiving refills by mail rather than picking them up at military pharmacies.

After conducting an assessment of the costs and time required to develop a computer interface between DOD's military pharmacies and VA's CMOP centers, DOD plans to seek funding for the project.
However, DOD and VA have not developed plans for how or when to address other significant operational and financial issues that must be worked through to ensure a successful pilot program.

DOD had several concerns in deciding whether to conduct a CMOP pilot with VA. Primary among these concerns was determining the costs and time needed to develop an interface that would allow DOD to electronically transfer millions of refill prescriptions from its military pharmacies to the CMOP centers and allow the centers to confirm the status of each refill. In January 2001, DOD, in consultation with VA, completed a preliminary review of the information technology requirements and determined that this effort should take about 9 months of work by pharmacy information technology specialists and cost roughly $640,000. DOD's pharmacy programs director told us that, considering the reasonableness of the cost and time estimates, establishing a DOD-CMOP interface is no longer considered a major obstacle and that he is seeking internal funding for the interface.

According to VA and DOD, other significant operational and financial issues will need to be worked through if DOD decides to adopt CMOP use systemwide. For example, VA would have to plan for and build the equivalent of two new CMOP centers to accommodate DOD's estimated refill mail-out workload of more than 20 million prescriptions. According to VA officials, the two new CMOP centers for DOD would require 2 to 3 years to build and cost about $27 million. Yet both DOD and VA officials agree that such costs could be significantly reduced if existing VA- or DOD-owned building space could be retrofitted for CMOP's high-technology equipment and production lines. Unused warehouses and aircraft hangars, for example, might have the 75,000 square feet of open floor space VA's CMOP design requires. Another DOD concern is that adopting CMOP use could adversely affect military medical readiness.
If, for example, DOD's prime vendors' drug sales to military pharmacies are reduced with CMOP use, then surcharge revenues generated by such sales and used for medical logistics and readiness planning would likewise be reduced. As we testified in May 2000, this concern could be addressed if DOD's prime vendors directly supply the CMOPs with drugs needed to fill DOD beneficiaries' prescriptions—but the departments need to decide on a mutually acceptable course of action.

VA and DOD officials told us that an interagency memorandum of understanding or sharing agreement would need to be established to do the pilot program and address these and other joint operational and financial concerns. Such an agreement would cover the various details governing DOD's use of VA's CMOPs for processing and mailing out military pharmacy refill prescriptions to DOD beneficiaries. For example, officials anticipate that an agreement will include provisions to accommodate DOD's medical readiness concerns. However, DOD and VA have not established time frames for addressing the remaining issues in order to finish planning so that the pilot can begin. VA and DOD's existing sharing agreement governing their joint pharmaceutical and related medical procurement activities could be used for the CMOP pilot. Signed in 1999, this memorandum of agreement (MOA) provided for such future joint department activities by executing and adding appendixes to spell out mutual commitments and responsibilities. Alternatively, a new agreement could be drawn up for the CMOP activity.

VA and DOD have made important progress, particularly this past year, in their collaborative efforts to jointly procure drugs to help control spiraling prescription drug costs. Their awarded joint contracts and planned joint procurements are expected to reduce the departments' total drug costs by almost $170 million a year.
This is in addition to significant cost avoidance under the departments' separate contracts—cost avoidance that will likely increase as the contracts are combined in the future. While their joint procurement efforts have been impressive, to date the departments have largely targeted generic drugs, which make up less than 10 percent of their combined expenditures. More dramatic cost reductions could be realized through procurements of high-cost brand name drugs, although in doing so, it may be more complex and time-consuming to garner the necessary clinical support and provider acceptance on therapeutic interchangeability. Nonetheless, DOD's greatly expanded retiree drug benefit and both departments' developing formularies should provide added joint procurement opportunities for such drugs. In particular, DOD needs to complete development of a uniform formulary of preferred drugs among its health system's pharmacy sources to better manage and control drug use. Also, the departments have demonstrated that flexible approaches to developing joint solicitations can take into account differences in their health systems while still maximizing drug discounts. And DOD can work with the TRICARE contractors to help influence nonmilitary providers and their patients to use contracted drugs. This will become particularly important once DOD develops its uniform formulary of preferred drugs. In our view, their joint activities could be further enhanced by periodically conferring with private managed care pharmacy experts and reporting to the Congress on their joint procurement activities. DOD and VA need to ensure that high-level attention remains focused on their joint drug procurement and distribution activities as leadership changes under the new administration occur at the departments. In the same regard, VA and DOD have also made progress in their efforts to conduct a CMOP pilot.
DOD's use of VA's CMOPs to handle its large prescription refill workload would result in drug dispensing cost reductions and better use of limited resources. To accelerate the pilot, however, VA and DOD need to develop an action plan with formal commitments. The sooner the pilot proves feasible, the sooner DOD can begin to realize the financial and quality-of-care benefits associated with the transfer of its refill workload.

In view of the leadership changes under way at DOD and VA, we recommend that the departments sustain the momentum gained this past year by jointly procuring all brand name and generic drugs for which such procurement is clinically appropriate and cost effective. Also, to help build on the departments' progress with joint drug procurement and distribution activities, we recommend that the Secretaries of Defense and Veterans Affairs ensure that the Acting Assistant Secretary of Defense (Health Affairs) and VA's Under Secretary for Health take the following actions:

- as part of the departments' annual reporting to the Congress on resource sharing activities, provide information on ongoing and planned joint procurements—including the volume and expenditures relative to the departments' top-ranking drug classes and total drug expenditures and the consequent annual cost avoidance—as well as on progress toward implementing a CMOP pilot;
- consider the benefits of periodically conferring with private managed care pharmacy experts to exchange information, experiences, and lessons learned that could be relevant to the departments' joint drug procurement activities; and
- work together to move ahead promptly on the CMOP pilot and develop an interagency agreement governing the pilot's operation, including actions needed to provide added CMOP capacity should DOD decide to use the CMOPs systemwide.
To further mitigate the remaining challenges to joint drug procurement that are unique to the military health care system, we recommend that the Secretary of Defense ensure that the Acting Assistant Secretary of Defense (Health Affairs) take the following actions:

- complete the development and implementation of a uniform formulary of preferred brand name drugs applicable to military hospital, TRICARE retail, and mail-order pharmacy programs, including the use of tiered retail and mail-order pharmacy copayments to encourage providers and beneficiaries to use formulary drugs; and
- work with TRICARE contractors to better inform DOD nonmilitary providers and their patients about the uniform formulary in order to encourage providers to prescribe and beneficiaries to use less costly formulary drugs throughout the military health care system.

DOD and VA reviewed and separately commented on a draft of this report. Each concurred with the report and its recommendations. The departments stated their commitment to sustaining and building on the progress already made in jointly procuring drugs whenever clinically feasible and cost effective and in their drug distribution activities. The departments agreed, moreover, to annually report to the Congress on the status of their joint drug procurements and the CMOP pilot and to periodically confer with private managed care pharmacy experts to exchange information and lessons learned relevant to their joint procurement activities. In particular, VA stated that a DOD/VA meeting with private managed care pharmacy representatives and buying groups will take place by November 2001 to discuss strategies for procuring pharmaceuticals. The departments also stated their intention to move ahead promptly on the CMOP pilot and finalize an interagency agreement. VA anticipates this would be completed by July 2001. According to the departments, the agreement will outline plans and actions needed should DOD decide to use VA's CMOPs nationwide.
Also, DOD has funded and expects to complete by March 2002 its work to establish a computer interface between a military pharmacy and a CMOP. Once the interface is developed, a pilot between a yet-to-be-designated military pharmacy and a VA CMOP is targeted to begin in March 2002, according to VA. Lastly, DOD agreed to complete the development of a uniform formulary of drugs applicable to its military hospital, TRICARE retail, and mail-order pharmacies. DOD also agreed to work with the TRICARE contractors to encourage DOD's nonmilitary providers and their patients to use the preferred, less costly formulary drugs. The full texts of the departments' comments are reprinted as appendixes IV and V.

We are sending this report to the Honorable Donald H. Rumsfeld, Secretary of Defense; the Honorable Anthony J. Principi, Secretary of Veterans Affairs; appropriate congressional committees; and other interested parties. We will also make copies available to others upon request. Should you have any questions on matters discussed in this report, please contact me at (202) 512-7101. Other contacts and staff acknowledgments are listed in appendix VI.

As reported in our May 2000 testimony, from October 1998 through April 2000, VA and DOD awarded 18 joint contracts—mostly for generic drugs. If not for these contracts, VA and DOD estimate that these purchases could have cost $102 million in fiscal year 2000. Instead, actual costs were $62 million—about 1.9 percent of the departments' combined $3.2 billion drug spending in fiscal year 2000—an overall cost avoidance of 39 percent. Table 4 presents information on the 18 contracts. Since our May 2000 testimony, VA and DOD have more than doubled the number of joint procurements. That is, from May 2000 through April 2001, VA and DOD awarded or were soliciting 26 joint contracts. This includes joint procurements for 25 generic drugs and one brand name antihistamine drug.
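The fiscal year 2000 cost-avoidance figures reported above for the 18 earlier joint contracts can be checked arithmetically. The following is a minimal sketch; the dollar inputs come from the report, while the variable names and derived percentages are ours:

```python
# Arithmetic behind the reported FY 2000 cost-avoidance figures
# for the 18 joint contracts. Dollar inputs are from the report.

estimated_cost = 102e6        # estimated cost without the joint contracts
actual_cost = 62e6            # actual cost under the joint contracts
combined_spending = 3.2e9     # combined VA/DOD drug spending, FY 2000

cost_avoidance = estimated_cost - actual_cost        # $40 million
avoidance_rate = cost_avoidance / estimated_cost     # about 39 percent
share_of_spending = actual_cost / combined_spending  # about 1.9 percent

print(f"cost avoidance: ${cost_avoidance / 1e6:.0f} million ({avoidance_rate:.0%})")
print(f"share of combined spending: {share_of_spending:.1%}")
```

The derived values round to the 39 percent cost avoidance and 1.9 percent spending share stated in the report.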
Based on our analysis of VA/DOD 1999 pharmaceutical purchase data, these 26 drugs amounted to about $123 million in combined drug expenditures. Tables 5 and 6 present information on the 26 joint contracts and solicitations.

Over the last several years, DOD and VA have awarded separate contracts for many different pharmaceuticals and related supplies. The following table provides information on the separate contracts that the departments are planning to merge or combine as they expire.

In December 2000, DOD and VA pharmacy officials completed their preliminary review of drugs in the high-expenditure classes we had suggested as future candidates for joint procurement. Table 8 is a list of the drugs for which the departments told us they plan to pursue joint procurements in the near future, as well as their data on expenditures and estimates of annual cost avoidance that would stem from the procurements. VA and DOD pharmacy officials told us that they may identify additional drugs and classes for joint procurement once they complete their reviews.

VA was the first organization in the United States to deliver prescription medications to patients on a large scale by mail. After World War II, this service was started as a convenience to disabled, homebound veterans. By 1992, nearly all of VA's outpatient pharmacies provided mail service, but consolidation of mail prescription workloads from multiple VA hospitals into centralized operations had only been initiated on a limited basis. In 1994, the first CMOP at Leavenworth, Kansas, began processing high-volume mail prescription workloads using an integrated, automated dispensing system. Since that time, VA has expanded the program to include a total of seven CMOPs located in Leavenworth, Kansas; Los Angeles, California; Bedford, Massachusetts; Dallas, Texas; Murfreesboro, Tennessee; Hines, Illinois; and Charleston, South Carolina. In fiscal year 2000, those facilities processed about 50 million prescriptions.
Patients receive care at VA hospitals or clinics, with new prescriptions dispensed directly from those hospitals or clinics. Patients' refill prescription requests are received by telephone or in person and processed at the individual VA sites daily. Once processed, the refill prescription orders are sent electronically from multiple VA medical facilities to a CMOP for processing. The CMOP dispenses the pharmaceuticals as specified by the participating medical facility, delivers the completed prescriptions directly to the patient by mail, and returns the dispensing data to the participating hospital or clinic electronically. A patient contacts the hospital or clinic directly if there are any questions or problems. According to VA, the CMOP model takes advantage of economies of scale for mail prescription processing and distribution while at the same time preserving the patient-provider relationship. VA data show that CMOP productivity is between 50,000 and 100,000 prescriptions per year per full-time employee. According to VA, such productivity rates are several times greater than those of traditional hospital and clinic systems. Patients generally receive their medications by mail within 4 days of their orders going from the VA medical facility to a CMOP.

CMOPs charge VA medical facilities to recover direct operating costs to purchase pharmaceuticals and related supplies as well as to dispense, package, and mail prescriptions to patients. According to VA documents, in fiscal year 2000, the nondrug CMOP cost charged to VA medical facilities averaged $2.00 per prescription and the CMOP drug cost charged averaged $20.33 per prescription. For each prescription, the nondrug cost charged included $0.77 in personnel costs, $0.40 in operating costs, and $0.83 in mailing costs. For fiscal year 2000, the CMOP workload was about 50 million prescriptions, representing about $1 billion in drug products and $100 million in nondrug expenses.
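The per-prescription CMOP charges and fiscal year 2000 totals above can be checked for internal consistency. A minimal sketch, using only the figures reported in this section (the variable names are ours):

```python
# Consistency check of the reported FY 2000 CMOP figures.
# All dollar amounts come from the report; variable names are illustrative.

prescriptions = 50_000_000  # approximate FY 2000 CMOP workload

# Average per-prescription charges to VA medical facilities
personnel, operating, mailing = 0.77, 0.40, 0.83
nondrug_per_rx = 2.00       # reported average nondrug charge
drug_per_rx = 20.33         # reported average drug charge

# The nondrug components should sum to the reported $2.00 average
assert abs((personnel + operating + mailing) - nondrug_per_rx) < 0.005

# Scaling to the 50 million-prescription workload should roughly reproduce
# the reported totals: about $100 million nondrug, about $1 billion drug
nondrug_total = prescriptions * nondrug_per_rx
drug_total = prescriptions * drug_per_rx

print(f"nondrug total: ${nondrug_total / 1e6:.0f} million")
print(f"drug total:    ${drug_total / 1e9:.2f} billion")
```

The scaled totals match the reported $100 million in nondrug expenses and roughly $1 billion in drug products.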
VA's business plan for the CMOPs includes performance improvement measures for prompt delivery, accurate dispensing, properly packaged prescriptions, safe work environments, reliable and appropriate equipment, right supplies of drugs on hand, and customer satisfaction. Also, using bar-code technology in the automated dispensing process and other quality steps, the CMOP program has achieved an overall accuracy rate of 99.99 percent—which means getting the right drug, in the correct dosage, with the correct instructions, to the right people. In addition, the CMOPs are fully accredited by the Joint Commission on the Accreditation of Health Care Organizations.

In addition to those named above, the following staff made key contributions to this report: William Lew, Allan Richardson, Karen Sloan, and Richard Wade.
EULs are typically long-term leases of federal land or buildings to public sector or private sector companies. Some agencies with EUL authority are authorized to accept in-kind consideration, such as improvements to agency properties or construction of new facilities in place of cash rent. There is no government-wide definition of an enhanced use lease and agencies' EUL authorities and guidance vary, as these examples illustrate:

VA was authorized to enter into EULs for up to 75 years with public and private entities for leases that contributed to VA's mission and would enhance the use of the property for cash or in-kind consideration; however, this authority expired on December 31, 2011. Prior to the expiration of VA's EUL authority, VA entered into 92 EULs that remain active. In August 2012, VA's EUL authority was reauthorized through December 2023, but the current authority allows VA to enter into EULs up to 75 years only for the provision of supportive housing for veterans or their families that are at risk of homelessness or are homeless. VA may accept cash consideration, or it may enter into an EUL without receiving consideration, and it is prohibited from entering into leasebacks. VA may not enter into an EUL without advanced written certification from OMB that the lease complies with the statutory requirements. VA reports annually on the details, benefits, and costs of its EUL program. The annual report states that it gives a transparent view of the measurable outcomes of the cost-effective benefits to veterans that the EUL program provides.

NASA is authorized to enter into EULs of agency properties for cash consideration or, if the EULs involve the development of renewable energy production facilities, in-kind consideration. NASA may not enter into leasebacks. NASA policy requires that EULs relate to and support the agency's mission of research, education, and exploration. The agency's longest EUL term is 95 years.
NASA's EUL authority expires in December 2017. NASA reports annually to Congress on its EUL program's status, proceeds, expenditures, and effectiveness.

State is authorized to enter into EULs for its properties acquired in foreign countries for diplomatic and consular establishments. State's longest EUL term is 99 years and expires in 2090. According to State officials, the agency does not have a formal EUL program. It has only utilized EULs in three instances. State uses EULs on a case-by-case basis when directed by Congress to retain properties or when it does not consider disposal a desirable option due to the strategic or historic value of an asset. For example, State was required to retain the Palazzo Corpi building in Istanbul, Turkey. State carries the EULs in its property inventory and monitors the transactions and the cash flows but does not report externally on its EUL program.

USDA is authorized to demonstrate whether enhanced use leasing of agency real property at its Beltsville Agricultural Research Center and the National Agricultural Library for cash consideration will enhance the use of the leased property. The authority requires that EULs be consistent with the USDA's mission and have terms no longer than 30 years. USDA's EUL authority expires in June 2013. USDA reported to Congress on the management and performance measures associated with its EUL demonstration program and is required to report on the success of the program upon completion in 2013.

Table 1 shows how the four agencies we reviewed used EULs.

OMB coordinates and provides guidance on federal real property management government-wide in its role as Chair of the Federal Real Property Council, which is composed of federal real property-holding agencies. For example, OMB Circular A-11 provides general guidance on evaluating the performance of federal programs and on the budgetary treatment of federal leases, including EULs and leaseback arrangements.
OMB's guidance does not provide specific information about the treatment of EULs, but does require that EULs with leasebacks above certain threshold amounts be submitted to OMB for review of their budgetary-scoring impact. OMB's instructions also outline how budget authority for the cost of leasing an asset is to be recorded in the budget, depending on how risk is shared between the government and the lessee, for three types of leases: operating leases, capital leases, and lease purchases.

Agency officials told us that EULs provide a variety of benefits to the government in addition to better utilization of underutilized federal property. The commonly cited benefits include enhanced mission activities, cash rent revenue, and value received through in-kind consideration. Officials from the four agencies we reviewed said that EULs contribute to their ability to conduct mission-related activities; for example:

VA officials said that EULs provide the agency mission-related benefits such as veterans' priority placement for housing. For example, according to VA, its EUL with Vancouver Housing Authority in Washington to develop a previously vacant site at a VA medical center campus supports the agency's strategic goals of (a) eliminating homelessness among veterans by providing housing and (b) reducing its inventory of vacant and underutilized capital assets.

NASA officials said that EULs provide the agency mission-related benefits, such as research and development of aerospace technologies. For example, according to a NASA official, NASA's EUL with a company that researches and develops battery systems for electric vehicles advances the agency's mission of developing new power and propulsion systems for vehicles used in space launches.

State officials said that EULs provide mission-related benefits by allowing the department to maintain properties symbolic of U.S. history and diplomacy.
For example, State declared the historically significant Talleyrand building in Paris excess (see fig. 1) but chose not to dispose of it because the building had served as the administrative headquarters for the Marshall Plan, the postwar American reconstruction plan for Western Europe. According to State Department officials, State's EUL lessee supports the agency's mission by maintaining the building and retaining space inside of it for the George C. Marshall Center, including a permanent exhibit commemorating the Marshall Plan.

USDA officials said that the agency's EUL program allows it to better utilize property while also collaborating with researchers on mission-related goals. For example, USDA officials told us that its EUL of greenhouse space at its Beltsville Agricultural Research Center has allowed the agency to advance its mission of developing more efficient crops because the lessee conducts research at the EUL site directly linked to this goal. According to USDA officials, each EUL lessee is required to have a formal collaborative research agreement with the agency.

All four agencies we reviewed reported cash benefits from EULs. Individual EULs can generate millions of dollars for the federal government, but most EULs generate small amounts of cash revenue. For example, the average VA EUL generated about $25,000 in cash revenue in fiscal year 2011, a small fraction of the total cash revenue the four agencies in our review received in fiscal year 2011.

Based on recent agency experiences, EULs may be a viable option for redeveloping underutilized federal real property when disposal is not possible or desirable, but agencies raised issues pertaining to EULs that affect their use or budgetary treatment. First, NASA has reported that the limitation on its authority to accept in-kind consideration has limited its ability to encourage use of EULs and investments in underutilized NASA property.
Second, recognizing potential budget impacts associated with EUL leasebacks and other long-term commitments has proved challenging for VA. Although the results of our review cannot be generalized to all agencies, these challenges provide illustrative examples of the types of issues that can affect a federal agency's decision or ability to use EULs.

According to NASA officials, in-kind consideration is critical for encouraging lessees to invest in agency properties. NASA's ability to accept in-kind consideration expired at the end of 2008; it was restored on a limited basis in 2011 exclusively for renewable energy projects. NASA officials said that this limitation in the agency's ability to accept in-kind consideration has hindered its ability to enter into EULs that could improve the property. In particular, according to the NASA officials, prospective lessees are reluctant to make capital improvements that will have to be conveyed to the government at the end of the lease without receiving other compensation, such as a reduction in cash rent. For example, a lessee, as previously discussed, agreed to invest $11 million in infrastructure projects that would benefit the company during the lease but benefit the government during and after the lease in return for a reduction in the lessee's cash rent payments. Representatives from NASA and the lessee told us that this provision was critical to successfully negotiating the EUL.

VA officials said that assessing and recognizing the budget impacts of EULs is complicated and may be interpreted differently by agencies with EUL authority. In particular, VA EULs can include long-term commitments that are recognized in the federal budget in different ways. OMB's Circular No. A-11 guidance specifies that lease obligations be recorded when the contract is signed; sufficient budget authority must be available at that time to cover the obligation. However, the obligated amount that is to be recorded differs by type of lease.
For capital leases and lease purchases, OMB Circular A-11 states that the amount obligated should equal the net present value of these lease payments over the full term of the lease. For operating leases, OMB Circular A-11 states that agencies should record an amount equal to the total payments under the full term of the lease or the first year’s lease payments plus cancellation costs. VA views EUL leasebacks as operating leases and consequently does not obligate the total amount of these commitments upfront in its budget. VA’s leaseback costs are nearly $16 million annually (see table 3), but VA and CBO disagree on the extent to which VA should account for the budget impacts for EULs that could include long-term government commitments. For example, VA’s leaseback costs for its Chicago West Side EUL were about $3.5 million in fiscal year 2011. VA regards its underlying office and parking purchase agreements as 2-year operating leases, as opposed to capital leases or lease purchases. VA officials said that the department is properly treating the office and parking purchase agreements as operating leases, because VA can cancel the office and parking leasebacks at the end of each 2-year agreement. However, in a 2003 report to Congress on the budgetary treatment of leases, CBO found that VA used this enhanced use lease to obtain a $60 million regional headquarters building and parking facility. The CBO report stated that VA entered into a 35-year enhanced use lease for a four-acre site with an owner trust, with VA as the sole named beneficiary. VA subsequently leased back space in the building and the parking facility that the lessee constructed on the site. The CBO report also stated that: VA’s lease payments played a crucial role in allowing the lessee to borrow funds. VA is committed to a two-year lease of 95 percent of the space in the building and 95 percent of the parking facility; almost all of the lessee’s revenue will initially come from VA.
The initial two-year lease is automatically renewed unless the VA takes specific steps at the end of the lease period to halt it. In addition, as long as VA chooses to occupy any portion of the facility it must make payments that are sufficient to cover amortization and interest on the lessee’s debt. VA also has the right to purchase the building from the lessee at any time for a price that would cover payments on the lessee’s debt. Thus, VA has a long-term commitment to cover the lessee’s capital costs even if it reduces its occupancy in the building, and this, together with an implicit right to renew the lease, would appear to make the arrangement either a lease-purchase or, if the trust is not viewed as a separate entity from VA, a government purchase financed by federal borrowing. As such, CBO concluded in its report that the intent of the West Side EUL project was to provide VA with capital assets (an office building and parking facilities for VA staff) without recording the cost of the purchase upfront in the budget. In general, we have also consistently stated that the full costs of the government’s commitments should be reflected upfront in the budget. VA officials said the agency made changes in subsequent EULs to address and, in their view, eliminate CBO’s early concerns related to EULs with leasebacks. Agencies have shown that EULs have the potential to produce mission-related and financial benefits for otherwise underutilized federal real property, but the costs and benefits of these programs are not fully understood, given different agency practices in accounting for EUL costs. Some EULs bring in large amounts of cash rent, such as the State Department’s $20.6 million Istanbul EUL and NASA’s $147.7 million EUL, but most EULs have much more modest benefits to the government, and in those cases the costs could more easily outweigh the benefits.
For example, the average VA EUL earned about $25,000 in cash revenue last year—financial benefits that could be outweighed by consultant, termination, and leaseback costs, which agencies have not consistently attributed to their EUL programs. Without clear guidance, agencies that fail to incorporate all of the costs related to their EUL programs could overstate the net benefits of these programs when reporting the performance of their EUL programs or making decisions about future EULs. To promote transparency about EULs, improve decision-making regarding EULs, and ensure more accurate accounting of EUL net benefits, we recommend that OMB work with VA, NASA, State, and USDA, and any other agencies with EUL authority, to ensure that agencies consistently attribute all costs associated with EULs (such as consulting, termination, and leaseback costs) to their EUL programs, as appropriate. We provided a draft of this report to the Deputy Director for Management of OMB and the Secretaries of Veterans Affairs, State, and Agriculture and the Administrator of NASA for review and comment. In commenting on a draft of this report, OMB generally agreed with our observations and recommendation. OMB emphasized that Circular No. A-11 provides guidance on budget scoring and is not intended to address the costs and benefits of EULs. We amended our recommendation to reflect that there are a variety of ways to ensure that the costs of EULs are consistently tracked and reported. Veterans Affairs, State, Agriculture, and NASA generally agreed with our conclusions, and the agencies provided technical comments, which we incorporated as appropriate. See appendix III for VA’s comments along with our responses to the technical comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies to the Deputy Director for Management of OMB and the Secretaries of Veterans Affairs, State, and Agriculture, and the Administrator of NASA. Additional copies will be sent to interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. Our objectives were to determine: (1) To what extent do agencies attribute the full benefits and costs of their EULs in their assessments of their EUL programs? (2) What have been the experiences of agencies in using their EUL authority? To address both of these objectives, we reviewed prior GAO reports on enhanced use leasing and capital financing, and contacted the Office of Management and Budget (OMB), the Congressional Budget Office (CBO), and 11 agencies: (1) the Department of Veterans Affairs (VA), (2) the National Aeronautics and Space Administration (NASA), (3) the Department of State (State), (4) the Department of Agriculture (USDA), (5) the General Services Administration (GSA), (6) the Department of Energy (Energy), (7) the Department of the Interior (Interior), (8) the United States Postal Service (USPS), (9) the Department of Justice (DOJ), (10) the St. Lawrence Seaway Development Corporation (SLSDC), and (11) the Tennessee Valley Authority (TVA), selected based on size or evidence of EUL authority.
We identified the 11 agencies based on our review of property data and documents from: (1) the 7 largest civilian real property holding agencies, by total square footage, as of fiscal year 2010, as listed in the Federal Real Property Profile; (2) GSA’s Real Property Authorities for Federal Agencies (2008); (3) Agencies’ Authorities Regarding EULs and Real Property Sales from GAO-09-283R; and (4) interviews with officials from agencies identified in the above 3 sources to determine if they used EULs and if they knew of any other agencies that used EULs. Using information from the 11 agencies we contacted, we selected the 4 agencies (VA, NASA, State, and USDA) that have used their EUL authority to enter into EULs. We selected 16 case study EULs from the four agencies that have EULs based on a range of lease purposes (e.g., leasing of vacant land for development and leasing unused office space); estimated financial benefits (e.g., cash benefits and in-kind consideration); and varying geographic locations. The case studies were located in Chicago, IL; North Chicago, IL; Mountain Home, TN; Vancouver, WA; Somerville, NJ; Moffett Field, CA; Beltsville, MD; Fort Howard, MD; Paris, France; Istanbul, Turkey; and Singapore. Because the 16 case studies were selected based on a non-probability sample, observations made based on our review of the 16 case study locations do not support generalizations about other EUL sites. Rather, the observations made provided specific, detailed examples of issues that were described by agency officials and lessees. We also interviewed agency officials at the local level and headquarters locations, and reviewed relevant laws describing agencies’ EUL authorities and agency documentation, including agencies’ regulations and guidance on enhanced use leasing. We visited the 9 case studies located in the U.S.
to observe the properties firsthand, interviewed agency officials and lessees about their experience with EULs at these locations, and reviewed documentation regarding these properties. The case study EULs were located at NASA’s Ames Research Center in Moffett Field, California; VA sites in Maryland, New Jersey, and Washington state; and a USDA agricultural research center in Beltsville, Maryland. For the three State case studies we did not visit, we interviewed headquarters officials and reviewed relevant documentation including site-visit reports. For the four VA sites we did not visit, in Chicago, Illinois; Chicago (West Side), Illinois; North Chicago, Illinois; and Mountain Home, Tennessee, we reviewed the agreements between VA and its lessees and the past work of the Congressional Budget Office and the VA’s Office of Inspector General. We also interviewed OMB, CBO, and GSA officials to better understand government-wide views, guidance, and practices on enhanced use leasing. We conducted this performance audit from October 2011 to December 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As shown in table 3, we reviewed 16 case study EULs. We reviewed 8 VA EULs, 4 from NASA, 3 from State, and 1 from USDA. 1. VA suggested changing this paragraph related to its energy programs. We edited the paragraph to align with the related text in the body of the report, to which we added context on the different types of EULs that can include long-term government commitments, including those types described in VA’s comment. 2. We made the changes suggested by VA. 3.
VA indicated that it already reports its consultant costs associated with its EULs as part of its overall management costs. We continue to believe that VA should report all EUL costs as part of its EUL program specifically, including consultant costs. 4. VA indicated that it attributed termination costs to its overall program—but not its EUL program. We continue to believe that VA should report all EUL costs as part of its EUL program, including termination costs. 5. VA indicated that its leaseback of the second bay at Somerville was not identified at the time of lease execution, but this contention is not supported by the lease agreements. In addition to the contact named above, Keith Cunningham, Assistant Director; Amy Abramowitz; Melissa Bodeau; Carol Henn; Hannah Laufe; James Leonard; Sara Ann Moessbauer; Lisa G. Shibata; and Crystal Wesco made key contributions to this report. The federal government owns underutilized properties that are costly to operate, yet challenges exist to closing and disposing of them. To obtain value from these properties, some agencies have used EULs, which are generally long-term agreements to lease property from the federal government in exchange for cash or non-cash consideration. However, agencies also incur costs for EUL programs. We have previously reported that agencies should include all costs associated with program activities when assessing their value. This report addresses (1) the extent to which agencies attribute the full benefits and costs of their EULs in their assessments of their EUL programs and (2) the experiences of agencies in using their EUL authority. GAO reviewed property data and documents from the largest civilian federal real property agencies, including four agencies that use EULs (VA, NASA, the Department of State, and the Department of Agriculture), and applicable laws, regulations, and guidance. GAO visited nine sites where agencies were using EULs.
Agency officials told us that enhanced use leases (EULs) help them better utilize their underutilized property; commonly cited benefits include enhanced mission activities, cash rent revenue, and value received through in-kind consideration. However, some agencies we reviewed do not include all costs associated with their EULs when they assess the performance of their EUL programs. Guidance from the Office of Management and Budget (OMB) does not specify what costs agencies should include in their EUL evaluations, resulting in variance among agencies. For example, the Department of Veterans Affairs (VA) and the Department of State do not consistently attribute EUL-related costs of consultant staff who administer the leases, and VA does not attribute various administrative costs that offset EUL benefits. Without fully accounting for all EUL costs, agencies may overstate the net benefits of their EUL programs. Based on recent agency experiences, EULs may be a viable option for redeveloping underutilized federal real property when disposal is not possible or desirable, but two agencies raised issues pertaining to EULs that affect their use or budgetary treatment. First, National Aeronautics and Space Administration (NASA) officials said that the limit on the agency’s authority to receive in-kind consideration as part of its EUL program has limited its ability to encourage the use of EULs for underutilized NASA property. Specifically, NASA officials said prospective lessees are reluctant to make costly capital improvements to a property that will have to be returned to the government at the end of the lease without other compensation, such as a reduction in cash rent. Second, VA and CBO disagree on the extent to which VA should account for the budget impacts for EULs that could include long-term government commitments. VA has made multi-year commitments with certain EULs without fully reporting them in its budget.
Assessing and recognizing the budget impacts of EULs is complicated and may be interpreted differently by agencies with EUL authority. In particular, VA EULs can include long-term commitments that are recognized in the federal budget in different ways. To promote transparency about EULs, improve decision-making regarding EULs, and ensure more accurate accounting of EUL benefits, GAO recommends that OMB coordinate with affected agencies to ensure that agencies consistently attribute all relevant costs associated with EULs to their EUL programs. The agencies generally agreed with GAO’s findings and recommendation.
Specialty hospitals have become a subject of debate among health care policymakers. One issue concerns physician ownership of specialty hospitals and whether such ownership might inappropriately affect physicians’ clinical decision-making and referral behavior. A related issue concerns the potential for specialty hospitals to benefit financially by treating patients who are less severely ill, and therefore less costly, while leaving general hospitals responsible for a mix of patients who need more care and are more expensive to treat. Our April 2003 report provided information on both issues: the extent of physician ownership at specialty hospitals and the relative severity of patients’ illnesses at specialty and general hospitals. Much of the concern about specialty hospitals centers on physician ownership issues. Federal law generally prohibits physicians from referring Medicare patients for specific health care services to facilities in which they (or their immediate family members) have financial interests. This prohibition, a key component of the Medicare self-referral or Stark law (named after its chief sponsor in the House of Representatives, Representative Pete Stark), was enacted after several studies found that physicians with ownership interests in separate clinical laboratories, diagnostic imaging centers, or physical therapy providers tended to make more referrals to them and order substantially more services at higher costs. The Stark law contains an exception that is relevant in the case of referrals to specialty hospitals: it permits physicians who have an ownership interest in an entire hospital and who also are authorized to perform services there to refer patients to that hospital. The premise is that any referral or decision made by a physician who has a stake in an entire hospital would produce little personal economic gain because hospitals tend to provide a diverse and large group of services.
However, the Stark law does prohibit physicians who have ownership interest only in a hospital subdivision from referring patients to that subdivision. With respect to specialty hospitals, the concern exists that, as these hospitals are usually much smaller in size and scope than general hospitals and closer in size to hospital departments, the exception to Stark could allow physician owners to influence their hospitals’—and therefore their own—financial gain through practice patterns and referrals. The question of favorable patient selection—the contention that specialty hospitals treat a more financially favorable selection of patients as compared to general hospitals—has added to the debate about the advantages and drawbacks of specialty hospitals. This issue is linked to the way hospitals are paid. The fixed-rate, lump-sum payments that Medicare and many other health care payers typically make to hospitals for inpatient care for patients with a given diagnosis, regardless of the costs of serving particular patients, are designed to promote efficiency by discouraging hospitals from providing unnecessary services as a way to boost revenues. However, these lump-sum payments foster undesirable incentives, as hospitals may gain financially by serving a disproportionate share of lower-cost patients with the same diagnoses. Medicare’s hospital payment system rules illustrate this principle. Under its system of prospective payments, Medicare pays a predetermined rate for each hospital discharge, based on the patient’s diagnosis and whether the patient received surgery. In other words, the payments reflect an average bundle of services that the beneficiary is expected to receive as an inpatient for a particular diagnosis. Discharges are classified according to a list of DRGs. DRG payment rates are based on the expected cost of the diagnosis group’s typical case compared with the cost for all Medicare inpatient cases. 
The DRG payment is not adjusted for within-DRG differences in severity of illness. Therefore, hospitals have a financial incentive to treat as many patients as possible whose costs are low relative to the costs of the average patient in each DRG. Our April 2003 study found that 21 out of 25 specialty hospitals treated a lower percentage of patients who were severely ill compared with patients in the same diagnosis categories treated at general hospitals in the same urban areas. For example, in an urban area in Texas, 3 percent of an orthopedic hospital’s patients with that hospital’s most common diagnoses were classified as severely ill, as compared with 8 percent of patients with the same diagnoses treated by the area’s more than four dozen general hospitals. In an urban area in Arizona, about 17 percent of a cardiac hospital’s patients with that hospital’s most common diagnoses were classified as severely ill, as compared to 22 percent of patients with the same diagnoses treated by the area’s more than two dozen general hospitals. Not all specialty hospitals treated patients who were, by comparison, less sick. Two of the 25 specialty hospitals treated a higher percentage of severely ill patients and two others treated about the same percentage as area general hospitals. In examining the illness severity differences between specialty and general hospitals, we did not determine the clinical or economic importance of these differences. For-profit status is a salient characteristic of specialty hospitals we identified. More than 90 percent of the specialty hospitals that have opened since 1990 were for-profit. Overall, 74 percent of specialty hospitals are for-profit, as compared to about 20 percent of all general hospitals. (See table 1.) For-profit status varied somewhat by specialty type, ranging from 78 percent of orthopedic hospitals to 65 percent of women’s hospitals. 
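The DRG payment incentive described above can be sketched with a short computation. All dollar figures below are hypothetical assumptions for illustration, not actual Medicare rates or patient costs:

```python
# Illustrative sketch of the incentive under fixed per-DRG payments.
# The payment rate and per-patient costs are assumed values, not
# actual Medicare figures.

drg_payment = 12_000  # fixed payment per discharge for this DRG

# Hypothetical treatment costs for three patients in the same DRG
patients = {
    "low-severity patient": 8_000,
    "average patient": 12_000,
    "high-severity patient": 18_000,
}

# Because the payment does not vary within the DRG, the hospital's
# margin depends entirely on each patient's cost of treatment.
for label, cost in patients.items():
    margin = drg_payment - cost
    print(f"{label}: cost ${cost:,}, margin ${margin:,}")
```

Under such fixed payments, a hospital whose patient mix tilts toward the low-severity end of a DRG earns systematically higher margins, while one treating more severe cases can lose money on the same DRG, which is the favorable-selection concern the report describes.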
In our April 2003 report, we found that 70 percent of the more than 100 specialty hospitals in existence or under development had some degree of physician ownership. Among specialty hospitals with any degree of physician ownership, physicians’ combined ownership shares averaged slightly more than 50 percent of the hospital. Physicians’ combined ownership tended to be somewhat smaller at cardiac hospitals (31 percent) and larger at surgical hospitals (70 percent). The degree of individual physician ownership varied by hospital, but was generally low. At approximately half of all specialty hospitals with physician ownership, the average share owned by an individual physician was less than 2 percent. The share of a specialty hospital owned in the aggregate by the physicians in a revenue-sharing group practice could be much higher. At more than half of the specialty hospitals with physician owners, physicians in a single group practice owned more than 25 percent of the hospital. The majority of physicians who worked in specialty hospitals had no ownership interest in the facilities. Overall, approximately 73 percent of physicians with admitting privileges to specialty hospitals were not investors in their hospitals. (See fig. 1.) The percentage of admitting physicians who were investors varied by specialty hospital type, ranging from about 7 percent at women’s hospitals to about 44 percent at surgical hospitals. We identified three basic business structures for specialty hospitals. Our survey results indicated that about one-third of specialty hospitals were independent. Most of these hospitals were orthopedic or surgical and 76 percent had some degree of physician ownership. Approximately one-third of specialty hospitals were owned in part by a specialty hospital chain. Among this group, most hospitals were cardiac or orthopedic and 76 percent had some degree of physician ownership. 
The remaining one-third of specialty hospitals were owned or operated in part by local general hospitals. Almost half (48 percent) of the hospitals in this last group, which varied in specialty type, had some degree of physician ownership. In 2001, specialty hospitals accounted for approximately $871 million, or 1 percent, of Medicare’s spending on hospital inpatient services. Nearly two-thirds of this amount went to cardiac hospitals. (See table 2.) Although 28 states had at least one existing specialty hospital, about two-thirds of the 100 specialty hospitals we identified were located in 7 states. The specialty hospitals that are planned to open over the next few months or years will reinforce this pattern of concentration. Specialty hospital location was associated with regulatory and demographic conditions that may facilitate or encourage hospital development. Specialty hospitals are concentrated in seven states: Arizona, California, Kansas, Louisiana, Oklahoma, South Dakota, and Texas. Texas, with 20 specialty hospitals, had almost twice as many specialty hospitals as the state with the second highest number of specialty hospitals, California, with 11. States such as Oklahoma (9), Kansas (8), and South Dakota (7), although smaller in area and population than California, had nearly as many specialty hospitals. The remaining 21 states with specialty hospitals had between 1 and 4 specialty hospitals each. (See fig. 2.) The specialty hospitals that are planned to open over the next few months or years will tend to reinforce the existing pattern of geographic concentration. In June 2003, at least 26 specialty hospitals were under development in 10 states. (See fig. 3.) Nine of the 10 states that had one or more specialty hospitals under development already had at least 1 existing specialty hospital. About 60 percent of specialty hospitals under development were located in three states: Texas had 7; California, 5; and Louisiana, 4.
Seven other states had 1 or 2 specialty hospitals that were under development as of June 2003. Based on the specialty hospitals known to be under development, the number of surgical hospitals will increase by 65 percent and the number of cardiac hospitals will increase by approximately 40 percent in the next few months or years. Seven cardiac hospitals, 2 orthopedic hospitals, and 17 surgical hospitals are under development. The location of specialty hospitals is strongly correlated with whether states allow hospitals to add beds or build new facilities without first obtaining state approval for such health care capacity increases. All of the specialty hospitals that are under development and 96 percent of the specialty hospitals that opened from 1990 to June 2003 are located in such states. (See table 3.) State requirements for prior approval to increase health care capacity are commonly referred to as certificate of need (CON) laws or requirements. Federal legislation enacted in 1975 to promote comprehensive planning and development of hospitals and other health care resources conditioned funding to states on their establishment of CON requirements. At that time, many policymakers contended that CON requirements could prevent the construction of unnecessary capacity and help control health care costs. CON opponents argued that such requirements could stifle competition and lead to higher health care costs. The evidence on whether CON requirements achieved their objectives was inconclusive, and in 1986 the federal legislation was repealed. Subsequently, several states dropped their CON requirements. In 2002, 37 states maintained CON requirements to varying degrees. Overall, 83 percent of all specialty hospitals, 55 percent of general hospitals, and 50 percent of the U.S. population are located in states without CON requirements. Eighty-five percent of specialty hospitals are located in urban areas, a distribution that is roughly proportional to that of the U.S. population.
An urban location was slightly more prevalent among women’s hospitals (90 percent) and slightly less prevalent among cardiac hospitals (78 percent). Specialty hospitals also tended to locate in counties where the population growth rate from April 1990 through April 2000 far exceeded the national average of 11.1 percent. About 43 percent of specialty hospitals that opened in 1990 or later are located in counties where the population grew by 20 percent or more between the 1990 and 2000 decennial censuses. There did not appear to be a consistent relationship between specialty hospital location and a relative abundance or shortage of local health care resources, as measured by physicians per capita or hospital beds per capita. Relative to general hospitals, specialty hospitals, as a group, were much less likely to have emergency departments, saw fewer patients in their emergency departments, treated smaller percentages of Medicaid patients, and derived a smaller share of their revenues from inpatient services. However, there were important differences among the four specialty hospital types in these and other service indicators, such as the extent to which hospitals’ emergency departments focused on certain medical conditions or procedures. Several differences with respect to emergency departments highlight the contrast between specialty hospitals and general hospitals and also the contrast among the four types of specialty hospitals. The four specialty hospital types were less likely than general hospitals to have emergency departments, but the prevalence of emergency departments varied by specialty hospital type. Overall, 45 percent of specialty hospitals had emergency departments, compared with 92 percent of general hospitals. (See fig. 4.) The prevalence of emergency departments in specialty hospitals ranged from 72 percent of the cardiac hospitals to 33 percent of the orthopedic hospitals.
The emergency departments at specialty hospitals treated less than one-tenth the median number of patients treated at the emergency departments of general hospitals. (See table 4.) The number of patients treated at general hospitals’ emergency departments remained greater when hospital size was accounted for: the median number of patients treated per bed per month was about 12 at general hospitals’ emergency departments and slightly less than 3 at specialty hospitals’ emergency departments. Based on the responses to our 2003 survey, the emergency departments at specialty hospitals often appeared to have missions that were focused on certain medical conditions or procedures. For example, 95 percent of the patients at orthopedic hospitals’ emergency departments were orthopedic patients, and 93 percent of the patients at surgical hospitals’ emergency departments were surgical patients. The median percentage of emergency department patients who fit within the hospital’s field of specialization was lower at cardiac hospitals (57 percent). Specialty hospital types varied in how many had a physician around the clock in their emergency departments. Overall, 63 percent of specialty hospitals that had emergency departments, and that responded to our staffing questions, reported having a physician staffing the department 24 hours a day. (See table 5.) Cardiac hospitals were the most likely to have 24-hour physician staffing. Eleven of the 13 cardiac hospitals responded to our survey question. All 11—100 percent—indicated that they had 24-hour physician staffing of their emergency departments. Response rates to the staffing question were far lower among other specialty hospital types—approximately 60 percent of the orthopedic and surgical hospitals with emergency departments, and 30 percent of the women’s hospitals with emergency departments, answered the staffing question.
Among the surgical and orthopedic hospitals with emergency departments that did respond, one-third or less reported having a physician in the department 24 hours per day. Two of the three women’s hospitals that provided staffing information reported having a physician in their emergency departments 24 hours per day. The contrast between specialty and general hospitals was also marked with respect to the share of public program inpatients treated and inpatient services provided. Relative to general hospitals in the same urban areas, specialty hospitals in our HCUP sample tended to treat a lower percentage of Medicaid inpatients among all patients with the same types of conditions. (See fig. 5.) For example, Medicaid beneficiaries constituted 28 percent of obstetric and gynecological (OB/GYN) patients at women’s hospitals, but 37 percent of the OB/GYN patients at area general hospitals. The pattern for Medicare inpatients served differed somewhat from that for Medicaid patients. Relative to area general hospitals, cardiac hospitals tended to have larger shares of Medicare cardiac patients. (See fig. 6.) Medicare patients constituted similar shares of surgical patients at surgical specialty and area general hospitals and of gynecological patients at women’s specialty and area general hospitals. In contrast, orthopedic hospitals served a lower percentage of Medicare orthopedic inpatients than did area general hospitals. Dissimilarity between specialty and general hospitals was noticeable in the mix of inpatient and outpatient revenues. For the four specialty hospital types, hospitals that responded to our survey reported that inpatient revenues accounted for about 46 percent of their total revenues, compared with about 57 percent of total revenues for general hospitals. (See fig. 7.) However, the percentage of inpatient business varied substantially by specialty hospital type.
For example, about 25 percent of surgical hospitals’ revenues were derived from their inpatient business. Their mix of services may, in part, reflect the fact that some of these hospitals started as ambulatory surgical centers—distinct facilities that perform outpatient surgery exclusively—and later added inpatient capacity. The percentage of inpatient revenues at orthopedic hospitals (approximately 37 percent) was somewhat higher than the percentage at surgical hospitals. Inpatient revenues made up about 58 percent of total revenues at the women’s hospitals, which was similar to the proportion at area general hospitals (57 percent). In contrast, cardiac hospitals derived 85 percent of their revenues from their inpatient business. Although a general hospital typically had more beds than a specialty hospital had, the focused mission of a specialty hospital often resulted in its treating more patients with a given condition. Financially, specialty hospitals overall tended to perform about as well as general hospitals did on their Medicare inpatient business. However, for-profit specialty hospitals did not do as well, on average, as for-profit general hospitals. When the costs from all lines of business and the revenues from all payers were considered, specialty hospitals tended to outperform general hospitals. Specialty hospitals in our HCUP sample were generally not small relative to general hospitals when the comparison was based upon the number of patients treated for specific conditions. For example, 1 cardiac hospital treated nearly 4,000 cardiac patients in 2000. Among the 26 general hospitals that also treated cardiac patients in the same urban area, the median number treated was approximately 2,000. Each of the 7 cardiac hospitals in our HCUP sample treated more patients than the median general hospital’s cardiac practice in the specialty hospitals’ market areas. 
A similar relationship to general hospitals existed among the HCUP orthopedic and women’s hospitals. Six of the 8 orthopedic hospitals and 6 of the 7 women’s hospitals treated more patients than were treated in the comparable departments of the median general hospitals in their markets. In contrast, 2 of the 3 surgical hospitals performed fewer inpatient surgical procedures relative to the general hospitals in their markets. In some cases, a specialty hospital treated far more patients with certain conditions than did any of the general hospitals in the same urban area. For example, 1 orthopedic hospital in our HCUP sample treated approximately 7,400 orthopedic patients in 2000. In contrast, the largest number of orthopedic patients treated at any of the 73 general hospitals in the same urban area was just over 3,000. In all, 4 of the 25 HCUP specialty hospitals—1 cardiac, 2 orthopedic, and 1 women’s—had higher patient volumes than did the comparable departments at all of the general hospitals in their markets. These hospitals represent the extreme end of the relative size spectrum. The median cardiac and orthopedic hospitals treated somewhat more than twice the number of patients treated in the comparable departments of the median general hospital in their markets. The median women’s hospital was about 80 percent larger in patient volume than the median comparable department at general hospitals in the area. Specialty hospitals’ market shares, measured as the percentage of inpatient claims in an urban area, were much higher when only claims within a particular specialty field were included instead of all inpatient claims. (See fig. 8.) In markets that had from 5 to 26 general hospitals that treated cardiac patients, cardiac hospitals had a median market share of 15 percent of the cardiac patients. 
The median market share was 8 percent among women’s hospitals, in markets that contained from 7 to 86 general hospitals, and 5 percent among orthopedic hospitals, in markets that contained from 10 to 86 general hospitals. Surgical hospitals’ median market share of 4 percent was the smallest among the four specialty hospital types. However, there was wide variation in the market shares of individual hospitals—especially among women’s hospitals. For example, 1 women’s hospital had a 2 percent market share while another had a 47 percent market share. Financially, specialty hospitals tended to perform about as well as general hospitals did on their Medicare inpatient business in fiscal year 2001—the most recent year for which this information is available. Medicare inpatient margins—which are used to gauge a hospital’s financial performance on Medicare inpatient business—averaged 9.4 percent at specialty hospitals and 8.9 percent at general hospitals. (See table 6.) Among for-profit hospitals—both specialty and general hospitals—average Medicare inpatient margins were higher. However, for-profit general hospitals had average Medicare inpatient margins (14.6 percent) that exceeded those at for-profit specialty hospitals (12.4 percent). When revenues and costs from all lines of business and all payers were included, the average financial performance of specialty hospitals exceeded that of general hospitals. Total facility margins—constructed similarly to Medicare inpatient margins—averaged 6.4 percent among all specialty hospitals and 3.1 percent among all general hospitals. Among both specialty hospitals and general hospitals, the average total margin at for-profit hospitals was higher than the total margin among all hospitals. We obtained comments from officials representing ASHA—a specialty hospital association—and from officials representing the MedCath Corporation and NSH—two major specialty hospital chains. 
The officials generally agreed with the information in our report and offered their views on reasons for key differences between specialty and general hospitals. Their comments, summarized below, largely pertained to our findings regarding hospital location, presence and utilization of emergency departments, and hospitals’ financial performance. Unless otherwise noted, the following comments reflect the positions of all three organizations. In response to our finding that, on average, the number of physicians per capita and the number of hospital inpatient beds per capita are the same in communities with and without specialty hospitals, MedCath officials said that they have a national strategy in which they project communities’ health care needs several years into the future and use the results to help them choose potential locations for new cardiac hospitals. MedCath officials said that this explains why specialty hospitals tend to locate in areas experiencing rapid population growth. An ASHA official said that, among the association’s members, the decision to build a specialty hospital begins with physicians in a community and their perception of the community’s health care needs. Specialty hospital representatives stressed that the existence and utilization of an emergency department is primarily a function of the mission of a particular hospital. They said that a specialty hospital might not include an emergency department if the hospital’s intended role in a community does not call for one. NSH officials noted that nonprofit general hospitals receive tax advantages in return for providing certain community services, including emergency care. MedCath officials said that, because nonprofit hospitals are required to fulfill certain social needs, our comparisons involving emergency departments and treatment of Medicaid patients should have been made between for-profit specialty hospitals and for-profit general hospitals. 
ASHA officials added that state law may dictate whether a hospital has an emergency department. MedCath officials noted that our results showed that, on average, specialty hospitals’ margins are similar to for-profit general hospitals’ margins. They said that this financial performance was the result of a business model that emphasizes efficiency and cost control in the delivery of quality health care. Overall, MedCath officials said that our findings showed that specialty hospitals should be no cause for concern. Specifically, the officials said that there are relatively few specialty hospitals, specialty hospitals account for a very small fraction of total Medicare inpatient hospital spending, such hospitals are concentrated in a few states and in areas where there is a need for such hospitals, and their business model leads to profits that are similar to the profits earned by for-profit general hospitals. Representatives from all three organizations, while generally agreeing with the information in our report, emphasized the important role that specialty hospitals play in efficiently providing quality health care. We agree that, on a national level, specialty hospitals have a small presence. However, in the communities in which they locate, specialty hospitals may treat a relatively large share of patients who have specific medical conditions or need specific medical procedures. For the share of the market that those patients represent, specialty hospitals are often among the larger competitors that general hospitals face. In addition, the number of specialty hospitals is growing rapidly. In the next few months or years, the number of specialty hospitals that we identified is expected to increase by at least 25 percent. The policy issue regarding emergency care may be one that is focused more on access to such care and less on whether every specialty hospital should have an emergency department. 
Although some specialty hospitals—especially cardiac hospitals—provide at least a limited amount of emergency care, individuals who need emergency care typically must obtain treatment at general hospitals. Critics of specialty hospitals are concerned that such facilities may erode the financial health of general hospitals and impair their ability to provide emergency care and meet other basic community needs, such as stand-by capacity to respond to communitywide disasters. In this report, we did not attempt to determine the financial effect that specialty hospitals may have on neighboring general hospitals. Finally, we previously reported that the 25 urban specialty hospitals that we studied in six states tended to treat patients who were less severely ill relative to patients treated at neighboring general hospitals. Because we did not analyze the economic impact of such a pattern, we cannot determine the extent to which the financial performance of specialty hospitals may be due to patient mix, the efficient delivery of health care, or other factors. We are sending copies of this report to appropriate congressional committees and other interested parties. We will also make copies available to others upon request. This report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions, please call me at (202) 512-7101 or James Cosgrove at (202) 512-7029. Other contributors to this report include Hannah Fein, Zachary Gaumer, and Ariel Hill. This appendix provides additional information on the key aspects of our analysis. First, it lists the criteria we used to define specialty hospitals and the process we followed to identify them. Second, it discusses the survey used to collect a variety of information from the universe of specialty hospitals. Third, it describes key data sources and methodological approaches used in each subanalysis. Finally, it addresses issues related to data reliability and limitations.
Although a standard definition for a specialty hospital does not exist, a reasonable approach is to define specialty hospitals as those that predominately treat certain diagnoses or perform certain procedures. For this report, we classified a hospital as a specialty hospital if the data indicated that two-thirds or more of its inpatient claims were in one or two major diagnosis categories (MDC) or two-thirds or more of its inpatient claims were for surgical diagnosis-related groups (DRG). Because our study focused on private, short-term acute care hospitals, we eliminated from consideration hospitals that were government-owned and those that tended to provide long-term care or otherwise had missions very different from those of short-term, acute care general hospitals. Thus, we excluded government-owned hospitals; hospitals for which the majority of inpatient claims were for MDCs that related to rehabilitation, psychiatry, alcohol and drug treatment, children, or newborns; and hospitals with fewer than 10 claims per bed per year. Of the hospitals that met our criteria, 100 could be classified into four specialization categories: cardiac, orthopedic, surgical, and women’s. Twenty-six specialty hospitals were also identified as under development and scheduled to open in the next few months or years. An additional 6 hospitals specialized in a variety of other areas—such as eye or ear, nose, and throat procedures—but were not included in this analysis. For this report, we focused on the specialty hospitals in the four major categories listed above. We applied our criteria to inpatient discharge data from two different data sources: the 2001 Medicare Provider Analysis Review (MedPAR) file and the 2000 Healthcare Cost and Utilization Project (HCUP) state inpatient data from six states. Medicare and HCUP data both have distinct advantages and disadvantages.
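The two-thirds specialization test and exclusion screens described above can be sketched in code. This is a minimal illustration under assumptions: the claim-record shape, field names, and MDC labels are hypothetical stand-ins, not drawn from the MedPAR or HCUP file layouts.

```python
from collections import Counter

# Each hypothetical claim record carries its major diagnosis category (MDC)
# and a flag for whether its diagnosis-related group (DRG) is surgical.
EXCLUDED_MDCS = {"rehabilitation", "psychiatry", "alcohol_drug", "children", "newborns"}

def is_specialty(claims, beds, government_owned):
    total = len(claims)
    # Exclusion screens: government-owned hospitals and low-volume hospitals
    # (fewer than 10 claims per bed per year) are dropped from consideration.
    if government_owned or beds == 0 or total / beds < 10:
        return False
    mdc_counts = Counter(c["mdc"] for c in claims)
    # Hospitals whose claims are mostly in non-acute-care MDCs are also dropped.
    if sum(n for mdc, n in mdc_counts.items() if mdc in EXCLUDED_MDCS) > total / 2:
        return False
    # Specialty test: two-thirds or more of claims in one or two MDCs,
    # or two-thirds or more of claims for surgical DRGs.
    top_two = sum(n for _, n in mdc_counts.most_common(2))
    surgical = sum(1 for c in claims if c["surgical_drg"])
    return top_two >= 2 * total / 3 or surgical >= 2 * total / 3
```

A hospital passing this function would then be assigned to one of the four specialization categories based on its dominant MDCs.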
The MedPAR file contains patient information from virtually all of the nation’s hospitals, but only for Medicare patients. Patients covered by Medicare are predominately age 65 or older. Consequently, some conditions—such as those that affect women of childbearing age—may be underrepresented, or not represented at all, in the MedPAR file. Thus, it is likely that an identification based on the MedPAR file undercounts the number of hospitals that specialize in treating such conditions. In contrast to Medicare data, HCUP data provide information on all of a hospital’s patients. However, HCUP data are available for hospitals in only 29 states, and each state’s data must be purchased separately. We obtained HCUP data from the following six states: Arizona, California, New Jersey, New York, North Carolina, and Texas. These states were selected because Medicare data identified them as having potentially large concentrations of specialty hospitals. To identify specialty hospitals that opened too recently to be included in the Medicare or HCUP data, we obtained information from the American Surgical Hospital Association, the American Federation of Hospitals, and two national specialty hospital chains: National Surgical Hospitals and MedCath Corporation. These organizations also provided information on the 26 specialty hospitals that are under development. From January 2003 through March 2003, we conducted a survey of 100 cardiac, orthopedic, surgical, and women’s hospitals that we identified as being operational. The survey gathered basic hospital address information and posed questions pertaining to the types of services offered at each hospital, hospital size, physician ownership, partnership structure, and the extent of emergency department services. Eighty percent of the specialty hospitals that received our survey responded. Information pertaining to physician ownership of specialty hospitals was drawn from hospital responses to our 2003 specialty hospital survey.
Among the questions related to physician ownership, hospital representatives were asked about the number of physician owners, the overall percentage of the hospital owned by physicians, the largest share owned by a single physician, the overall number of admitting physicians, and the largest combined percentage of the hospital owned by physicians in a single revenue-sharing group practice. Information pertaining to the business structure of each specialty hospital was drawn from responses to our 2003 specialty hospital survey. Hospitals were grouped into one of three categories—independent freestanding hospitals, hospitals associated with a hospital chain, or hospitals associated with a local general hospital—based on their responses to questions regarding hospital affiliation. We identified state, county, and zip code location of existing specialty hospitals and those under development through a four-part process. First, we identified the name and identification number of each specialty hospital by using the Centers for Medicare & Medicaid Services’ (CMS) MedPAR file or the HCUP dataset. Second, we located these names and identification numbers in CMS’s Medicare Provider of Services File (POS), because it contains the most current location information available. If these hospitals were not found in POS, we used the American Hospital Association’s (AHA) 2003 Annual Survey for the same purpose. Third, when specialty hospitals were not found in the CMS or AHA databases, we located as much information as possible using the Internet or direct telephone contact. Fourth, our specialty hospital survey (2003) provided county location information and other missing address or location information. Data from the American Health Planning Association (AHPA) were used to determine which states require hospitals to obtain state approval before they may add beds or build new facilities.
State regulations that require prior approval for state health care capacity increases are commonly referred to as certificate of need (CON) requirements. AHPA’s document, “2002 Relative Scope and Review Thresholds of CON Regulated Services,” listed 37 states that have one or more of the approximately 30 different types of CON requirements. For the purposes of this report, we considered a state to have CON requirements if it required prior approval for new acute care beds. We used data from the Dartmouth Atlas of Health Care to determine the number of available beds per capita and physicians per capita in a hospital referral region (HRR). HRRs represent regional health care markets for tertiary medical care. Each HRR contains at least one hospital that performed major cardiovascular procedures or neurosurgery. We analyzed the overall relationship between specialty hospital location and health system resources by comparing the average number of beds and physicians per 1,000 people in HRRs with and without specialty hospitals. We relied on several data sources to obtain information pertaining to the provision of emergency care at specialty and general hospitals. To determine whether a specialty hospital had an emergency department, we primarily relied upon the hospital’s response to our specialty hospital survey. When that information was missing, we used the information contained in CMS’s POS file or contacted the hospital’s administrator. As a result, our finding regarding the percentage of specialty hospitals with emergency departments is based on data from all of the 100 specialty hospitals that we identified. The information pertaining to the existence of emergency departments at general hospitals was drawn from AHA’s 2003 Annual Survey of Hospitals. 
Emergency department utilization data for specialty hospitals were obtained from hospital responses to the specialty hospital survey, while utilization data for general hospitals were drawn from our 2002 general hospital survey. We obtained information on specialty hospitals’ staffing of emergency departments from our specialty hospital survey. Comparable staffing information for general hospitals was not readily available. To determine the mean percentage of Medicare and Medicaid patients at specialty and general hospitals, we analyzed 2000 HCUP data from Arizona, California, New Jersey, New York, North Carolina, and three of five regions in Texas. Our analysis of HCUP data for these six states identified 25 specialty hospitals and 396 general hospitals in 18 urban areas. For each specialty hospital type, we first computed the percentage of specialty hospital claims within that type’s field of specialization that were paid by Medicaid. For example, we calculated the percentage of cardiac hospitals’ cardiac claims that were paid by Medicaid. We then computed the percentage of general hospital claims in the same field of specialization that were paid by Medicaid. Only general hospitals located in urban areas with a relevant specialty hospital were included. Continuing the previous example, we calculated the percentage of cardiac claims paid by Medicaid at general hospitals located in urban areas with a cardiac hospital. We followed a similar process for computing the percentage of Medicare claims at specialty and general hospitals. Using 2000 HCUP data, we computed a local inpatient market share for each of the 25 urban specialty hospitals in our six HCUP states. The number of inpatient claims at each specialty hospital was divided by the total number of inpatient claims at all hospitals—both specialty and general—in the same metropolitan statistical area (MSA). We then determined the median market share for specialty hospitals, by specialty type.
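The market-share computation just described can be sketched as follows. This is a minimal sketch; the input shapes are hypothetical stand-ins for the HCUP claim counts, not the actual file formats.

```python
from statistics import median

def market_shares(claims, specialty_ids):
    """claims: {(msa, hospital_id): inpatient claim count} -- hypothetical shape.
    Each specialty hospital's claims are divided by the total claims at all
    hospitals, both specialty and general, in the same MSA."""
    msa_totals = {}
    for (msa, _hospital), count in claims.items():
        msa_totals[msa] = msa_totals.get(msa, 0) + count
    return {hospital: count / msa_totals[msa]
            for (msa, hospital), count in claims.items()
            if hospital in specialty_ids}

def median_share(shares):
    """Median market share across the specialty hospitals of one type."""
    return median(shares.values())
```

Restricting `claims` to a single field of specialization, such as cardiac claims only, yields the within-specialty shares discussed in the text.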
We followed a similar process to determine the local market shares of specialty hospitals within their fields of specialization. For example, we compared the number of cardiac claims at a cardiac hospital to the total number of cardiac claims at all hospitals within the same MSA. We used data from CMS’s 2001 Hospital Cost Report (HCR) to calculate Medicare and total margins for specialty and general hospitals. Although not yet complete, the 2001 HCR file includes information from 55 specialty hospitals and approximately 84 percent (5,166) of the individual hospital records contained in the 1999 HCR file. To calculate the profit margins of specialty and general hospitals, we utilized a formula created by the Medicare Payment Advisory Commission (MedPAC). We used a variety of data sources in our analysis; the three primary sources were our 2003 specialty hospital survey, 2000 HCUP data for six states, and CMS’s 2001 HCR file. In each case, we determined that the data were sufficiently reliable to address the report’s objectives. Overall, 80 percent of specialty hospitals responded to GAO’s 2003 survey, although response rates for certain questions were sometimes lower. In cases where question responses were unclear, we contacted the hospital administrators to resolve any ambiguity. Because we did not independently verify the information, the report identifies data from the survey as self-reported. HCUP data are widely used for research purposes. Although the HCUP data we used represent a subset of the available HCUP data, the subset contains one-quarter of all of the specialty hospitals that we identified nationwide. HCR data are routinely used by MedPAC to estimate hospital margins and recommend updates to Medicare’s hospital payment rates. We followed the same procedures used by MedPAC to estimate hospital margins from these data. The 2001 file we used was 84 percent complete at the time of our analysis.
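The text does not reproduce MedPAC's formula. As a hedged sketch, margins of this kind are conventionally computed as revenues minus costs, divided by revenues; the function below illustrates that convention only and is not a reproduction of the commission's full cost-report methodology.

```python
def margin_percent(revenues, costs):
    """Margin as a percentage of revenues: (revenues - costs) / revenues * 100.
    An illustrative convention, not MedPAC's complete methodology."""
    return 100 * (revenues - costs) / revenues

# Illustration: $100 million in Medicare inpatient payments against
# $90.6 million in allocated costs corresponds to a 9.4 percent margin.
```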
We compared these data to data from prior years and consulted with MedPAC experts to determine that this degree of completeness would produce reliable margin estimates.

The recent growth in specialty hospitals that are largely for-profit and owned, in part, by physicians, has been controversial. Advocates of these hospitals contend that the focused mission and dedicated resources of specialty hospitals both improve quality and reduce costs. Critics contend that specialty hospitals siphon off the most profitable procedures and patient cases, thus eroding the financial health of neighboring general hospitals and impairing their ability to provide emergency care and other essential community services. Critics also contend that physician ownership of specialty hospitals creates financial incentives that may inappropriately affect physicians' clinical and referral behavior. In April 2003, GAO reported on certain aspects of specialty hospitals, including the extent of physician ownership and the relative severity of patients treated (GAO-03-683R). For this report, GAO was asked to examine (1) state policies and local conditions associated with the location of specialty hospitals, (2) how specialty hospitals differ from general hospitals in providing emergency care and serving a community's other medical needs, and (3) how specialty and general hospitals in the same communities compare in terms of market share and financial health. The 100 existing specialty hospitals identified by GAO--hospitals that focus on cardiac, orthopedic, or women's medicine or on surgical procedures--are geographically concentrated in areas where state policy facilitates hospital growth. Although 28 states have at least 1 specialty hospital, approximately two-thirds of the 100 specialty hospitals are located in 7 states. At least an additional 26 specialty hospitals were under development in 2003 and will tend to reinforce the existing pattern of geographic concentration.
Specialty hospitals are much more likely to be found in states where hospitals are permitted to add beds or build new facilities without first obtaining state approval for such health care capacity increases. Relative to general hospitals, specialty hospitals, as a group, were much less likely to have emergency departments, treated smaller percentages of Medicaid patients, and derived a smaller share of their revenues from inpatient services. For example, 45 percent of specialty hospitals, but 92 percent of general hospitals, had emergency departments. There were, however, important differences among the four specialty hospital types in these and other service indicators. Although general hospitals typically have more beds than specialty hospitals, the focused mission of specialty hospitals often resulted in their treating more patients in their given fields of specialization. Financially, specialty hospitals tended to perform about as well as general hospitals did on their Medicare inpatient business. However, specialty hospitals tended to outperform general hospitals when the costs from all lines of business and the revenues from all payers were considered. Officials from three specialty hospital organizations commented on a draft of this report. They generally agreed with the report's information and commented on key differences between specialty and general hospitals. |
This section provides an overview of material criticality and federal agencies’ critical materials roles. There is no single federal government-wide definition or list of what constitutes a critical material, and different assessments have demonstrated that there are a wide variety of materials that are critical to U.S. economic and national security interests. In a 2008 study on critical minerals, the National Academies of Sciences, Engineering, and Medicine’s Committee on Critical Mineral Impacts on the U.S. Economy developed a matrix to assess the criticality of a given mineral (see fig. 1). The horizontal axis represents the availability and reliability of the mineral supply (supply risk), and the vertical axis represents the importance of the mineral (impact of supply restriction). The degree of criticality increases from the lower-left to the upper-right corner of the figure, such that mineral A is considered more critical than mineral B. A determination that a mineral or other type of material is critical is generally based on some measure of the material’s importance, combined with a measure of the supply risk for the material. Supply risks include potential physical interruptions in the supply chain, market imbalances, and government interventions. For example, see the following:

Physical disruptions in the supply chain may include war or natural disasters.

Market imbalances may include oligopoly market power or inability to adjust supply quickly in response to changes in demand.

Government interventions may include export bans or restrictions on mining for environmental considerations.

Vulnerability to potential supply disruption varies depending on the importance of the material in question and other factors, such as the extent to which acceptable substitute materials are available and the extent to which supply of a critical material can be adjusted quickly in response to changes in demand.
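The committee's two-axis matrix can be expressed as a simple ordering in which criticality rises toward the upper-right corner. The multiplicative scoring below is an illustrative assumption, not the committee's actual method, and the 0-to-1 scales are hypothetical.

```python
def criticality_score(supply_risk, impact):
    """supply_risk: horizontal axis (availability and reliability of supply).
    impact: vertical axis (impact of a supply restriction).
    Both on a hypothetical 0-to-1 scale; a higher product places the mineral
    closer to the upper-right (more critical) corner of the matrix."""
    return supply_risk * impact

# Mineral A sits higher on both axes than mineral B, so it ranks as more critical.
mineral_a = criticality_score(supply_risk=0.9, impact=0.8)
mineral_b = criticality_score(supply_risk=0.4, impact=0.3)
```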
For materials that are extracted as coproducts or by-products of other mining operations, increased demand may not cause mining companies to produce more of them without additional sustained demand for their primary products. For example, according to a journal article, ruthenium is obtained almost entirely as a by-product of platinum production. In late 2006, demand for ruthenium expanded rapidly, in part, because of its increased use in hard disk drives. However, the supply of ruthenium did not respond to this increased demand, and the price of ruthenium rose rapidly to $870 per troy ounce by mid-February 2007, a ninefold increase from the previous year and a 29-fold increase from a low point in 2003. The materials supply chain in figure 2 shows the steps by which materials are extracted from mines, processed, transformed into semifinished components, and incorporated into end-use applications. The supply chain also shows the potential for recycling and reusing materials from finished applications, although materials can be reclaimed at any stage of the supply chain. There are a variety of ways in which federal agencies’ activities intersect with critical materials supply issues. For example, the federal government relies on advanced technologies in which critical materials may be used to support DOD’s national defense mission. DOD is responsible for determining which materials are strategic and critical for national defense and acquiring those materials. In addition, DOE, in support of its mission of ensuring the United States’ security and prosperity by addressing its energy, environmental, and nuclear challenges through transformative science and technology solutions, is focused on the supply of critical materials given the importance of such materials to certain energy and nuclear security technologies. The federal government may also affect the development of critical materials resources through its land management and regulatory activities. 
For example, the Department of the Interior’s Bureau of Land Management (BLM) manages approximately 950 million acres of the nation’s land, including subsurface acres, and has a role in reviewing and approving resource extraction projects on this land. The 1980 Act establishes a national policy of promoting an adequate and stable supply of materials necessary to maintain national security, economic well-being, and industrial production with appropriate attention to a long-term balance among resource production, energy use, a healthy environment, natural resources conservation, and social needs. The 1980 Act generally does not ascribe desired outcomes and responsibility for critical materials activities to individual agencies. However, the act does require the Secretary of Commerce, in consultation with other agencies, to continually identify and assess material needs cases to ensure an adequate and stable supply of materials to meet national security, economic well-being, and industrial production needs. The act also charges the President, through the Executive Office of the President, with coordinating federal departments and agencies to undertake a variety of activities to implement this policy, including establishing early warning systems for materials supply problems; promoting a vigorous, comprehensive, and coordinated program of materials research and development; encouraging federal agencies to facilitate availability and development of domestic resources to meet critical materials needs; providing for improved collection, analysis, and dissemination of scientific, technical, and economic materials information and data from federal, state, and local governments and other sources as appropriate; and assessing federal policies that adversely or positively affect all stages of the materials cycle, from exploration to final product recycling and disposal. 
The Subcommittee was organized as an interagency working group to help understand the issues that surround the production and use of critical materials, and to focus the government’s resources on mitigation of critical materials supply risks. The Subcommittee, initially chartered in 2010, was rechartered in April 2016. According to its charter, the Subcommittee is to facilitate a strong, coordinated effort across federal agencies to identify and address important policy implications arising from critical and strategic mineral supply issues. The charter identifies the following federal agencies and Executive Office of the President organizations as members of the Subcommittee.

Federal agencies:
- Department of Agriculture
- Department of Commerce
- Department of Defense
- Department of Education
- Department of Energy (co-chair)
- Department of Homeland Security
- Department of the Interior (co-chair)
- Department of Justice
- Department of Labor
- Department of State
- Department of the Treasury
- Environmental Protection Agency
- National Aeronautics and Space Administration
- National Science Foundation

Executive Office of the President organizations:
- Council on Environmental Quality
- National Economic Council
- National Security Council
- Office of Management and Budget
- Office of Science and Technology Policy (co-chair)
- Office of the U.S. Trade Representative

Although the Subcommittee was not chartered to implement the 1980 Act, many of the functions identified in its charter are similar to policies outlined in the act. Examples of functions identified by the Subcommittee charter that are similar to policies in the act include implementing and, as necessary, updating the methodology developed cooperatively by Subcommittee member agencies for dynamically assessing mineral criticality and for signaling emerging critical or strategic minerals; reviewing and analyzing domestic and global policies that affect the supply of critical and strategic minerals, assessing their implications on U.S.
manufacturing, and evaluating potential strategies for risk mitigation, as needed; identifying cross-agency opportunities in research and development and in education and training for addressing critical and strategic minerals across the life cycle spectrum, including extraction, processing, and recycling; and considering and offering recommendations for enhanced U.S. minerals data collection and economic analysis. The Subcommittee meets several times per year at varying intervals, according to OSTP officials. Subcommittee meeting agendas are developed by the co-chairs with input from member agencies. According to OSTP and DOE officials, agency participation on the Subcommittee is voluntary. Federal agencies are primarily focused on two areas of activity related to critical materials supply—assessing risk and supporting research—in addition to conducting a range of other activities. Agencies’ other critical materials activities include stockpiling or producing materials and reviewing and approving resource extraction projects, among other efforts. Agencies’ critical materials supply activities focus on two primary areas—assessing risk and supporting research. Federal agencies engage in a variety of activities to identify and assess risks related to critical materials supply. These activities include collecting and disseminating information on material supply and demand, conducting targeted analyses of specific sectors, and conducting broader assessments to determine which materials are critical for the U.S. economy or security. Commerce, DOD, DOE, DHS, Interior, and NASA conduct activities to identify and assess critical materials supply risk, as shown in figure 3: Interior. Interior’s U.S.
Geological Survey’s (USGS) National Minerals Information Center develops and provides statistics and information on the worldwide production, consumption, and flow of minerals and materials essential to the U.S. economy and national security. The center, established in 1996 under USGS upon the dissolution of the U.S. Bureau of Mines, produces a number of reports, including the annual Minerals Yearbook and the Mineral Commodity Summaries. The Minerals Yearbook is an annual publication that provides statistical data on approximately 90 commodities. It also includes data from over 175 countries on mineral production and trade, among other things. The Mineral Commodity Summaries, which is based on the data reported in the yearbook, covers a 5-year period and includes both historical data and production estimates for the current reporting year. Interior’s BLM also collects information related to mineral resources. Although BLM generally relies on data provided by USGS, it periodically issues mineral potential reports to assess the mineral resource occurrence and development potential on land related to particular mining applications or projects. For example, in 2012 BLM issued an assessment of the mineral potential of public lands located within a proposed solar energy zone in New Mexico. As part of the assessment, BLM evaluated whether certain minerals that are produced in New Mexico and classified as strategic and critical for national defense purposes, including bismuth, copper, fluorspar, manganese, tungsten, vanadium, and zinc, were found within the proposed solar energy zone. DOE. As part of its efforts to advance a clean energy economy, DOE conducted two criticality assessments on materials important to clean energy applications, such as wind turbines, electric vehicles, photovoltaic cells, and fluorescent lighting.
DOE’s first assessment, published in a 2010 Critical Materials Strategy, evaluated 14 materials and identified 10, including 7 rare earth materials, as critical or near critical over the short or medium terms. DOE’s second assessment, published in a 2011 Critical Materials Strategy, assessed 16 materials and identified 10 of them as critical or near critical over the short or medium terms. As part of its 2015 Quadrennial Technology Review, DOE also published a critical materials technology assessment that reported on major trends driving future material criticality for selected clean energy applications. Additionally, DOE manages the Isotope Development and Production for Research and Applications program (Isotope Program) through which it produces and distributes radioactive and stable isotopes that are in short supply but are critical for either federal government or U.S. commercial use. As part of the Isotope Program, DOE has a process to identify high-priority isotopes by monitoring long-term changes in demand within the isotope community that could affect isotope availability. DOD. Three DOD organizations have related responsibilities for managing risks from DOD’s use of “critical” and “strategic and critical” materials: the Defense Logistics Agency-Strategic Materials (DLA-Strategic Materials), the Office of the Deputy Assistant Secretary of Defense for Manufacturing and Industrial Base Policy, and the Strategic Materials Protection Board. DOD periodically issues two reports analyzing critical materials for defense needs according to statutory definitions of “critical” and “strategic and critical” materials. The Annual Industrial Capabilities Report to Congress provides analyses of sectors of the defense industrial base, such as aircraft and ground vehicles. The biennial Strategic and Critical Materials Report on Stockpile Requirements summarizes DLA-Strategic Materials’ analyses of materials for the National Defense Stockpile.
According to DLA-Strategic Materials officials and an official with DOE’s Oak Ridge National Laboratory, DLA-Strategic Materials also collaborated with DOE’s Oak Ridge National Laboratory and a private company to develop the Strategic Material Analysis and Reporting Topography software tool, which is a computer-based supply chain mapping tool that can visually represent the supply chain for any number of materials. Commerce. The department’s Bureau of Industry and Security is responsible for analyzing the capabilities of the U.S. industrial base to support national defense. The bureau conducted a strategic materials survey to evaluate the supply chains associated with several materials considered important to defense programs and systems. The resulting data set and report are intended to assist DOD in developing planning and acquisition strategies designed to ensure the availability of materials critical to defense missions. In addition to its work supporting DOD, Commerce’s International Trade Administration (ITA) convened two roundtables of industry and government participants to gather information on critical materials issues that may affect U.S. manufacturers and the competitiveness of U.S. industry. ITA’s Office of Materials Industries hosted the first roundtable in 2009 to discuss issues related to access to rare earth materials that could affect important end uses, such as clean energy technologies. ITA convened the second roundtable in 2012, in cooperation with the Subcommittee, to identify the materials, technologies, and supply chains that should be prioritized to develop an interagency assessment of critical minerals. DHS. Under the Critical Foreign Dependency Initiative, DHS identifies critical foreign infrastructure that, if disrupted, could significantly affect U.S. public health, economic vitality, industrial capability, or security. The initiative is a collaborative effort co-led by DHS and State, with other relevant agencies. 
According to a DHS official, such infrastructure can include mines or other production facilities that are foreign sources of critical materials, as determined by an interagency process. This assessment process involves both public and private sector partners responsible for critical infrastructure and key resources. Also, the DHS Science and Technology Directorate funded academic research examining the extent to which critical chemicals in the U.S. supply chain are being produced in foreign countries. NASA. Agency officials stated that NASA is analyzing its supply chains for materials that it deems essential to its mission. According to a 2012 presentation on its approach to critical materials management, NASA’s research and evaluation efforts target applied challenges in support of spaceflight, planetary and earth exploration, and aeronautics/aviation. NASA officials stated that many of the materials that the agency relies on are commonly used by both NASA and DOD and can include elements such as tungsten, chromium, and nickel that are used in making alloys. In the view of one NASA official we spoke with, these actions are aligned with the 2010 National Space Policy, which called for agencies to engage with industrial partners to improve processes and effectively manage supply chains, among other things. In addition to these six agencies’ efforts, the Subcommittee has also coordinated an interagency effort to develop a methodology to identify potentially critical materials for the U.S. economy or security, which it has described as an early warning screening. OSTP, DOE, and Interior’s USGS, through their participation as co-chairs of the Subcommittee, have led the effort to develop the early warning screening, with other Subcommittee members providing key input. 
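The Subcommittee’s actual screening computations are not described here; purely as a hypothetical illustration of how a first-stage criticality screen of this kind could be structured, the sketch below scores each mineral on a weighted combination of normalized risk indicators and flags those above a cutoff (all mineral names, indicator names, weights, and values are invented for the example):

```python
# Hypothetical first-stage criticality screen (illustrative only; the
# Subcommittee's actual indicators, weights, and data are not given here).

# Each mineral scored 0-1 on three invented indicators:
#   (supply_risk, production_growth, market_dynamics)
MINERALS = {
    "mineral_a": (0.9, 0.8, 0.7),
    "mineral_b": (0.2, 0.3, 0.4),
    "mineral_c": (0.7, 0.9, 0.6),
}

WEIGHTS = (0.5, 0.3, 0.2)  # hypothetical indicator weights
CUTOFF = 0.6               # hypothetical screening threshold

def composite_score(indicators, weights=WEIGHTS):
    """Weighted average of normalized indicator scores."""
    return sum(i * w for i, w in zip(indicators, weights))

def screen(minerals, cutoff=CUTOFF):
    """Return the minerals flagged as potentially critical for further study."""
    return sorted(
        name for name, indicators in minerals.items()
        if composite_score(indicators) >= cutoff
    )

print(screen(MINERALS))  # mineral_a and mineral_c exceed the cutoff
```

A real screen would rest on published indicator data rather than invented scores; the point of the sketch is only the two-step shape, a coarse composite score followed by a cutoff that selects candidates for in-depth study.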
In March 2016, the Subcommittee published a criticality assessment in which it reported on its progress in developing a screening methodology for critical minerals and the results of the initial application of this methodology. The methodology described in the Subcommittee’s report is the first step in a two-stage process to identify which minerals pose a risk of becoming critical. The Subcommittee screened 78 mineral resources using its methodology and identified 17 minerals as potentially critical. According to the March 2016 report, the next steps for the second stage of the process include (1) developing a prioritized list of a subset of the 17 potentially critical minerals for in-depth investigation, (2) developing individual project plans for those minerals for further study, and (3) carrying out the targeted studies in the next annual cycle. Federal agencies support research that encompasses a range of approaches to address critical materials supply issues, including projects to (1) discover or develop substitutes that can duplicate the unique properties of critical materials, (2) develop new approaches or technologies that minimize the use of critical materials, and (3) develop new approaches or technologies to increase the efficiency of domestic production of critical materials or enable the recycling of specific materials. Figure 4 shows federal activities related to critical materials research. The following federal agencies support research related to critical materials supply: DOE. The Critical Materials Institute (CMI), based at DOE’s Ames Laboratory in Iowa, is a 5-year, $120 million public-private partnership, with partners from other national laboratories, universities, and industry. CMI began operations in June 2013, and its mission is to help ensure supply chains of materials critical to clean energy technologies (see sidebar). 
CMI’s research efforts focus on diversifying the supply of materials, developing substitute materials, and improving the efficiency of material use and reducing waste, among other efforts. DOE officials told us that CMI collaborates informally with other DOE offices with efforts related to critical materials research. For example, CMI collaborated with DOE’s Advanced Research Projects Agency-Energy, which awarded $40.8 million to 14 projects in the Rare Earth Alternatives in Critical Technologies program to support early-stage development of rare earth-free magnetic materials, novel motor designs that reduce or eliminate the need for rare earth materials, and High Temperature Superconductor wires for large-scale wind generators with no rare earth magnets. Another example DOE officials cited was collaboration with the DOE Office of Fossil Energy’s National Energy Technology Laboratory to fund research on the recovery of rare earth elements from coal and coal by-products. According to the Department of Energy, as of May 1, 2016, Critical Materials Institute (CMI) research projects have resulted in 42 invention disclosures, 17 patent applications, and 1 licensed technology. One example is the development of a membrane solvent extraction system that aids in the recycling, recovery, and extraction of rare earth materials. The system was developed by researchers at Oak Ridge and Idaho National Laboratories and has been licensed to a U.S. company. According to CMI researchers, the recycling of critical materials from electronic waste has been limited by processing technologies that are inefficient, costly, and environmentally hazardous. The researchers report that this new simplified process, shown in the figure above, eliminates many of these limitations. The technology uses a combination of hollow fiber membranes, organic solvents, and neutral extractants to selectively recover rare earth elements such as neodymium, dysprosium, and praseodymium.
In laboratory testing, the membrane extraction system demonstrated the potential to recover more than 90 percent of neodymium, dysprosium, and praseodymium in a highly pure form from scrap neodymium-based magnets. The licensing company has indicated that it intends to apply the technology to recover rare earth elements from old electronics and from its mining claims in the United States. DOD. DOD funds critical materials research both through component agencies that support the entire department and through the Army, Navy, and Air Force research organizations. DOD’s research approach to mitigating the risk associated with the supply of critical materials used in weapon components has varied, but according to officials, critical materials have been studied as part of meeting mission requirements to increase performance and capabilities and to reduce costs of DOD technologies. For example, the Army Research Laboratory collaborated with academic and industrial partners to explore how to resolve the technical barriers to achieving a reliable domestic supply chain for certain rare earth materials. NSF. In fiscal year 2013, NSF started an initiative to encourage and foster research in sustainable chemistry, engineering, and materials to address the interrelated challenges of sustainable supply, engineering, production, and use of chemicals and materials. Examples of research topics in this area include replacing rare, expensive, or toxic materials with earth-abundant, inexpensive, and benign materials; discovering new techniques to facilitate recycling and producing valuable materials; and developing and characterizing low-cost, sustainable, and scalably manufactured materials with improved properties. NSF also supports the Center for Resource Recovery and Recycling, which addresses challenges related to materials recovery and recycling.
Researchers from the center developed a method of extracting rare earth elements from drive units and motors of discarded electric and hybrid vehicles. The goal of that work is to recycle rare earth materials that would otherwise be lost and create an alternative source of these materials. Interior. The department’s USGS supports research on nonfuel mineral resources. According to a senior USGS official, a priority area in this research is identifying and characterizing critical mineral resources through activities such as mineral resource assessments, mineral deposit models, and remote sensing exploration techniques. According to the official, the focus of these activities is on domestic mineral resources. In addition to activities in the primary areas described above, federal agencies conduct a wide range of other activities related to the supply of critical materials. Addressing trade issues. USTR plays a key role in the federal government’s efforts to address trade issues. While USTR does not have a specific program or focus area related to critical materials, the agency has worked, in collaboration with other federal agencies and international partners, to address trade issues affecting materials that are critical for a range of industries. For example, USTR led the federal government’s World Trade Organization (WTO) dispute against China’s export restrictions on rare earth materials, tungsten, and molybdenum, resulting in a finding that the export restrictions were inconsistent with China’s WTO obligations, and continues to monitor China’s actions to ensure compliance with the WTO decision. According to USTR officials, the agency also engages in activities to create more transparency about export restraints, such as maintaining ongoing trade dialogues on raw materials and working with other countries within the Organisation for Economic Co-operation and Development to create an inventory of trade restrictions related to raw materials and energy.
In addition to USTR’s efforts, the Subcommittee has also played a role in addressing trade issues. For example, in 2013 the Subcommittee requested changes to the Harmonized Tariff Schedule of the United States that, according to OSTP officials, provided more granular data on U.S. imports of rare earth materials, among other changes. Similarly, in 2014 the Subcommittee submitted a request for additional changes to the Harmonized Tariff Schedule to provide more granular data on U.S. imports of permanent magnets, among other changes. Coordinating internationally. Federal agencies have coordinated with international partners on critical materials issues through different forums. For example, the EU-US-Japan Trilateral Conference on Critical Materials—which is jointly organized by the European Commission; DOE; and the Japanese Ministry of Economy, Trade, and Industry (METI)—has taken place for 5 consecutive years to exchange information on recent developments in critical materials research and development. According to a DOE official, the first few conferences began with high-level policy discussions, but they have become focused more on researcher-to-researcher exchanges about technology efforts. Another example is the Transatlantic Economic Council, which, in 2011, agreed to launch a cooperative platform on raw materials focusing on five areas: (1) trade cooperation; (2) raw materials data, flows, and information sharing; (3) resource efficiency and recycling; (4) research and development on raw material substitution and reduction; and (5) waste shipment. According to a State Department official, individual federal agencies have led U.S. efforts in each focus area based on their individual missions. For example, USTR led efforts in trade cooperation; USGS led efforts in raw materials data; and DOE led efforts in research and development and recycling, with EPA’s assistance on recycling.
Other examples of international coordination that were described to us by federal agency officials include annual reviews of strategic stockpile issues between the United States, Japan, and the Republic of Korea, and U.S. participation in the G7 Alliance on Resource Efficiency. Reviewing and approving mining projects. BLM and the U.S. Forest Service oversee the extraction of minerals on federal land. BLM and Forest Service officials said that their agencies do not consider mineral criticality in their administration of mining projects. When a mining operator submits a plan for a new mine on federal land, either BLM or the Forest Service analyzes the potential impact of the proposed mine on the environment, human health, and cultural resources by conducting an analysis under the National Environmental Policy Act. The National Environmental Policy Act requires federal agencies to evaluate the likely environmental effects of a proposed project using an environmental assessment or, if the project is likely to significantly affect the environment, a more detailed environmental impact statement. From fiscal years 2010 through 2014, BLM and the Forest Service approved 68 hardrock mine plans, 2 of which were for materials that have been identified as critical by DOD—magnesium and manganese. Stockpiling or producing materials. DLA-Strategic Materials is responsible for storing select materials in the National Defense Stockpile to mitigate potential shortages based on certain national emergency planning assumptions. Based on the biennial analyses described previously, DLA-Strategic Materials makes recommendations to acquire specific forms and amounts of materials and then maintains these materials in the stockpile. Additionally, in 2005, DOD invested in a public-private partnership with the leading U.S. 
beryllium producer to build a new $90.4 million primary beryllium facility in Ohio to ensure current and future availability of high-quality domestic beryllium to meet critical defense needs. The federal government has also been extensively involved in the production, storage, and use of helium since the early part of the 20th century. BLM is responsible for managing the federal helium program, including an underground reservoir for the storage of federally and privately owned helium. The reserve provides a supply of federal helium to such agencies as DOD, DOE, and NASA that rely on the rare gas for research and medical and national defense applications. Further, under DOE’s Isotope Program, DOE produces and distributes radioactive and stable isotopes in short supply for commercial or federal needs. According to DOE officials, the federal government is uniquely suited to produce certain isotopes as production may require recycled or reused national security-related source materials, big accelerators, and research facilities that are only available within the federal government, or it is not profitable for industry to provide the small amounts of isotopes needed for research applications. Promoting technical education and workforce development. DOE’s CMI offers a variety of educational opportunities through several partners, including the Colorado School of Mines, Iowa State University, and the University of Tennessee, Knoxville. For example, in November 2015, CMI announced the development of a three-credit on-line course, offered through Iowa State University for the 2016 spring semester, focused on rare earth materials. According to CMI’s announcement, the course covers a wide range of topics related to rare earth materials, including extraction, separation, preparation and purification; properties related to these materials; and other topics. 
Additionally, students at the University of Tennessee, Knoxville, have been evaluating conceptual processes for recovery of rare earths from unconventional resources. CMI also provides science and engineering outreach to elementary and high school students through its partnership with the Colorado School of Mines. As described above, NSF supports critical materials research. According to NSF’s research proposal and award policies and procedures guidance, one of the strategic objectives in support of NSF’s mission is to foster integration of research and education through the programs, projects, and activities it supports at NSF awardee organizations. NSF supports development of a strong science, technology, engineering, and mathematics (STEM) workforce by investing in building the knowledge that informs improvements in STEM teaching and learning. NSF expects research proposals to discuss the broader impacts of proposed activities, such as improved STEM education and educator development, and development of a diverse, globally competitive STEM workforce. Recycling and sustainable materials management. Through its Sustainable Materials Management program, EPA engages with public and private stakeholders to advance the productive and sustainable use of materials across their life cycles. According to EPA officials, the agency is in a unique position to lead in the effort of getting industry involved in addressing critical materials consumption. In 2009, EPA published a report outlining measures it could take to promote efforts to manage materials and products on a life cycle basis with a goal of sustainable materials use. Additionally, EPA co-chaired an interagency task force on electronics stewardship, which produced a 2011 National Strategy for Electronics Stewardship that included goals and recommendations, among other things, to improve the ability to recover and market valuable materials from used electronics, especially precious metals and rare earth materials.
Supporting commercialization of new technologies. The National Institute of Standards and Technology supports industrial adoption of rare earth materials substitutes by providing material measurement science and developing data and models. For example, the institute provides standard reference materials that measure the intensity of magnetism that can be induced by magnetic fields, which is of interest to the permanent magnet industry—a major user of rare earth materials. Additionally, the Materials Genome Initiative—under the National Science and Technology Council’s Subcommittee on the Materials Genome Initiative—is a multiagency initiative designed to discover, develop, and manufacture the next generation of materials to meet national needs. The EU, Japan, and Canada have different approaches to address critical materials supply issues. According to the EU policy documents that we reviewed, the EU has a collaborative, economy-wide approach that incorporates sustainability. According to the government officials that we interviewed, Japan’s approach focuses on securing access to foreign sources and conducting materials science research to bolster industrial competitiveness. According to government reports that we reviewed, Canada encourages resource production by providing tax incentives and improving the efficiency of regulatory reviews. The EU has developed a collaborative, economy-wide approach to addressing the supply of critical materials that incorporates a focus on developing a more sustainable and resource-efficient economy.
The EU’s Raw Materials Initiative, which was outlined by the European Commission in its 2008 communication to the European Parliament and Council, has three pillars: (1) ensure access to raw materials from international markets under the same conditions as other industrial competitors, (2) set the right framework conditions within the EU in order to foster a sustainable supply of raw materials from European sources, and (3) boost overall resource efficiency and promote recycling to reduce the EU’s consumption of primary raw materials and decrease the relative import dependence. The Raw Materials Initiative is implemented, in part, through the European Innovation Partnership on Raw Materials (Partnership)—a stakeholder platform that brings together EU countries, companies, researchers, and nongovernmental organizations to promote innovation in the raw materials sector. According to EU officials, the Partnership has defined 95 actions to be carried out both within the EU and internationally, in order to secure the EU supply of raw materials via innovation. In 2014, an independent expert group studied the Partnership model and found that it has been a useful vehicle in bringing partners together with a view to align priorities, leverage investments, and form future partnerships. The group’s report stated that European innovation partnerships have generally been good in ensuring extensive participation of all relevant stakeholders, and they have also created effective channels for the interested actors to become engaged in the partnerships, including through invitations for commitments. Figure 5 shows key information about the Partnership. According to EU officials, the majority of the Partnership’s priorities have been reflected in Horizon 2020, the EU research and development funding program for 2014 to 2020. Horizon 2020 has several broad pillars, one of which is climate action, environment, resource efficiency, and raw materials. 
According to a European industry association we interviewed, an example of efforts in this area involves trying to find ways to provide more supply for raw materials from the EU. Association officials told us that mining ventures tend to raise significant social opposition, which can diminish potential for getting projects under way. According to the officials, this aspect of the Horizon 2020 program tries to take a social approach to mining by using advanced technology to help address social opposition. This focus on public awareness is also an action area outlined in the Partnership’s 2013 Strategic Implementation Plan. The action area is mostly industry-led but is also supported by concerned stakeholders—communities, institutions, and regulatory bodies—at all levels. It aims to first increase public awareness of the benefits and potential costs of raw materials supply and then gain public acceptance and trust by improved communication and transparency, notably during the permitting process and the production cycle (i.e., exploration, mine operation, and after mining). The Partnership states that it will play an important role in meeting the objectives of Resource Efficient Europe—an initiative under the Europe 2020 strategy that supports the shift toward a resource-efficient, low-carbon economy to achieve sustainable growth—by ensuring the sustainable supply of raw materials to the European economy. This illustrates the connection within EU policy between the criticality of certain raw materials and the goal of shifting towards a more resource-efficient economy and sustainable development. This connection is also evident in the second and third pillars of the Raw Materials Initiative, listed above, which focus on sustainability and recycling. As stated in the European Commission’s 2008 communication on the raw materials initiative, the EU views boosting overall resource efficiency as a key part of a path toward a secure supply of raw materials.
The Raw Materials Initiative also called for the EU to identify a common list of critical raw materials for the EU’s economy. To develop this list of critical raw materials, the EU set up the Ad-Hoc Working Group on Defining Critical Raw Materials, which comprises experts across government, industry, and academia, as described in the Working Group’s 2010 report. The European Commission, with the Ad-Hoc Working Group, published its first criticality analysis for raw materials in 2010. In that analysis, 14 critical raw materials were identified from a candidate list of 41 nonenergy, nonagricultural materials. In 2013, the commission and the working group, in cooperation with a group of researchers, updated this work and analyzed 54 nonenergy, nonagricultural materials, identifying 20 of them as critical raw materials. EU officials we interviewed stated that they believe that the list of critical materials is useful for prioritizing and identifying relevant research, raising awareness, fostering trade negotiations, and communicating with stakeholders, such as trade and industry groups. According to the officials, the list is also used to incentivize the European production of critical raw materials and facilitate the launching of new mining and recycling activities. In addition to the Ad-Hoc Working Group on Defining Critical Raw Materials, which conducts official criticality analyses, there are a number of stakeholder organizations in the EU and in EU member states that support collaboration between industry, government, and academia. Examples include the European Institute of Innovation and Technology Knowledge and Innovation Community on Raw Materials and a future Expert Network on Critical Raw Materials, which will be launched under Horizon 2020 by the European Commission.
According to a report on the raw materials strategies of industrialized countries, Japan’s heavy dependence on metal and mineral imports has led it to focus on securing access to foreign sources of materials and exploring substitute materials through materials science research as a way to ensure its continued industrial competitiveness. According to government officials we interviewed, Japan’s METI sets policy for raw material supplies. Officials told us that METI has established a five pillar strategy for the supply of rare metals: (1) promoting initiatives to secure resources overseas, (2) promoting recycling and development of smelting technology, (3) developing resource-saving and substitute materials, (4) stockpiling rare metals, and (5) developing marine resources. According to government officials we interviewed, the Japanese government, through the Japan Oil, Gas and Metals National Corporation (JOGMEC), secures access to critical materials by providing direct funding to exploration and development projects around the world. JOGMEC’s efforts fit into METI’s policy framework under four of the five pillars—it is not involved in developing resource-saving and substitute materials. JOGMEC officials said that a primary aspect of JOGMEC’s critical materials supply efforts is to provide financial and other types of assistance, such as liability protection, to Japanese companies for overseas mineral exploration or development projects. For example, JOGMEC officials said that they can engage in joint venture exploration projects with foreign companies. If the exploration proves fruitful, JOGMEC officials said that they can transfer JOGMEC’s contractual interest in a project to a Japanese company. The officials said that this type of assistance can help to insulate Japanese companies from the impact of price shocks in individual materials markets. 
JOGMEC is also involved in a seabed exploration project seeking to help verify the feasibility of collecting rare earth materials from the ocean floor. In addition, government officials told us that JOGMEC also engages with experts from across Japan’s domestic industries, including recycling, automobile manufacturing, and telecommunications, to develop a material flow analysis that can pinpoint bottlenecks in the supply chain. JOGMEC started doing this kind of analysis more than a decade ago, more to identify bottlenecks in the supply chain than to provide material supply forecasts, officials told us. The officials told us that currently JOGMEC conducts material flow analyses for 42 materials. Officials also said that JOGMEC’s critical materials efforts reflect a strong relationship between the government and the private sector in Japan. According to JOGMEC officials, investors tend to be more focused on new technologies, whereas the important role for the government is to take a medium-to-long-term view of the trends. According to government officials, Japan has also been a leader in materials science research, and in 2007 the Japanese government began funding the Element Strategy, which was aimed at overcoming the limitation of natural resources by finding alternative materials for new and existing goods and processes. Under the Element Strategy, the Japanese government initiated a research collaboration between industry and academia wherein researchers worked to identify the unknown physical properties of all the elements in the periodic table in order to use each element to the fullest extent possible. In 2012, the Japanese government began a successor research and development program, which has been funded for 10 years. Figure 6 shows key information about Japan’s Element Strategy. Canada’s focus on raw materials is to attract investment in its mining sector through tax incentives, research, and increased efficiency of regulatory reviews. 
Officials from Natural Resources Canada, the government ministry responsible for natural resources, energy, minerals and metals, forests, earth sciences, mapping, and remote sensing, stated that critical raw materials are important in the context of leveraging opportunities for economic development through the production and export of mineral products. According to a Canadian report to the United Nations (UN) Commission on Sustainable Development, Canada’s mining sector plays an important part in the overall economic development of Canada. According to that report, provincial governments are largely responsible for the exploration, development, and extraction of mineral resources and the construction, management, reclamation, and closeout of mine sites in their jurisdiction. The report also states that the Canadian federal government’s responsibilities mainly pertain to international affairs, trade, and investment, including development assistance; fiscal and monetary policy; science and technology; and regulation of all activities related to mineral development in the territory of Nunavut. According to officials from Natural Resources Canada, the Canadian federal, provincial, and territorial governments share responsibilities for the protection of the environment. Proposed mine developments usually require separate federal and provincial environmental impact assessments and regulatory approvals. Canada has taken a number of actions at both the federal and provincial levels to encourage investment in the mining sector, according to officials from Natural Resources Canada. According to officials we interviewed and reports we reviewed, tax incentives are a way the Canadian government encourages investment in the mining sector. According to officials from Natural Resources Canada, junior mining companies have no regular source of income and often have difficulty raising capital to finance their exploration and development activities.
According to officials from Natural Resources Canada, Canada’s flow-through share (FTS) mechanism allows principal business corporations, particularly junior mining companies, to obtain equity financing for mineral exploration and development in Canada, whereby a mineral exploration or mining company can transfer, or flow through, the tax deductions arising from its eligible exploration expenses to the FTS investors, giving them the benefit. In addition, investors can receive a 15 percent Mineral Exploration Tax Credit (METC) for qualifying surface or above-surface exploration expenditures. According to information from the Natural Resources Canada website, for the individual investors, the advantages of investing in an FTS can be that they (1) receive a 100 percent tax deduction for the amount of money they invested in the shares, plus the 15 percent METC in the case of an eligible expense, and (2) may see the value of their investment appreciate in the event of successful exploration. According to the report to the UN Commission on Sustainable Development, a number of provinces also have a tax credit that harmonizes with the federal package, which makes individual investors’ net costs of FTS investment less than half of their initial amounts. Another example of Canada’s investment in the mining sector is through its research investments. According to officials from Natural Resources Canada, Canada invested C$100 million (U.S. $78 million) over 7 years (2013 through 2020) in the Geo-mapping for Energy and Minerals program to develop new energy and minerals resources and promote responsible land development. Officials told us that Canada also dedicated C$23 million (U.S. $18 million) over 5 years (starting in 2015-2016) to stimulate the technological innovation needed to separate and develop rare earth elements and chromite.
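The tax mechanics described above reduce to simple arithmetic. The sketch below is an illustration, not tax guidance: the 100 percent deduction and 15 percent federal METC come from the description above, while the investor's marginal tax rate and the provincial credit rate used in the example are assumed values, and real FTS treatment involves additional rules that are ignored here.

```python
def fts_net_cost(investment, marginal_tax_rate, metc_rate=0.15,
                 provincial_credit_rate=0.0):
    """Estimate an investor's after-tax net cost of a flow-through share
    (FTS) investment. Illustrative only; rates other than the 15 percent
    federal METC are assumptions."""
    # 100 percent of the investment is deductible against income at the
    # investor's marginal rate.
    deduction_saving = investment * marginal_tax_rate
    # 15 percent federal Mineral Exploration Tax Credit on eligible
    # exploration expenditures.
    federal_credit = investment * metc_rate
    # Some provinces layer a harmonized credit on top of the federal package.
    provincial_credit = investment * provincial_credit_rate
    return investment - deduction_saving - federal_credit - provincial_credit

# Example: C$10,000 invested at an assumed 45 percent marginal rate, in a
# province with an assumed 5 percent harmonized credit.
cost = fts_net_cost(10_000, marginal_tax_rate=0.45, provincial_credit_rate=0.05)
print(cost)  # 3500.0 -- well under half the initial C$10,000
```

This is consistent with the UN report's observation that, where provincial credits harmonize with the federal package, investors' net costs can fall below half of the amounts they put in.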
In addition to providing financial incentives for investing in the mining industry, the Canadian government has also focused on improving the efficiency of regulatory reviews of mining and other major projects. In 2007, the Canadian government launched the Major Projects Management Office (MPMO) Initiative to improve the effectiveness and efficiency of the federal regulatory review process, while ensuring careful consideration of environmental protection, consultation obligations, and industry competitiveness. According to a 2012 report from Natural Resources Canada on its evaluation of the MPMO Initiative, there are eight participating departments and agencies that have agreed to implement the initiative both individually and in collaboration: the Aboriginal Affairs and Northern Development Canada, the Canadian Environmental Assessment Agency, Fisheries and Oceans Canada, Environment Canada, Transport Canada, the Canadian Nuclear Safety Commission, the National Energy Board, and Natural Resources Canada. The report states that through the initiative, the MPMO was established to conduct a range of activities that according to Natural Resources Canada officials, were intended to improve the accountability, transparency, timeliness, and predictability of the federal regulatory review process for major resource projects. The report further states that the mandate of the MPMO is to provide (1) major project management and coordination and (2) policy leadership, including problem-solving of short- to medium-term issues. In the area of project management and coordination, the MPMO’s role includes coordinating the development of project agreements that include target timelines, ongoing project and performance monitoring, tracking and reporting, and administering the MPMO Tracker—a publicly accessible web-based monitoring system for major resource projects that can be updated in real time. 
The 2012 evaluation of the MPMO Initiative by the Canadian government covered a number of issues, including the achievement of expected outcomes and demonstration of the efficiency and economy of the permitting process for mining projects. According to the report on the evaluation, the Canadian government found that the integration and federal coordination of environmental assessments and regulatory reviews increased under the initiative. In addition, as noted in the report, the evaluation also found that transparency and accountability of the federal regulatory process within the Canadian government increased significantly through the initiative. According to the evaluation, the Canadian government timelines were viewed by internal and external stakeholders as improving because of increased capacity and improved integration and coordination, but efforts to quantitatively demonstrate to what extent these improvements had translated into increased overall predictability of the Canadian government’s permitting process were limited. According to officials from Natural Resources Canada, Canada’s 2015 Economic Action Plan proposed providing C$135 million (U.S. $105 million) over 5 years (starting in 2015-16) to continue to improve the efficiency and effectiveness of project approvals through the MPMO Initiative. Figure 7 shows key information about Canada’s MPMO. The federal government’s approach to addressing critical materials supply issues has areas of strength, according to experts we surveyed, but is not consistent with selected key practices for enhancing and sustaining interagency collaboration and has other limitations. For example, federal government efforts to assess risks and conduct critical materials research have been identified by experts as strengths.
However, the federal government’s approach to addressing critical materials supply issues has not been consistent with selected key practices for interagency collaboration, such as ensuring that agencies’ roles and responsibilities are clearly defined. In addition, the federal critical materials approach faces other limitations, including data limitations and a focus on only a subset of critical materials, a limited focus on domestic production of critical materials, and limited engagement with industry. Experts that we surveyed identified areas of strength in the federal government’s approach to addressing critical materials supply issues. The most commonly cited strengths were in federal efforts to identify and assess risks in certain industrial sectors and to conduct research related to critical materials. Among the strengths cited by experts in identifying and assessing risks was USGS’s collection of data to support assessing critical materials supply risks. In particular, experts responding to the first round of our survey identified efforts by USGS to compile and provide data on mineral deposits and supply and demand for minerals as strengths. One expert lauded USGS data and knowledge about the distribution of critical materials throughout the United States and the rest of the world. Another strength cited by an expert in the area of identifying and assessing risks included DLA-Strategic Materials’ critical materials assessments. In the area of conducting research related to critical materials, experts cited DOE’s CMI as a strength in the federal approach to developing methods that address the supply of critical materials, primarily rare earth materials. 
For example, one expert stated that the formation of CMI was a very positive step to address specific material shortages (rare earth materials, especially) from a scientific perspective, and to develop methods for using less material in specific applications, develop substitutes, and improve recycling of such materials. We found that the research funded by DOE’s CMI has largely focused on projects related to rare earth materials. Specifically, according to DOE officials, 30 of CMI’s 34 funded projects as of April 2016 have been related to rare earth materials. In addition, experts in the second-round survey rated as adequate certain available data collected by the federal government in its effort to identify and assess risks with regard to the supply of critical materials. For example, when asked in the second-round survey to rate the adequacy of different types of critical materials data available, a majority of experts who responded described available data on (1) actual U.S. domestic production of materials and (2) resource potential and inventory for sources or deposits of materials located within the United States as somewhat or very adequate, as shown in table 1. The federal government’s approach to addressing critical materials supply issues is not consistent with selected key practices that we have previously identified that can help enhance and sustain interagency collaboration. Collaboration can be broadly defined as any joint activity that is intended to produce more public value than could be produced when the organizations act alone. As described above, a number of federal agencies conduct activities related to critical materials supply across the primary areas of effort—assessing risk and supporting research—as well as a range of other activities.
In our April 2015 guide to evaluating and managing fragmentation, overlap, and duplication, we define fragmentation as those circumstances in which more than one federal agency, or organization within an agency, is involved in the same broad area of national need, and opportunities exist to improve service delivery. This definition applies concerning federal agencies’ critical materials activities, with more than one agency involved in the same broad area of national need. However, as shown by the agencies’ critical materials activities described above, agencies’ activities sometimes differ in meaningful ways or leverage the efforts of other agencies. In this context, we have reported that collaboration is an option that can reduce or better manage fragmentation of federal programs. As an interagency working group and according to its charter, the Subcommittee is to facilitate a strong, coordinated effort across its member agencies on critical minerals activities. However, we identified aspects of the Subcommittee’s efforts, which represent the federal approach, that are not consistent with key practices for enhancing and sustaining interagency collaboration. These practices include agreeing on roles and responsibilities; establishing mutually reinforcing or joint strategies; and developing mechanisms to monitor, evaluate, and report on results. One practice we identified that can help enhance and sustain interagency collaboration is agreeing on roles and responsibilities, including leadership. We reported that collaborating agencies should work together to define and agree on their respective roles and responsibilities, including how the collaborative effort will be led. In doing so, agencies can clarify who will do what, organize their joint and individual efforts, and facilitate decision making. Consistent with this practice, OSTP, DOE, and USGS have taken key roles as co-chairs of the Subcommittee. 
However, there are a number of Subcommittee member agencies, such as Education, Labor, EPA, DHS, and USDA, that are designated as members in the Subcommittee charter but do not have clear roles within the Subcommittee’s efforts and have had limited or no involvement in the Subcommittee’s work on critical materials. For example: EPA officials stated that EPA is in a unique position to lead in certain government-wide efforts, such as electronic waste recycling, that could be important for facilitating the recycling and reuse of critical materials. However, one EPA official stated that EPA viewed its role on the Subcommittee as limited. Specifically, EPA has had some involvement as a member of the Subcommittee but has not been coordinating with the Subcommittee on federal efforts to facilitate the recycling and reuse of critical materials. EPA officials stated that the Subcommittee’s activities were being driven primarily by other agencies, and EPA officials did not view the Subcommittee’s activities as being focused on sustainable materials management—an area where EPA has expertise. Education and Labor lead federal efforts on education and workforce issues. A 2013 National Academies of Sciences, Engineering, and Medicine report on workforce trends in the U.S. energy and mining industries highlighted the role that Education and Labor could play in helping to address education and workforce issues related to those industries, which include industries related to the supply of critical materials. Among the report’s recommendations was for Education to collaborate with Labor, state departments of education, and national industry organizations to convene workshops with industry, government, and educational leaders. However, although Education and Labor are designated as members in the Subcommittee charter, there is no indication that either agency has ever participated in Subcommittee meetings.
Officials from Labor stated that they were unaware of the Subcommittee and their agency’s designation as a member on the Subcommittee until we contacted them during the course of this review. Officials from Education stated that they were unable to identify anyone who participated on the Subcommittee and that there were no records of anyone from Education having participated. USDA’s Forest Service reviews and approves mine plans for operations that have included the mining of critical materials on the lands it manages. Although USDA is designated a member of the Subcommittee in its charter, according to agency officials, USDA did not have representation on the Subcommittee until August 2015 when a mining operator applying for a permit informed Forest Service officials about the Subcommittee. Forest Service officials told us that because they now know about their role on the Subcommittee, they plan to attend meetings regularly and be more involved in activities. DHS analyzes U.S. dependence on foreign infrastructure, including foreign sources of critical materials. The agency is designated as a Subcommittee member in the charter; however, DHS officials stated that, until we contacted them during the course of our review, no one had been tasked to represent the agency on the Subcommittee. A DHS official told us that he is now on OSTP’s list of agency contacts for the Subcommittee. DHS analyses of foreign infrastructure could help to inform the analysis that the Subcommittee has developed for the early warning screening system, as well as DOD’s analyses for its stockpiling assessments. Some experts we surveyed also noted the lack of clarity in agencies’ roles and responsibilities with regard to federal coordination efforts in addressing the supply of critical materials. 
Sixteen out of 36 experts responding to our survey indicated that the roles and responsibilities of government agencies with respect to critical materials were not very clearly defined or not defined at all. For example, one expert stated that too many agencies have their own agendas and therefore the federal effort is not coordinated. Relatedly, another expert noted that Commerce does not have a clearly defined role to support critical materials important to the economy. Our work has shown that although collaborative mechanisms differ in complexity and scope, they all benefit from certain key features, including the clarity of roles and responsibilities and ensuring that the relevant participants are included in the collaborative effort. Specifically, key practices call for participating agencies to consider clarifying their roles and responsibilities and whether all relevant participants have been included. We have reported that clarity about roles and responsibilities can be codified through laws, policies, memorandums of understanding, or other requirements. By agreeing on and clearly defining roles and responsibilities of their members, collaborating agencies clarify which agency will do what, organize their joint and individual efforts, and facilitate decision making. Furthermore, experts we contacted for our 2012 report on key considerations for implementing interagency collaborative mechanisms said, among other things, that it is helpful when the participants in a collaborative mechanism have full knowledge of the relevant resources in their agency and the ability to commit these resources and make decisions on behalf of the agency. We noted earlier that the EU has created a mechanism to bring together relevant stakeholders in the area of critical materials to align priorities, leverage investments, and form future partnerships. 
According to OSTP officials, the Subcommittee’s efforts are generally based on the level of involvement and resources of member agencies, with certain agencies taking the lead for certain activities. However, OSTP, as part of the Subcommittee’s leadership, did not point to efforts made to engage member agencies in more active participation in the Subcommittee. By taking steps to actively engage all member agencies in its efforts and clearly define roles and responsibilities, the Subcommittee will have more reasonable assurance that it can effectively marshal the potential contributions of all member agencies to take full advantage of their expertise and resources to help identify and mitigate critical materials supply risks. Moreover, the 1980 Act outlines a range of policies to promote an adequate and stable supply of materials, including assessing the availability of technically trained personnel, as well as supporting research related to recovery and recycling of materials, among others. In addition to enhancing interagency collaboration on critical materials activities, actively engaging all member agencies may also present an opportunity for the Subcommittee to more fully incorporate the policies of the 1980 Act into the federal approach for addressing critical materials supply issues. Another key practice we identified that can enhance and sustain interagency collaboration is establishing mutually reinforcing or joint strategies designed to help align activities, core processes, and resources to achieve a common outcome. However, federal critical materials efforts are not guided by joint strategies to achieve a common outcome. The Subcommittee’s charter outlines general areas of effort for its work but does not specify the outcome or outcomes that the Subcommittee plans to achieve. The Subcommittee’s member agencies have not worked together to develop joint strategies to guide their activities. 
OSTP officials indicated that member agencies are responsible for determining which activities to undertake based on the agencies’ resources and mission. The Subcommittee does not direct member agency activities, and there has been no discussion within the Subcommittee of creating a joint strategy. Experts also identified issues with the extent to which the federal approach to addressing critical materials supply issues supports achieving desired outcomes in response to our survey. For example, 28 out of 36 experts responding to our survey indicated that the federal government’s objectives with respect to critical materials were not clearly defined or not defined at all, and 20 out of 36 indicated that the extent to which federal agencies’ activities are mutually reinforcing with regard to critical materials was small or nonexistent. We have previously reported that to achieve a common outcome, collaborating agencies need to establish strategies that work in concert with those of their partners or are joint in nature. Developing joint strategies can help align partner agencies’ activities, core processes, and resources to accomplish a common outcome. Developing joint strategies to articulate common outcomes and identify member agencies’ efforts could help the Subcommittee better coordinate agencies’ critical materials activities to ensure that they are mutually reinforcing. An additional key practice we identified that can enhance and sustain interagency collaboration is developing mechanisms to monitor, evaluate, and report results. Federal agencies engaged in collaborative efforts need to create the means to monitor and evaluate their efforts to enable them to identify areas for improvement. However, the Subcommittee does not have a mechanism to monitor and evaluate progress across all areas of its activities. 
OSTP officials did not think that monitoring the progress of activities was the Subcommittee’s responsibility because individual activities are funded by member agencies, and therefore those agencies would be responsible for tracking progress. However, without a mechanism to monitor and evaluate its efforts, the Subcommittee may be missing an opportunity to fulfill a policy of the 1980 Act, which calls for establishing a mechanism to evaluate federal materials programs. Also, key practices call for reporting on the activities of agencies engaged in collaborative efforts to help key decision makers within the agencies, as well as clients and stakeholders, obtain feedback for improving both policy and operational effectiveness. OSTP officials stated that they provide reports as necessary on specific Subcommittee activities, in line with the reporting practices for other NSTC subcommittees. For example, as noted earlier, in March 2016, the Subcommittee published a report on its progress in developing a screening methodology for critical minerals and the results of the initial application of this methodology. However, since it was established in 2010, the Subcommittee has not reported periodically on the progress of all of its efforts to address critical materials supply issues. According to OSTP officials, the Subcommittee leaves regular reporting on the progress of activities to the member agencies as part of their standard agency oversight measures. However, there is no member agency that is responsible for reporting on all of the Subcommittee’s efforts. Periodic reporting on the progress of the Subcommittee’s activities could help key decision makers within the member agencies and Congress, as well as other stakeholders, obtain feedback for improving both policy and operational effectiveness.
We identified other limitations in the federal approach to addressing critical materials supply issues through our expert survey, review of the Subcommittee’s criticality assessment, and analysis of other information we collected. These include limited federal engagement with industry to identify U.S. critical material needs; inadequate data for identifying and assessing risks, together with the Subcommittee’s focus on only a subset of critical materials; and the Subcommittee’s limited focus on domestic production of critical materials. The federal government’s engagement with industry on an economy-wide basis to identify critical materials supply issues has been limited, according to our analysis and responses from the experts we surveyed. Although DOE and DOD have engaged with industry stakeholders in the clean energy and defense sectors through their efforts to address critical materials supply issues, we found that there has been limited federal government engagement with industry stakeholders outside of energy and defense. For example, officials that we interviewed from the semiconductor industry told us that they have concerns about the availability of certain gases that are critical to the semiconductor manufacturing process. However, company officials stated that they had not spoken with anyone within the federal government about their concerns; one trade association official stated that the organization did not know where in the federal government it should go to raise these concerns and that it was not aware of mechanisms to communicate information about supply disruptions to the government. Additionally, in response to our survey, a majority of experts, 25 out of 36, indicated that the level of attention that the federal government has paid to the criticality of materials important to industrial sectors outside of energy and defense was very or somewhat inadequate.
In comparison, slightly more than half of the experts we surveyed, 19 out of 36, indicated that the level of attention paid to materials important to sectors related to energy and defense was very or somewhat adequate. Commerce is responsible for soliciting information from a range of industry sectors to help identify and assess cases of materials needs. The 1980 Act requires Commerce, in consultation with other agencies, to continually identify and assess cases of materials needs, as necessary, to ensure an adequate and stable supply of materials to meet national security, economic well-being, and industrial production needs. In the early 1980s after the legislation was enacted, Commerce conducted two such assessments on critical materials related to the aerospace and steel industries. Both assessments were conducted by Commerce’s Minerals and Materials Task Force, which was chaired by the Director of ITA’s Office of Strategic Resources. However, Commerce officials could not identify any recent assessments on critical materials by the department. Commerce’s Office of Technology Evaluation within the Bureau of Industry and Security conducts industrial base surveys and assessments, but according to Commerce, those assessments are focused exclusively on the U.S. defense industrial base. Within Commerce, nondefense assessment functions reside in ITA. ITA held two industry roundtables related to critical materials, one in 2009 focused on rare earth materials and another in 2012 that was intended to help inform the Subcommittee’s assessment of critical minerals. According to ITA officials, roundtables are convened periodically, often when there is something new or important affecting industry, such as the concerns about the decreased global supply of rare earth materials. According to the officials, ITA’s role on the Subcommittee is to provide support by sharing and exchanging information from an industry and trade perspective. 
The officials indicated that ITA has no specific plans to conduct additional roundtables to identify industry concerns related to critical materials supply. ITA officials also stated that it was not within the purview of ITA’s industry-specific offices—Office of Energy and Environment Industries, Office of Health and Information Technology, and Office of Transportation Machinery—to meet with industry to engage on issues related to critical materials supply. ITA officials stated that they were not aware of Commerce’s responsibilities under the 1980 Act prior to our review. Proactive engagement with a range of industry stakeholders to identify critical materials needs was a feature we identified in other countries’ or regions’ approaches to address critical materials supply issues. For example, the Japanese government’s approach features close collaboration between government and industry through engagement with industrial stakeholders to develop materials flow analyses that can identify critical materials and pinpoint bottlenecks in supply chains. Because Commerce is not engaging with industry stakeholders across a range of industrial sectors to identify materials of concern, it may not have the comprehensive, current information it needs to fulfill its responsibilities under the 1980 Act to continually identify and assess cases of materials needs. The federal approach to addressing critical materials supply issues is limited by the inadequacy of certain data and a focus on a subset of critical materials. While experts we surveyed were generally positive about data on domestic production, resource potential and inventory, and imports and exports associated with the supply of critical materials, as described earlier, a majority of them found available data to identify and assess risks associated with the supply of critical materials to be very or somewhat inadequate. 
As shown in table 2, a majority of experts who responded to the survey thought that the availability of data was inadequate in a number of areas, including data to identify and assess risks on (1) actual foreign production; (2) the resource potential of critical materials in other parts of the world, including in and below the oceans; and (3) the quantity of material recycled. In addition, the Subcommittee’s March 2016 criticality assessment reporting on the development and initial application of a screening methodology represents an important step toward developing an early warning system. However, the report focuses on a subset of potential critical minerals, which it defined as nonfuel resources—elements or compounds—that are obtained by mining or refined from mined products, and in some cases includes such substances at various stages of processing. According to the Subcommittee’s report, the subset of minerals assessed in this initial screening was determined by the availability of suitable and consistent data. The report noted that, in addition to limitations of scope, a significant weakness common among all known criticality assessments is that they are not updated regularly, likely because of the complexity of the models employed, lack of necessary data, or lack of resources needed to perform such updates. Relatedly, the Subcommittee’s 2010 charter established that one of the functions of the Subcommittee would be to develop and periodically update methods for assessing the criteria for material designations as critical or strategic in the short, medium, and long terms, including an early warning mechanism for emerging critical or strategic materials. 
However, the Subcommittee’s 2016 charter narrowed this function to implement and, as necessary, update the methodology developed cooperatively by its member agencies for dynamically assessing mineral criticality and for signaling emerging critical or strategic minerals—notably replacing the word “material” with “mineral.” The Subcommittee’s focus on minerals excludes other materials that are important to industry and federal scientific research, such as rare gases like neon and argon. For instance, we learned from industry officials we interviewed that, beginning in 2014 during the conflict between the Ukrainian government and Russian-backed separatist groups, there was a decrease in the global supply of neon gas that led to a 20-fold price increase. Neon is generally produced as a by-product of steelmaking, and most of the global supply of neon comes from Ukraine and Russia. Neon is used for many industrial and research applications, including in the medical field and in the semiconductor industry to manufacture computer chips. For instance, an NIH official stated that the agency found out about the decreased global supply of neon through one of its grantees that needed the gas for medical research. According to the NIH official, the decreased supply of neon gas has resulted in researchers rationing the gas, which restricts research activities. The official stated that in one case the agency provided supplemental funds to assist a researcher in conducting experiments using alternative laser systems that did not depend on neon gas, but the experiments were unsuccessful using those lasers. According to the NIH official, federal intervention to ensure the availability of neon and other rare gases would improve the agency’s ability to advance its mission. The Subcommittee’s criticality assessment report notes that the development of the screening methodology and the regular publication of its results address aspects of the 1980 Act.
As noted above, the 1980 Act calls for the creation of early warning systems for materials supply problems, and defines “materials” as substances, including but not limited to minerals, needed to supply the industrial, military, and essential civilian needs of the United States. The Subcommittee’s report indicated that additional minerals could be included in the early warning screening in the future as additional data become available. However, the Subcommittee has not developed a plan or strategy to prioritize additional materials needed by industry and federal research and to determine how to obtain data that would allow them to be included in the early warning screening in the future. One potential mechanism for obtaining data on additional materials is the North American Industry Classification System, which is the standard used by federal statistical agencies—several of which are part of Subcommittee member agencies (e.g., Labor’s Bureau of Labor Statistics and DOE’s Energy Information Administration)—in classifying business establishments to collect, analyze, and publish statistical data related to the North American economy. The system is reviewed through an international process every 5 years and uses a production-oriented conceptual framework to group establishments into industries based on the activity in which they are primarily engaged. Establishments using similar raw material inputs, similar capital equipment, and similar labor are classified in the same industry, so that establishments that do similar things in similar ways are classified together. The current 2012 industry classifications in use under this system were issued in 2011. The U.S. Economic Classification Policy Committee is reviewing comments on its recommendations for the 2017 revisions to the system, after which it will begin the process of soliciting proposed revisions for implementation in 2022.
During the revision process, the Economic Classification Policy Committee solicits and evaluates requests for revisions to the North American Industry Classification System. A Labor official said that if there is a need to classify segments of industries at a more granular level, it would be important to communicate these needs for the next revision cycle. For example, there is one North American Industry Classification System code that covers all industrial gases. If the Subcommittee found that there was the need for additional information on a specific industrial gas, such as neon, it could use the upcoming revision process to request a change to incorporate additional granularity into the system to differentiate between different industrial gases. This would be similar to the changes that the Subcommittee, working with the United States International Trade Commission, incorporated in the Harmonized Tariff Schedule to provide more visibility into the imports of specific rare earth materials and permanent magnets. Since the publication of the Subcommittee’s criticality report, the Subcommittee has narrowed its charter to focus on minerals. In narrowing the charter, the Subcommittee is missing the opportunity to fulfill in its early warning screening methodology one of the policies of the 1980 Act, which applies to all critical materials. By taking the steps necessary to broaden future applications of the early warning screening methodology to include potentially critical materials beyond minerals, such as a plan or strategy for prioritizing the materials, the Subcommittee could better work with member agencies to address existing data limitations and broaden the scope of the early warning system to better achieve the policy outlined in the 1980 Act.
Experts we surveyed noted the importance of domestic production in addressing the supply of critical materials but also indicated that the federal government’s approach to date has included a limited focus on domestic production. The 1980 Act calls for the coordination of federal agencies to facilitate the availability and development of domestic resources to meet critical materials needs, and the assessment of federal policies that affect all stages of the materials cycle, including mining. A majority of experts who responded to the survey, 24 out of 36, indicated that the federal government should play a major role in encouraging the domestic production of critical materials, and 19 out of 36 indicated that federal efforts to encourage domestic production of critical materials to address supply issues are somewhat or very inadequate. As shown in table 3, experts we surveyed identified several factors with the potential to limit domestic production of critical materials. As described above, one aspect of domestic production of critical materials is the review and approval by federal agencies of mining projects on federal land. As shown in table 3, most experts we surveyed indicated that the length of the permitting process for new mines has the potential to limit the domestic production of critical materials. In January 2016, we reported on the permitting process involving BLM and the Forest Service and found, among other things, that agency officials felt that there was limited or ineffective interagency coordination and collaboration during the mine plan review process. We reported that officials in nine BLM and two Forest Service locations said that coordination and collaboration had been limited in both quantity and quality and had resulted in adding from 2 months to 3 years to the review process. 
As part of the review process, BLM and the Forest Service need to coordinate and collaborate with other federal agencies, state agencies, and Native American tribes on issues such as assessing impacts to water quality, wildlife, and cultural resources. However, BLM and Forest Service officials said such coordination can be difficult. For example, Forest Service officials said that a federal agency delayed the review process for one mine plan because the agency did not provide the necessary data in a timely fashion. As a result, Forest Service officials had to redo some analyses needed for the mine plan’s environmental impact statement, which added time to the review process. To help address this key challenge, some officials said that they have developed memorandums of agreement with state agencies, are holding regular meetings with these state agencies and the mine operators, and are communicating and consulting with tribes. As noted above, other countries’ or regions’ approaches to addressing critical materials supply issues have incorporated taking steps to facilitate domestic production of materials. For example, Canada’s MPMO Initiative was established to improve the accountability, transparency, timeliness, and predictability of Canada’s federal regulatory review process for major resource projects, and internal and external stakeholders believe that federal project review timelines have improved because of better coordination. The Canadian government has also taken steps to provide tax incentives for domestic production. Similarly, as described above, fostering communication with stakeholders related to new mining projects has been a facet of the EU approach to facilitating domestic production of critical materials. Although its charter calls for the Subcommittee to review and analyze global and domestic policies that affect the supply of critical and strategic minerals, the Subcommittee has addressed these issues only to a limited degree.
As noted above, the Subcommittee has done some work on trade issues related to critical materials through its work with USTR and other member agencies to address China’s export restrictions through dispute settlement at the WTO. However, the Subcommittee has not focused on increasing the supply of critical materials through facilitating domestic production. Until recently, the Forest Service was not an active participant on the Subcommittee, and according to BLM officials we interviewed, BLM has not participated on the Subcommittee. There are a number of global and domestic policies related to the supply of critical materials that the Subcommittee could review and analyze, including examining the approaches taken by other countries or regions to facilitate domestic production by, for example, improving coordination and streamlining the mine-permitting process. By examining the approaches taken by other countries or regions to facilitate domestic production of critical materials, the Subcommittee could determine if there are any lessons learned that could be applied to the United States. The availability of certain materials is essential for national security, economic well-being, and industrial production. Recognizing this need, Congress passed the 1980 Act to promote an adequate and stable supply of needed materials. Although this legislation has been in place for over 30 years, a number of the key federal activities we examined that are focused on addressing critical materials supply risk did not begin until after 2010, when China tightened its export restrictions on rare earth materials. U.S. government agencies are now carrying out some of the policies outlined in the 1980 Act, and experts have identified strengths in agencies’ efforts to assess critical materials supply risks and mitigate those risks through research activities.
Although the Subcommittee is to facilitate a strong, coordinated effort across its member agencies on critical minerals activities, its efforts to coordinate agencies’ activities are not consistent with selected key practices for enhancing and sustaining interagency collaboration. The Subcommittee has not taken steps to actively engage all member agencies in its efforts and has not clearly defined the roles and responsibilities of member agencies. By ensuring that all relevant member agencies are engaged in its efforts and have agreed on and clearly defined roles and responsibilities, the Subcommittee will have more reasonable assurance that it can effectively marshal the potential contributions of all member agencies to take full advantage of their expertise and resources in addressing critical materials supply issues. The Subcommittee also has not developed joint strategies to articulate common outcomes and identify contributing agencies’ efforts, or developed a mechanism to monitor, evaluate, and periodically report on the progress of these efforts. Developing joint strategies to articulate common outcomes and identify member agencies’ efforts could help the Subcommittee better coordinate agencies’ critical materials activities to ensure that they are mutually reinforcing. In addition, developing a mechanism to monitor, evaluate, and periodically report on the progress of member agencies’ efforts could help the Subcommittee fulfill a policy of the 1980 Act, which calls for the establishment of a mechanism for the evaluation of federal materials programs. The U.S. government is also missing other key opportunities to address critical materials supply risks because of its limited engagement with industry to continually identify and assess materials needs, a focus on a subset of critical materials, and a limited focus on developing domestic production capabilities. 
The Subcommittee has taken an important step toward developing an early warning system for critical minerals as called for by its charter, but it excludes nonmineral materials that may be important to industry and federal research. Currently, the Subcommittee does not have a documented plan or strategy to prioritize potentially critical materials beyond minerals and determine how to obtain data on such materials that would allow them to be included in the early warning screening in the future. By taking the steps necessary to broaden its future applications of the early warning screening methodology to include potentially critical materials beyond minerals, including a plan or strategy for prioritizing such materials, the Subcommittee could better work with member agencies to address existing data limitations and broaden the scope of the early warning system to better achieve the policy outlined in the 1980 Act. The Subcommittee is also not taking steps to identify opportunities to facilitate domestic production as a way to mitigate critical materials supply risks. As provided for by the Subcommittee’s charter, examining how other countries or regions, such as Canada and the EU, are improving coordination and streamlining the mine-permitting process could help the Subcommittee determine if there are any lessons learned that could be applied to the United States. Finally, Commerce has not engaged with industry stakeholders to solicit information across a range of industrial sectors. While Commerce has coordinated with industry at certain times or on specific issues, these coordination efforts have been ad hoc and have generally focused on the defense industrial base. As a result, Commerce may not have the comprehensive, current information it needs to fulfill its responsibilities under the 1980 Act to continually identify and assess cases of materials needs. 
To enhance the ability of the Executive Office of the President to coordinate federal agencies to carry out the national materials policy outlined in the 1980 Act, we recommend that the Director of the Office of Science and Technology Policy, working with the National Science and Technology Council’s Subcommittee on Critical and Strategic Mineral Supply Chains and agency leadership, as appropriate, take the following five actions:

To strengthen the federal approach to addressing critical materials supply issues through enhanced interagency collaboration, the Subcommittee should agree on and clearly define the roles and responsibilities of member agencies and take steps to actively engage all relevant federal agencies in the Subcommittee’s efforts; develop joint strategies that articulate common outcomes and identify contributing agencies’ efforts; and develop a mechanism to monitor, evaluate, and periodically report on the progress of member agencies’ efforts.

To broaden future applications of the early warning screening methodology, the Subcommittee should take the steps necessary to include potentially critical materials beyond minerals, such as developing a plan or strategy for prioritizing additional materials for which actions are needed to address data limitations.

To enhance the federal government’s ability to facilitate domestic production of critical materials, the Subcommittee should examine approaches other countries or regions are taking to see if there are any lessons learned that can be applied to the United States.

To fulfill the role assigned to it under the 1980 Act, the Secretary of Commerce should engage with industry stakeholders and continually identify and assess critical materials needs across a broad range of industrial sectors.

We provided a draft of this report to USDA, Commerce, DOD, Education, DOE, HHS, DHS, Interior, Justice, Labor, State, Treasury, EPA, NASA, NSF, CEQ, NEC, NSC, OMB, OSTP, and USTR for review and comment.
We received the following comments:

OSTP provided written comments, which are reproduced in appendix III. Of the five recommendations directed to it, OSTP neither agreed nor disagreed with four of the recommendations, but expressed some concerns with three of the recommendations as described below, and concurred with the fifth recommendation.

Commerce provided written comments, which are reproduced in appendix IV. Specifically, in its comments Commerce stated it agreed with the recommendations and that it will consult with other agencies in order to develop an action plan with details on implementation.

USDA provided written comments, which are reproduced in appendix V. USDA stated that it generally agreed with the draft report, stating that it supported the Subcommittee and that it agreed that there are limitations, including limited engagement with industry and limited focus on domestic production. USDA did not comment on the recommendations.

In an email from an audit analyst in its Office of the Chief Financial Officer, DOE provided general comments, which we discuss below.

Commerce, DOD, DOE, Interior, NASA, and USTR provided technical comments, which we incorporated as appropriate. Officials from Education, HHS, DHS, Justice, Labor, State, Treasury, EPA, NSF, and NSC stated via email that they had no comments on the report. An NEC official stated that NEC had no comments on the report. CEQ and OMB did not provide comments.

Additionally, we provided a draft of this report to Natural Resources Canada, the European Commission Directorate-General for Internal Market, Industry, Entrepreneurship and Small and Medium-sized Enterprises, METI, and Japan’s Ministry of Education, Culture, Sports, Science and Technology for their views and comments on the completeness and accuracy of GAO’s information on their programs and practices. Officials from the EU and Canada provided technical comments via email, which we incorporated as appropriate.
Officials from Japan stated in emails that they had no comments on the report. In its written comments, OSTP neither agreed nor disagreed with our first three recommendations that the Subcommittee should (1) agree on and clearly define the roles and responsibilities of member agencies and take steps to actively engage all relevant federal agencies in the Subcommittee’s efforts; (2) develop joint strategies that articulate common outcomes and identify contributing agencies’ efforts; and (3) develop a mechanism to monitor, evaluate, and periodically report on the progress of member agencies’ efforts. In its comments, OSTP stated that the roles and responsibilities of member agencies are defined by their existing missions and that further specification of roles and responsibilities within the context of the Subcommittee is either redundant, if aligned with agency missions, or may raise confusion if not. However, as we state in the report, there are a number of Subcommittee member agencies that do not have clear roles within the Subcommittee’s efforts and have had limited or no involvement in the Subcommittee’s work on critical materials. By clearly defining roles and responsibilities within the context of the Subcommittee, member agencies could organize their joint and individual efforts, and facilitate decision making. Moreover, more actively engaging all member agencies by clearly defining roles and responsibilities and identifying contributing activities could help the Subcommittee more fully incorporate the range of policies of the 1980 Act into the federal approach for addressing critical materials supply issues. OSTP further stated that agencies have in place mechanisms to monitor, evaluate, and report on the progress of their efforts in support of their missions, and the Subcommittee reports directly to its parent committee and in other ways (public documents) on its collective actions. 
However, as we state in the report, the Subcommittee has not reported periodically on the progress of all of its efforts to address critical materials supply issues, and there is no member agency that is responsible for reporting on all of the Subcommittee’s efforts. We continue to believe that OSTP should fully implement our three recommendations to enhance interagency collaboration on critical materials supply issues. OSTP neither agreed nor disagreed with our fourth recommendation that the Subcommittee should take the steps necessary to include potentially critical materials beyond minerals, such as developing a plan or strategy for prioritizing additional materials. In its comments, OSTP stated that plans to address additional materials are under discussion as the Subcommittee evaluates feedback on the published assessment methodology and that other approaches may be considered to add potentially critical materials that cannot be screened using the methodology because of data limitations or other factors. DOE, which co-chairs the Subcommittee along with OSTP and Interior, stated in its general comments that the report would more accurately present the issue of the federal focus on only a subset of materials by including a more comprehensive discussion of the data availability issues that limit the Subcommittee’s early warning screening methodology. We acknowledge that existing data limitations present a challenge for the Subcommittee. As we state in the report, our recommendation that the Subcommittee take steps such as developing a plan or strategy for prioritizing additional materials to be included in the early warning screening methodology is intended to help the Subcommittee better work with member agencies to address existing data limitations. In its general comments, DOE also suggested that we clarify that the plan or strategy for prioritizing additional materials should focus on those that require augmented data collection activities.
As we state in our report, addressing data limitations is a key factor in the Subcommittee’s ability to apply its early warning screening methodology to additional materials. Therefore, we clarified in the recommendation the role of data limitations. Without taking steps to include potentially critical materials beyond minerals, such as developing a plan for prioritizing additional materials, the Subcommittee may miss opportunities to obtain the data it needs, such as by proposing a revision to the North American Industry Classification System. We continue to believe that the Subcommittee should implement our recommendation by taking such steps. In written comments, OSTP stated it concurred with our fifth recommendation that the Subcommittee should examine approaches other countries or regions are taking to see if there are any lessons learned that can be applied to the United States. OSTP stated that it looks forward to exploring the experiences and approaches of other countries and regions. In its general comments, DOE expressed concerns that our evaluation of the federal government’s approach to addressing critical materials supply issues is based largely on a nongeneralizable sample of critical materials experts and that it is not clear in the report that we considered how the composition of survey respondents could present significant bias in the results. DOE stated that a majority of the survey respondents fall under the ‘Industry/Association’ category and that representatives from industry could be expected to say that there is more the government can do to support domestic industries. As we state in the report, our survey results are not generalizable and only represent the views of those who responded. However, both the total number of experts from industry sampled (24) and the number of experts from industry that responded in the second round of the survey (19) represent about half of the experts we included in the survey. 
The remaining experts were from government (6 sampled and 5 who responded in the second round of the survey) and academia or nonprofit organizations (16 sampled and 12 who responded in the second round of the survey). DOE’s statement assumes that all of the industry respondents think government should do more—which may or may not be true. There could also be bias if the respondents’ views differed from the views of nonrespondents. However, we do not know whether this is the case, and this type of bias can occur in any survey. Our findings are supported not only by our survey results, but also through our review of relevant documents and interviews with officials from government and industry in the United States and in other countries and regions. Therefore, we did not make any changes to the report as a result of DOE’s comment. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Director of the Office of Science and Technology Policy, the Secretary of Commerce, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. This report (1) describes federal agencies’ activities related to the supply of critical materials; (2) describes the approaches of selected countries and regions to address critical materials supply issues; and (3) evaluates the federal government’s approach, such as coordination of activities, to addressing critical materials supply issues.
For our first and third objectives, we reviewed laws, regulations, and guidance related to the supply of critical materials, such as the National Materials and Minerals Policy, Research and Development Act of 1980 (1980 Act) and a law related to the Department of Defense’s stockpiling of materials. We also collected and reviewed prior GAO reports on issues related to the federal effort to address the supply of critical materials, as well as congressional hearings, industry reports, and academic studies on the U.S. supply of critical materials. We also reviewed the charters of the Subcommittee on Critical and Strategic Mineral Supply Chains (Subcommittee), which is under the National Science and Technology Council’s Committee on Environment, Natural Resources, and Sustainability. To describe federal agencies’ activities related to the supply of critical materials, we contacted the 20 federal agencies and Executive Office of the President organizations that are designated as members of the Subcommittee. These agencies and organizations are the Departments of Agriculture, Commerce, Defense, Education, Energy, Homeland Security, the Interior, Justice, Labor, State, and the Treasury, as well as the Environmental Protection Agency, National Aeronautics and Space Administration, National Science Foundation, Council on Environmental Quality, National Economic Council, National Security Council, Office of Management and Budget, Office of Science and Technology Policy (OSTP), and Office of the U.S. Trade Representative. We interviewed and obtained reports and analyses from officials from those agencies as appropriate. We also interviewed officials from a federal agency that was not designated as a member of the Subcommittee—the Department of Health and Human Services’ National Institutes of Health—about its role in activities related to the supply of critical materials, as it relies on rare gases, for example, for research and medical applications.
To describe the approaches of selected countries and regions to address critical materials supply issues, we interviewed officials across government, academia, and industry from the European Union (EU), Japan, and Canada and obtained relevant documentation from officials. We also met onsite with EU officials in Brussels, Belgium, and Japanese officials in Tokyo. While in the EU, we also met with German officials in Berlin and Bonn, to understand the impact of multinational planning on national laws and policies related to critical materials. We selected these countries and regions based on the efforts they have under way to address critical materials supply risks and our ability to collect information about those efforts. To evaluate the federal government’s approach to addressing critical materials supply issues, we developed and disseminated a two-stage, web-based survey to a nongeneralizable sample of 46 critical materials experts. The sample was selected with the goal of obtaining a balance of perspectives across the industrial, academic, and government sectors on the critical materials supply chain. We also identified subject matter areas relevant to the critical materials supply chain. Based on background research and interviews with experts, we identified the following relevant subject matter areas:

- Materials science—basic or applied research or experience related to materials that could be used in the production of advanced technologies, including methods for recycling materials.
- Industrial ecology—research or experience related to the flow of energy and materials through an industrial system, including, but not limited to, resource constraints and life cycle analysis.
- Mining and raw materials—research or experience related to extraction or processing of minerals or materials, including exploration and permitting for such activities.
- Markets and trade policy—research or experience related to commodity markets, supply and demand for materials, or trade policies that affect the flow of materials.
- Supply chain management—research or experience related to the management of an industry or government supply chain or the collection, dissemination, or analysis of information on material supply chains and the risk associated with them.
- Workforce issues—research or experience related to the adequacy of technically trained personnel in the fields of mining or material science.

To identify experts from the industrial, academic, and government sectors who are knowledgeable about matters involving the critical materials supply chain, we used resources that included professional and government publications; participant lists of knowledge-sharing events, such as workshops, symposia, and conferences; recent congressional testimonies related to critical materials issues; members of a federal advisory committee; and outreach to research and academic programs, trade associations, companies, and other industry groups. In addition, we identified a number of potential experts based on interviews with federal agencies and other knowledgeable stakeholders conducted as part of the audit work for the engagement. We identified and reached out to more than 100 experts based on their expertise across the range of subject matter areas and sectors. Out of those experts we contacted, 49 expressed an interest in participating in the survey. In total, 47 experts (of 49 considered) were selected for participation in the survey. After the first round of the survey was sent out to all participants, one participant declined to participate and was removed from the list of participants, resulting in 46 experts. The makeup of the 46 experts consisted of 6 in government, 16 in academia and nonprofit organizations, and 24 in industry and trade group associations. Table 4 shows the breakdown of experts’ expertise across sectors.
The first round of the survey was conducted from September 22, 2015, to October 30, 2015, and asked the experts to respond to five open-ended questions about the primary strengths and weaknesses of the U.S. federal government’s policies and activities related to critical materials and options for improving these efforts. Out of the 46 experts sampled for the survey, 33 responded to the survey, resulting in a response rate of 72 percent. The 33 who responded were experts who successfully submitted their conflict-of-interest forms and completed the electronic survey. After the experts completed the open-ended questions, we analyzed the responses to identify key issues raised by the experts. Based on those key issues raised by the experts, we identified topic categories related to the supply of critical materials. We then developed closed-ended questions for the second round of the survey in which we asked each expert to rate the ideas and other information that came from the first round of the survey. Two of the 33 respondents from the first round of the survey did not participate in the second round of the survey. The second round of the survey was conducted from February 3, 2016, to March 4, 2016, and contained 30 questions. The first 29 questions were closed-ended questions, with many containing follow-up questions to further explore experts’ responses. The last question was open-ended to capture experts’ views on issues that had not been previously covered in the survey. Out of the 46 experts sampled for the second round of the survey, 36 responded, resulting in a response rate of 78 percent. We conducted follow-up phone calls around mid-February 2016 to participants who had not completed the survey, had not turned in their conflict-of-interest forms, or both. The 36 who responded to the survey were those experts who successfully submitted their conflict-of-interest forms and completed the electronic survey. 
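The response rates reported above follow from simple arithmetic; a minimal sketch, using only the respondent counts stated in the text:

```python
# Response rates for the two survey rounds, rounded to whole percentages.
SAMPLED = 46  # experts invited in each round of the survey

def response_rate(responded: int, sampled: int = SAMPLED) -> int:
    """Return the response rate as a rounded whole-number percentage."""
    return round(100 * responded / sampled)

round_one = response_rate(33)  # 33 of 46 experts responded (first round)
round_two = response_rate(36)  # 36 of 46 experts responded (second round)
print(round_one, round_two)  # 72 78
```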
Five of the 36 respondents who participated in the second round of the survey had not participated in the first round of the survey. Because we selected a nongeneralizable sample of experts, their views are not generalizable to other experts in these subject matter areas, but their views can provide illustrative examples of critical materials supply issues. The quality of survey data can be affected by nonsampling error. Nonsampling error includes variations in how respondents interpret questions, respondents’ willingness to offer accurate responses, and data collection and processing errors. In developing the web survey, we pretested draft versions of the instrument in December 2015 with 5 experts who later participated in the second round of the survey. On the basis of the pretests, we made revisions to the survey. We included steps in developing the survey and collecting, editing, and analyzing survey data to minimize such nonsampling error. Furthermore, using a web-based survey helped remove errors in our data collection effort. Allowing experts to enter their responses directly into an electronic instrument automatically created a record for each expert in a data file and eliminated the errors associated with a manual data entry process. To determine the extent of collaboration among agencies that are members of the Subcommittee, we collected documents and interviewed officials in OSTP and other agencies that are Subcommittee members to obtain additional information on the federal approach, including efforts to coordinate federal activities. To evaluate the federal approach, including coordination, we compared federal efforts against the national policy outlined in the 1980 Act and key practices for interagency collaboration. We reviewed the eight key practices for interagency collaboration based on which of the practices were most relevant to the operations of the Subcommittee.
The key practices for interagency collaboration are among the options for reducing or better managing fragmentation to improve the efficiency of federal programs and more effectively achieve their objectives. We identified all but one of the key practices (reinforce individual accountability for collaborative efforts through performance management systems) as relevant to the Subcommittee’s functions. We conducted this performance audit from March 2015 to September 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 5 provides information on the results of selected criticality assessments that have been conducted on a variety of materials that are important to U.S. economic and national security interests. In addition to the contact named above, Chris Murray (Assistant Director), Darnita Akers, Martin Campbell, Antoinette Capaccio, Mackenzie Doss, Lorraine Ettaro, Cheryl Harris, Holly Hobbs, Jill Lacey, Dan C. Royer, Tind Shepper Ryen, Jerome Sandau, Alexandra Stone, Vasiliki Theodoropoulos, and Reed Van Beveren made key contributions to this report.

Certain metals, minerals, and other “critical” raw materials play an important role in the production of advanced technologies across a range of industrial sectors and defense applications. Recently, concentration of the supply of some critical materials under foreign control has renewed questions about the U.S. government's and industry's ability to address potential supply disruptions. GAO was asked to examine U.S. efforts to identify and strategically plan for critical materials supply issues.
Among other objectives, this report (1) describes federal agencies' activities related to the supply of critical materials and (2) evaluates the federal government's approach to addressing critical materials supply issues. GAO reviewed relevant laws, agency documents, and academic studies; interviewed federal officials; and conducted a two-stage web-based survey of a nongeneralizable sample of critical materials experts selected to cover a range of subject matter areas. Federal agencies are primarily focused on two areas of activity related to critical materials supply—assessing risk and supporting research. For example, the Department of Energy (DOE) has conducted two criticality assessments on materials important to clean energy applications and manages the Critical Materials Institute—a 5-year, $120 million investment aimed at mitigating risks by diversifying supply, providing alternatives to existing materials, and improving recycling and reuse. In addition, agencies conduct a range of other critical materials related activities, including stockpiling or producing materials, and reviewing and approving resource extraction projects, among other efforts. The federal approach to addressing critical materials supply has areas of strength but is not consistent with selected key practices for interagency collaboration and faces other limitations, as shown below. According to its charter, the Subcommittee on Critical and Strategic Mineral Supply Chains (Subcommittee)—co-chaired by the Office of Science and Technology Policy (OSTP), DOE, and the Department of the Interior—is to facilitate a strong, coordinated effort across its member agencies on critical materials activities. However, the Subcommittee's efforts have not been consistent with selected key practices for interagency collaboration, including agreeing on roles and responsibilities; establishing mutually reinforcing or joint strategies; and developing mechanisms to monitor, evaluate, and report on results. 
For example, some member agencies do not have a clear role in the Subcommittee's efforts and have had limited or no involvement in its work. By taking steps to actively engage all member agencies in its efforts and clearly define roles and responsibilities, the Subcommittee would have more reasonable assurance that it can effectively marshal the potential contributions of all member agencies to help identify and mitigate critical materials supply risks. Other limitations to the federal approach to addressing critical materials supply include limited engagement with industry and a limited focus on domestic production. For example, the Department of Commerce (Commerce) is required by law to identify and assess cases of materials needs. However, Commerce does not solicit information from stakeholders across a range of industrial sectors. As a result, Commerce may not have comprehensive, current information across a range of industrial sectors to help it identify and assess materials needs. GAO is making six recommendations, including that OSTP take steps to improve interagency collaboration by, for example, defining Subcommittee member roles and responsibilities and that Commerce engage with stakeholders to continually identify and assess critical materials needs across industrial sectors. Commerce agreed. OSTP agreed with one and neither agreed nor disagreed with the other four recommendations but discussed how roles and responsibilities are defined, among other things. GAO continues to believe these steps are needed, as discussed in the report.
From the start of its development in 2003, FCS was at the center of the Army’s efforts to modernize into a lighter, more agile, and more capable combat force. The FCS concept involved replacing existing combat systems with a family of manned and unmanned vehicles and systems linked by an advanced information network. The Army anticipated that the FCS systems, along with the soldier and enabling complementary systems, would work together in a system of systems wherein the whole provided greater capability than the sum of the individual parts. The Army expected to develop this equipment in 10 years, procure it over 13 years, and field it to 15 FCS-unique brigades—about one-third of the active force at that time. The Army had also planned to spin out selected FCS technologies and systems to current Army forces throughout the system development and demonstration phase. In 2006, the Army established the Army Evaluation Task Force to use, evaluate, and train with these FCS spinout capabilities. The Army used a management approach for FCS that centered on a lead system integrator (LSI) to provide significant management services to help the Army define and develop FCS and reach across traditional Army mission areas. Army officials have stated that they did not believe the Army had the resources or flexibility to use its traditional acquisition process to field a program as complex as FCS under the aggressive timeline established by the then-Army Chief of Staff. As we have reported in the past, the FCS program was immature and unable to meet DOD’s own standards for technology and design from the start (see the list of related GAO products at the end of this report). Although adjustments were made, such as adding time and reducing requirements, vehicle weights and software code grew, key network systems were delayed, and technologies took longer to mature than anticipated (see fig. 1).
By 2009, after an investment of 6 years and an estimated $18 billion, the viability of the FCS concept was still unknown. As such, in our 2009 report, we concluded that the maturity of the development efforts was insufficient and the program could not be developed and produced within existing resources. In April 2009, the Secretary of Defense proposed a significant restructuring of the FCS program in order to address more near-term combat needs and incorporate a role for the Mine Resistant Ambush Protected (MRAP) vehicles being used in today’s conflicts. The Secretary noted significant concerns that the FCS program’s vehicle designs—where greater information awareness was expected to compensate for less armor and result in lower weight and higher fuel efficiency—did not adequately reflect the lessons of counterinsurgency and close-quarters combat operations in Iraq and Afghanistan. As such, the Secretary recommended accelerating fielding of ready-to-go systems and capabilities to all combat brigades; canceling the vehicle component of the FCS program, reevaluating the requirements, technology, and approach, and relaunching the Army’s vehicle modernization program; and addressing fee structure and other concerns with current FCS contracting arrangements. Subsequently, in June 2009, DOD issued an acquisition decision memorandum that canceled the FCS acquisition program, terminated manned ground vehicle development efforts, and laid out plans for follow-on Army brigade combat team modernization efforts. DOD directed the Army to transition to an Army-wide modernization plan consisting of a number of integrated acquisition programs, including one to develop ground combat vehicles (GCV). The memorandum also instructed the Army to transition away from an LSI management approach. In recent months, the Army has been defining its ground force modernization efforts per the Secretary’s decisions and the June 2009 acquisition decision memorandum.
Although the details are not yet complete, the Army took several actions through the end of calendar year 2009. It stopped all development work on the FCS manned ground vehicles—including the non-line-of-sight cannon—in the summer of 2009 and recently terminated development of the Class IV unmanned aerial vehicle and the countermine and transport variants of the Multifunction Utility/Logistics and Equipment unmanned ground vehicle. For the time being, the Army is continuing selected development work under the existing FCS development contract, primarily residual FCS system and network development. In October 2009, the Army negotiated a modification to the existing contract that clarified the development work needed for the brigade combat team modernization efforts. The Army is implementing DOD direction and redefining its overall modernization strategy as a result of the Secretary of Defense’s decision to significantly restructure the FCS program. It established a key task force to refine its future force concepts and modernization plans and has moved away from FCS as the centerpiece of ground force modernization. Additionally, the Army is transitioning from the FCS long-term acquisition orientation to a shorter-term approach that biennially develops and fields new increments of capability within capability packages. It now has one approved acquisition program that will produce and field the initial increment of the FCS spinout equipment, as well as preliminary plans for two other acquisition programs that will define and develop follow-on increments and develop a new GCV. The Army also plans to continue network development for all the combat brigades and to develop and field upgrades to other existing equipment. 
In response to the Secretary’s recommendation to restructure FCS, the Army established a Training and Doctrine Command-based task force to reexamine current force capability gaps, make resource-informed recommendations on how to fill them, and provide elements of planning for future force modernization. Through that process, the task force found that some assumptions were no longer valid, such as reliance on networking for survivability, which essentially meant trading heavy armor for better information or situational awareness. The Army acknowledges that this is not the best trade for the way it now has to fight. As a result of the task force’s analysis, the Army is implementing a new operational concept and brigade combat team modernization strategy that will update all Army combat brigades for full-spectrum operations. That is a significant contrast to the FCS approach that would have created 15 new FCS-unique brigades. The task force developed a concept of continual modernization of ready-to-go capabilities through biennial deliveries of capability packages. In addition to select FCS systems, these capability packages could also include materiel and nonmateriel items developed outside the FCS program. The concept also included plans to reallocate assets, divest older technologies, and incrementally modernize the Army’s information network. The Army expects to field the first capability package in fiscal years 2011 through 2012, followed by additional capability packages delivered in 2-year increments. The Army plans to align capability package fielding with an established equipment reset and training process in order to provide these systems to deploying units. A network effort, to include more advanced hardware, software, and radios, will be included in each capability package.
The Army’s near-term plan is to define, develop, produce, and field capabilities to some of the Army’s combat brigades, and the long-term plan is to field those capabilities to all remaining combat brigades. The Army has specified that the new capabilities will be tested and their performance validated before they are deployed in the capability packages. In recent months, the Army has been defining its ground force modernization efforts per the Secretary’s decisions and the specifics of the June 2009 acquisition decision memorandum. The Army has one approved acquisition program as well as preliminary plans for starting two other acquisition programs, integrating network capabilities across the Army’s combat brigade structure, and upgrading and fielding existing ground force capabilities. The first program, Increment 1, is a continuation of previous FCS-related efforts to spin out emerging capabilities and technologies to current forces. Of the Army’s post-FCS modernization initiatives, Increment 1, which includes such FCS remnants as unmanned air and ground systems, unattended ground sensors, the non-line-of-sight launch system, and a network integration kit, is the furthest along in the acquisition development cycle (see fig. 2). The network integration kit includes, among other things, the integrated computer system, an initial version of the system-of-systems common operating environment, early models of the Joint Tactical Radio System and waveforms, and a range extension relay. In December 2009, the Army requested and DOD approved, with a number of restrictions, the low-rate initial production of Increment 1 systems that are expected to be fielded in the fiscal year 2011-12 capability package, which will be discussed in more detail later in this report. The Army will be continuing Increment 1 development over the next 2 years while low-rate initial production proceeds.
The projected development and production cost to equip nine combat brigades with the Increment 1 network and systems, supported by an independent cost estimate, would be about $3.5 billion. The Increment 1 systems are expected to provide the following capabilities:

- Enhanced situational awareness and force protection through reduced exposure to hazards during soldier-intensive and/or high-risk functions.
- Enhanced communications and situational awareness through radios with multiple software waveforms, connections to unattended sensors, and links to existing networking capabilities.
- Force protection in an urban setting through a leave-behind, network-enabled reporting system of movement and/or activity in cleared areas.
- Independent, soldier-level aerial reconnaissance, surveillance, and target acquisition capability.
- The ability to precisely attack armored, lightly armored, and stationary or moving targets at extended ranges despite weather/environmental conditions and/or presence of countermeasures.
- Enhanced situational awareness, force protection, and early warnings in a tactical setting through cross-cues to sensors and weapon systems.

For the second acquisition program, Increment 2 of brigade combat team modernization, the Army has preliminary plans to mature Increment 1 capabilities—potentially demonstrating full FCS threshold requirements—as well as contribute to further developments of the system-of-systems common operating environment and battle command software, and demonstrate and field additional capabilities. For example, these may include the Armed Robotic Vehicle Assault (Light)—an unmanned ground vehicle configured for security and assault support missions—and the Common Controller, which will provide the dismounted soldier a handheld device capable of controlling, connecting, and providing data transfer from unmanned vehicles and ground sensors.
According to Army officials, they are currently working to define the content, cost, and schedule for Increment 2 and are planning a Defense Acquisition Board review in the third quarter of fiscal year 2010 and a low-rate initial production decision for fiscal year 2013. The third acquisition program would develop a new GCV. The Army reviewed current fighting vehicles across the force structure to determine whether to sustain, improve, divest, or pursue new vehicles based on operational value, capability shortfalls, and resource availability. Per DOD direction, the Army also collaborated with the Marine Corps to identify capability gaps related to fighting vehicles. For development of a new GCV, the Army’s preliminary plans indicate the use of an open architecture design to enable incremental improvements in modular armor; network architecture; and subcomponent size, weight, power, and cooling. Preliminary funding and schedule information for the proposed program was recently provided to the defense committees by way of the Fiscal Year 2011 President’s Budget Request. According to a DOD official, in February 2010, DOD made a materiel development decision for the Army’s proposed GCV effort. As a result of that decision, DOD authorized the Army’s release of a request for proposals for GCV technology development. Over the next several months, the Army will be conducting an analysis of alternatives to assess potential materiel solutions for the GCV. The Army expects to follow the analysis with a Milestone A decision review on whether to begin technology development in September 2010. After Milestone A, Army officials are proposing the use of competitive prototyping with multiple contractors—the number of which will depend on available funding—during the technology development phase, which will feature the use of mature technologies and the fabrication and testing of prototype subsystems.
A preliminary design review would be used to validate contractor readiness to enter detailed design at Milestone B in fiscal year 2013. The Army’s preliminary plans indicate that the first production vehicles could be delivered in late fiscal year 2017, about 7 years from Milestone A. The Army is planning to incrementally develop and field an information network to all of its combat brigades in a decentralized fashion—that is, not as a separate acquisition program. The Army has defined a preliminary network strategy and is in the process of defining what the end state of the network will need to be, as well as how it may build up that network over an undefined period of time. In the near term, the Army is working to establish a common network foundation to build on and to define a common network architecture based on what is currently available and expected to become available in the near future. Current communications, command and control, and networking acquisition programs will continue and will be expected to build on to the current network foundation and architecture over time. Networking capabilities will be expected to meet specific standards and interface requirements. According to Army officials, the ongoing incremental network and software development activities and requirements will be dispersed to these acquisition programs, where they will be considered for further development and possible fielding. The only original FCS network development activities that the Army plans to continue under the FCS development contract are those supporting the network integration kit for Increment 1 and whatever additional networking capabilities may be needed for Increment 2. DOD expects the Army to present network development plans in March 2010. The Army has also outlined plans to upgrade existing ground force capabilities and integrate the MRAP vehicle into its forces. The plans include upgrades to the Abrams tank fleet, Paladin cannon, and Stryker vehicles. 
They also include a role for MRAP vehicles within the brigade combat team structure, in accordance with the Secretary of Defense’s April 2009 statement that the Army’s vehicle program developed 9 years ago did not include a role for the $25 billion investment in MRAP being used to “good effect” in today’s conflicts. Using the recommendations from the task force, the Army drafted plans to fully integrate MRAP vehicles into 20 combat brigades. The challenge facing both DOD and the Army is to set these ground force modernization efforts on the best footing possible by buying the right capabilities at the best value. In many ways, DOD and the Army have set modernization efforts on a positive course by following direction from DOD leadership, and they have an opportunity to reduce risks by adhering to the body of acquisition legislation and policy reforms—which incorporate knowledge-based best practices we identified in our previous work—that have been introduced since FCS started in 2003. The new legislation and policy reforms emphasize a knowledge-based acquisition approach, a cumulative process in which certain knowledge is acquired by key decision points before proceeding. In essence, knowledge supplants risk over time. Additionally, DOD and the Army can further reduce risks by considering lessons learned from problems that emerged during the FCS development effort. Initial indications are that the Army is moving in that direction. These lessons span knowledge-based acquisition practices, incremental development, affordability, contract management, and oversight. However, in the first major acquisition decision for the Army’s post-FCS initiatives, DOD and the Army—because they want to support the warfighter quickly—are proceeding with low-rate initial production of one brigade set of Increment 1 systems despite having acknowledged that the systems are immature, are unreliable, and cannot perform as required. 
DOD’s body of acquisition policy, which includes reforms introduced since FCS started development in 2003, incorporates nearly all of the knowledge-based practices we identified in our previous work (see table 1). For example, it includes controls to ensure that programs have demonstrated a certain level of technology maturity, design stability, and production maturity before proceeding into the next phase of the acquisition process. As such, if the Army proceeds with preliminary plans for new acquisition programs, then adherence to the acquisition direction in each of its new acquisition efforts provides an opportunity to improve the odds for successful outcomes, reduce risks for follow-on Army ground force modernization efforts, and deliver needed equipment more quickly and at lower costs. Conversely, acquisition efforts that proceed with less technology, design, and manufacturing knowledge than best practices suggest face a higher risk of cost increases and schedule delays. As shown above, the cumulative building of knowledge consists of information that should be gathered at three critical points over the course of a program: Knowledge point 1 (at the program launch or Milestone B decision): Establishing a business case that balances requirements with resources. At this point, a match must be made between the customer’s needs and the developer’s available resources—technology, engineering, knowledge, time, and funding. A high level of technology maturity, demonstrated via a prototype in its intended environment, indicates whether resources and requirements match. Also, the developer completes a preliminary design of the product that shows that the design is feasible and that requirements are predictable and doable. 
FCS did not satisfy this criterion when it began in 2003, and by 2009, 6 years into development, the Army still had not satisfied it: emerging designs did not meet requirements, critical technologies were immature, and cost estimates were not realistic.

Knowledge point 2 (at the critical design review between design integration and demonstration): Gaining design knowledge and reducing integration risk. At this point, the product design is stable because it has been demonstrated to meet the customer’s requirements as well as cost, schedule, and reliability targets. The best practice is to achieve design stability at the system-level critical design review, usually held midway through system development. Completion of at least 90 percent of engineering drawings at this point provides tangible evidence that the product’s design is stable, and a prototype demonstration shows that the design is capable of meeting performance requirements.

Knowledge point 3 (at production commitment or the Milestone C decision): Achieving predictable production. This point is achieved when it has been demonstrated that the developer can manufacture the product within cost, schedule, and quality targets. The best practice is to ensure that all critical manufacturing processes are in statistical control—that is, they are repeatable, sustainable, and capable of consistently producing parts within the product’s quality tolerances and standards—at the start of production.

In recent years, a number of specific changes have been made to DOD acquisition policies, and further policy changes are being incorporated as a result of the Weapon Systems Acquisition Reform Act of 2009. These changes, if implemented properly, allow programs to achieve knowledge at the right times by ensuring that any critical technologies to be included in the weapon system are mature and ready for integration.
The changes provide support to program managers to keep requirements reasonable and to keep changes to a minimum. The prototyping provisions included in these changes call for developmental prototypes beginning very early in a program. With FCS, the Army did not follow knowledge-based acquisition practices, but reforms introduced since FCS’s start in 2003 incorporate nearly all of the knowledge-based practices we identified in our previous work. For example, the reforms include controls to ensure that programs have demonstrated a certain level of technology maturity, design stability, and production maturity before they proceed to the next phase of the acquisition process. If the Army adheres to these acquisition practices, it has an opportunity to increase the likelihood of successful outcomes for follow-on Army ground force modernization efforts. Conversely, acquisition efforts that deviate from knowledge-based practices face a higher risk of cost increases and schedule delays. Table 2 lists some of those acquisition reforms and their potential impact. There are initial indications that DOD and the Army are moving forward to implement the acquisition policy reforms as they proceed with ground force modernization, consistent with the Secretary of Defense’s statement about the ground vehicle modernization program—to “get the acquisition right, even at the cost of delay.” In addition, DOD anticipates that the Ground Combat Vehicle (GCV) program will comply with DOD acquisition policy by using competitive system or subsystem prototypes. According to a DOD official, DOD made a materiel development decision for the GCV in February 2010, and the Army is proposing to conduct a preliminary design review on the GCV before Milestone B. Additionally, a configuration steering board is planned in 2010 to address the reliability and military utility of infantry brigade systems. The Army has the opportunity to reduce risks by incorporating lessons learned from the FCS development effort.
These key lessons span several areas: knowledge-based acquisition principles, incremental development, affordability, contract management, oversight, and incentive fee structure. Considering these lessons gives the Army an opportunity to reduce risks by applying what worked well on the FCS program while avoiding the acquisition pitfalls that plagued it. Lesson: The Army did not position the FCS program for success because it did not establish a knowledge-based acquisition approach—a strategy consistent with DOD policy and best acquisition practices—to develop FCS. The Army started the FCS program in 2003 before defining what the systems would be required to do and how they would interact. It moved ahead without determining whether the FCS concept could be developed in accordance with a sound business case. Specifically, at the FCS program’s start, the Army had not established firm system-level requirements, mature technologies, a realistic cost estimate, or an acquisition strategy wherein knowledge drives schedule. By 2009, the Army still had not shown that emerging FCS system designs could meet requirements, that critical technologies were at minimally acceptable maturity levels, or that the acquisition strategy was executable within estimated resources. Actions being taken: In the first major acquisition decision for the Army’s post-FCS initiatives, DOD and the Army—because they want to support the warfighter quickly—are proceeding with low-rate initial production of Increment 1 systems despite having acknowledged that the systems are immature, are unreliable, and cannot perform as required.
In December 2009, the Under Secretary of Defense for Acquisition, Technology and Logistics approved low-rate initial production of Increment 1 equipment for one infantry brigade but noted that there is an aggressive risk reduction plan to grow and demonstrate the network maturity and reliability to support continued Increment 1 production and fielding. In the associated acquisition decision memorandum, the Under Secretary acknowledged the risks of pursuing Increment 1 production, including early network immaturity; lack of a clear operational perspective of the early network’s value; and large reliability shortfalls of the network, systems, and sensors. The Under Secretary also said that he was aware of the importance of fielding systems to the current warfighter and that the flexibility to deploy components as available would allow DOD to “best support” the Secretary of Defense’s direction to “win the wars we are in.” Because of that, the Under Secretary specified that a number of actions be taken over the next year or more and directed the Army to work toward having all components for the program fielded as soon as possible and to deploy components of the program as they are ready. However, the Under Secretary did not specify the necessary improvements that the Army needed to make or that those improvements are a prerequisite for approving additional production lots of Increment 1. The approval for low-rate initial production is at variance with DOD policy and Army expectations. DOD’s current acquisition policy requires that systems be demonstrated in their intended environments using the selected production-representative articles before the production decision occurs. However, the testing that formed the basis for the Increment 1 production decision included surrogates and non-production-representative systems, including the communications radios.
As we have previously noted, testing with surrogates and non-production-representative systems is problematic because it does not conclusively show how well the systems can address current force capability gaps. Furthermore, Increment 1 systems—which are slated for a fiscal year 2011-12 fielding—do not yet meet the Army’s expectations that new capabilities would be tested and their performance validated before they are deployed in a capability package. As noted in 2009 test results, system performance and reliability during testing were marginal at best. For example, the demonstrated reliability of the Class I unmanned aerial vehicle was about 5 hours between failures, compared to a requirement of 23 hours between failures. The Army asserts that Increment 1 systems’ maturity will improve rapidly but admits that it will be a “steep climb” and not a low-risk effort. While the Under Secretary took current warfighter needs into account in his decision to approve Increment 1 low-rate initial production, it is questionable whether the equipment can meet one of the main principles underpinning knowledge-based acquisition—that warfighter needs can best be met with the chosen concept. Test reports from late 2009 showed conclusively that the systems had limited performance and that this reduced the test unit’s ability to assess and refine the tactics, techniques, and procedures associated with employment of the equipment. The Director, Operational Test and Evaluation, recently reported that none of the Increment 1 systems has demonstrated an adequate level of performance to be fielded to units and employed in combat. Specifically, the report noted that reliability is poor and falls short of the level expected of an acquisition system at this stage of development. Shortfalls in meeting reliability requirements may adversely affect Increment 1’s overall operational effectiveness and suitability and may increase life cycle costs.
In addition, in its 2009 assessment of the increment’s limited user test—the last test before the production decision was made—the Army’s Test and Evaluation Command indicated that the Increment 1 systems would be challenged to meet warfighter needs. The command concluded that, with the exception of the non-line-of-sight launch system, which had not yet undergone flight testing, all the systems were operationally effective and survivable, but with limitations, because they were immature and had entered the test as pre-production representative systems, pre-engineering design models, or both. Additionally, the command noted that these same systems were not operationally suitable because they did not meet required reliability expectations. Lesson: The FCS concept depended heavily on the network to link people, platforms, weapons, and sensors together within the 15 FCS brigades and to help eliminate the “fog of war.” There were significant risks associated with network development, including risks related to performance and scalability, architecture, and the fact that network performance would be tested only after the designs for the vehicles carrying the network equipment were already set. The network never matured to show that it could deliver expected performance and reliability. Six years into network development efforts, it was still not clear whether the network could be developed, built, and demonstrated as planned. Actions being taken: Under the Army’s revised concept, rather than build a new network all at once and field it only to the unique FCS brigades, the Army intends to develop and field an information network across the Army, building on current communications networks. Full details of the Army’s network strategy are still being developed, including the desired end state, the incremental steps to that end state, and its costs.
However, the Army anticipates that the new network will be bounded by available funding as well as technology readiness. It also expects, as with capability packages, to field network capability sets on a biennial basis. Network capability sets feature multiple pieces of the network that have been integrated and demonstrated. Near-term goals for the network include starting to connect the individual soldiers, expanding situational awareness to the company level, and expanding interoperability. As the Army envisions the network strategy, it will leverage network investments in systems already procured for ongoing wars, build upon a core set of network-related foundation products, and develop network packages that can be customized in support of current and future force platforms. These packages will include software, computers, and radios. Lesson: The affordability of FCS was always in doubt and, in the end, was a contributing factor to the decision to cancel the program. Ultimately, FCS affordability depended on two factors: the actual cost of the program and the availability of funds. The Army could not provide confident cost estimates for the actual costs of FCS because of the low levels of knowledge within the program. Instead, it indicated a willingness to accept the program’s high risks and make trade-offs in requirements for FCS and other programs to accommodate FCS’s growing costs. When the Army’s predicted costs for FCS rose from $92 billion in 2003 to $159 billion by 2009, the Army indicated that it would defer upgrades to current force systems, such as the Abrams Tank and Bradley Fighting Vehicle, to free up funds for FCS. In the end, the competition for funds—within the Army, among Army programs and other DOD programs, and among DOD programs and other federal government needs—was a factor in the decision to end the FCS program. 
According to a September 2009 letter from the Under Secretary of Defense for Acquisition, Technology and Logistics, the FCS acquisition could not be developed and produced within existing resources. Additionally, the Under Secretary noted that, based on an evaluation of the overall priorities for Army modernization, developing and procuring FCS brigades was not fiscally viable given DOD priorities. Action being taken: The Army has not yet fully defined the major predictors—content, pace, and costs—of the long-term affordability of its ground force modernization efforts. It has indicated that work is ongoing to develop priorities and resource plans for fiscal years 2011 through 2015, including fielding capability packages, incrementally improving the network, and establishing a new GCV program. The Army has also indicated that funding will drive capability trades. For example, the content and quantity of capability packages could be decreased or increased depending on available funding. Additionally, the Director of Cost Assessment and Program Evaluation prepared an independent cost assessment for Increment 1. This independent estimate was very close to the Army’s cost position for Increment 1 development and production. In its fiscal year 2011 budget request, the Army asked the Congress to approve funding for further Increment 1 development and production, Increment 2 development, GCV development, and some network development. As we have noted, at this time, detailed plans for these efforts are still being developed and may not be available until at least later in fiscal year 2010 as those plans are solidified and approved. Lesson: In 2003, the Army contracted with a lead systems integrator (LSI) for FCS because of the program’s ambitious goals and the Army’s belief that it did not have the capacity to manage the program. The Army did not have the expertise to develop the FCS information network or enough people to support the program had it been organized into separate program offices.
Through its relationship with the LSI, the Army believed that it found a partner that could help to define and develop FCS and reach across the Army’s organizations. In our 2007 report, we pointed out that the close partnerlike relationship between the Army and the LSI posed risks to the Army’s effective management and oversight of the FCS program. As a result, the June 2009 acquisition decision memorandum that outlined plans to cancel the FCS program also articulated a desire to move away from industry-led integration activities. Action being taken: While Army officials have acknowledged the Under Secretary’s direction to transition away from reliance on the LSI and affirmed their desire to comply with that direction, the transition will not happen right away. The Army is beginning a deliberate process to transition system engineering and integration activities from the LSI to the government. For example, Army officials stated that the Army will be contracting with the LSI for the procurement of the first three brigade sets of Increment 1 equipment. When these systems move into full-rate production, the Army may be in a better position to contract directly with the original equipment manufacturers and without the assistance of an LSI. According to the Army, the development of Increment 2 may be jointly managed by the LSI and the original equipment manufacturers. Likewise, the first lot of Increment 2 production may be jointly managed by the LSI and the original equipment manufacturers; the other production lots may be managed directly by the original equipment manufacturers. In September 2009, the Army established the Program Executive Office for Integration to oversee coordination of the three separate but integrated programs and the network development. Roles and responsibilities have not yet been fully defined. 
According to Army officials, the office will be modeling the various brigade architectures and infrastructures to better understand how they currently function and to facilitate adding capabilities to the brigades. They also expect the office to work with the individual acquisition programs to ensure that the programs are properly integrated with other elements of each capability package and with equipment already fielded in the various brigades. As the integration issues are addressed, the individual acquisition programs will be responsible for execution. Additionally, the office will perform system engineering and integration for the capability packages using in-house capabilities, supplemented by federally funded research and development centers or contractors. The Army is also establishing an organization above the program executive office level to integrate ongoing network acquisition efforts in order to better capture new network technologies, expand technologies in the field so that they work better together, and provide better networking capability to more units. One way the Army will do this is by establishing network standards and interface requirements. Lesson: DOD largely accepted the FCS program and its changes as defined by the Army, even though the program varied widely from the best practices embodied in DOD’s own acquisition policies. Until late in the FCS program, DOD passed on opportunities to hold the program accountable to more knowledge-based acquisition principles. Despite the fact that the program did not meet the requisite criteria for starting an acquisition program, DOD approved its entrance into system development and demonstration in 2003. DOD later reevaluated the decision and decided to hold a follow-on review with a list of action items the program had to complete in order to continue. However, this review never occurred, and the FCS program continued as originally planned.
In addition, DOD allowed the Army to use its own cost estimates rather than independent—and often higher—cost estimates when submitting annual budget requests. Action being taken: DOD appears to be more resolute in some of its oversight responsibilities for the emerging post-FCS efforts. For instance, at an October 2009 DOD review, the Army offered preliminary plans for post-FCS efforts. While DOD agreed to schedule an Increment 1 production decision and a GCV materiel development decision, DOD also noted that additional clarity was needed for the development and procurement of follow-on items beyond Increment 1, as well as for the transition of integration activities from the current FCS contractors to the Army. DOD noted in its decision memorandum that it requires these plans before it will approve any acquisition strategy for modernization activities other than Increment 1 and GCV development. Additionally, while DOD did not hold the Army accountable to knowledge-based principles when it approved Increment 1 for low-rate production, DOD did limit low-rate initial procurement quantities to one brigade’s worth of equipment. DOD also required the Army to prepare for two additional reviews in 2010—one to provide a status report on non-line-of-sight launch system testing and a report detailing the network maturity plan for Increment 1, and another to examine the results of additional testing performed on Increment 1 systems. Additionally, DOD required the Army to fund Increment 1 acquisition efforts to the cost estimate prepared by the Director, Cost Assessment and Program Evaluation. Lesson: In the near future, the Army will likely be awarding development contracts for the emerging post-FCS programs. As we noted in 2005, DOD award fees do not always link to acquisition outcomes. Additionally, prior defense acquisition contracts, including the FCS contract, have not always complied with preferred DOD guidance for structuring incentive and award fees.
In 2007, we reported that the Army’s contract with the FCS LSI contained fee provisions that did not tie fees to demonstrated performance and that rewarded the LSI too early in the development process. Specifically, we reported that the Army would be paying 80 percent of the total incentive fee before the LSI conducted the critical design review. We viewed this arrangement as risky because most of a program’s cost growth occurs after the critical design review. Action being taken: In April 2009, when the Secretary of Defense announced his plans to significantly change the FCS program, he noted that he was troubled by the terms of the contract, particularly its very unattractive fee structure, which gives the government little leverage to promote cost efficiency. Previously, in an April 2008 memorandum, DOD stated that a more typical fee arrangement would be significantly less than what the Army featured in the FCS contract and that fees should be based on demonstrated performance to the government. In September 2009, DOD issued another memorandum to the military services, instructing acquisition officials to (1) be more consistent in applying the department’s guidance, (2) be more judicious in their reviews of fees to ensure that they are tied to demonstrated performance, and (3) collect additional fee data. These two memorandums indicate that the department appears focused on achieving more disciplined award and incentive fee practices. In addition, DOD officials have recently stated that they expect future Army contracts for ground force modernization to incorporate a fee structure with a “more classic and reasonable” form, in accordance with the Secretary’s direction and the September 2009 memorandum. In October 2009, the Army negotiated a contract modification for additional development of Increment 1 systems. The Army will soon be contracting for the procurement of those systems. Later, the Army will be awarding contracts for GCV development.
At this point, it is unclear how and to what extent the Army will be applying the new fee guidance. Army and DOD officials made a very difficult decision when they canceled what was the centerpiece of Army modernization—the FCS program. As they transition away from the FCS concept, both the Army and DOD have an opportunity to improve the likely outcomes for the Army’s ground force modernization initiatives by adhering closely to recently enacted acquisition reforms and by seeking to avoid the numerous acquisition pitfalls that plagued FCS. As DOD and the Army proceed, they should keep in mind the Secretary of Defense’s admonition about the new ground vehicle modernization program: “get the acquisition right, even at the cost of delay.” Based on the preliminary plans, we see a number of good features. For example, we applaud the Army’s decision to pursue an incremental acquisition approach for its post-FCS efforts. However, it is vitally important that each of those incremental efforts adheres to knowledge-based acquisition principles and strikes a balance between what is needed, how fast it can be fielded, and how much it will cost. Moreover, the acquisition community needs to be held accountable for expected results, and DOD and the Army must not be willing to accept whatever results are delivered regardless of military utility. We are concerned that in their desire for speedy delivery of emerging equipment to our warfighters in the field, DOD and the Army did not strike the right balance in prematurely approving low-rate initial production of Increment 1 of brigade combat team modernization. Although the Army will argue that it needs to field these capabilities as soon as possible, none of these systems has been designated as urgent and it is not helpful to provide early capabilities to the warfighter if those capabilities are not technically mature and reliable. 
If the Army moves forward too fast with immature Increment 1 designs, this could cause additional delays as the Army and its contractors concurrently address technology, design, and production issues. Production and fielding is not the appropriate phase of acquisition to be working on such basic design issues. While the Army has not yet finalized its plans for its post-FCS initiatives, one thing is certain—these programs are likely to require significant financial investments. In its fiscal year 2011 budget request, the Army has asked the Congress to approve funding for Increment 1 development and production, Increment 2 development, GCV development, and some network development. At this time, detailed plans for these efforts are still being developed and were not yet available as of early January 2010. This means that the Congress will have limited information on which to base its funding decisions. The Army’s fiscal year 2011 budget request does not provide sufficient details to allay all concerns. DOD and the Army need to clearly define and communicate plans in order to ensure broad agreement among all stakeholders, including the Congress. It appears that the Army’s plans may not be solidified until well beyond the point when the congressional defense committees will have marked up the fiscal 2011 defense authorization bill. In order to ensure that only technically mature and reliable capabilities are fielded to the warfighters, we recommend that the Secretary of Defense mandate that the Army correct the identified maturity and reliability issues with the Increment 1 network and systems prior to approving any additional lots of the Increment 1 network and systems for production. Specifically, the Army should ensure that the network and the individual systems have been independently assessed as fully mature, meet reliability goals, and have been demonstrated to perform as expected using production-representative prototypes. 
We also recommend that the Secretary of the Army not field the Increment 1 network or any of the Increment 1 systems until the identified maturity and reliability issues have been corrected. In order to enhance congressional visibility into the Army’s plans in this area, we also recommend that the Secretary of Defense direct the Army to submit a comprehensive report to the Congress before the end of fiscal year 2010 on its ground force modernization investment, contracting, and management strategies. DOD concurred with, and provided comments to, all our recommendations. Regarding our recommendation to correct Increment 1 maturity and reliability issues prior to approving additional production, DOD stated that the need to correct those issues has been communicated to the Army. DOD also asserts that all Increment 1 systems will be tested in their production configuration, and performance will be independently assessed against capability requirements prior to approving production of any additional lots of Increment 1 systems. DOD’s comments concisely summarize the instructions that the Under Secretary of Defense for Acquisition, Technology and Logistics included in his December 2009 acquisition decision memorandum that approved low-rate initial production for the first brigade’s worth of infantry brigade combat team systems. The memorandum includes a number of sensible provisions, such as (1) an aggressive risk reduction plan to grow and demonstrate network maturity and reliability, (2) monthly reporting requirements for network and system reliability improvements, (3) a comprehensive precision mix analysis to demonstrate the cost-effectiveness of the non-line-of-sight launch system, (4) the use of a configuration steering board to examine reliability and military utility, and (5) a plan to compare the effectiveness of operational units with and without the Increment 1 systems and network. 
However, neither the memorandum nor DOD’s comments to this report indicated the minimally acceptable standards that must be met in order to proceed with additional procurement lots of the Increment 1 systems and network. The Army has many Increment 1 development and testing activities planned for the coming months and we intend to monitor their progress closely. Regarding our recommendation that the Army not field the Increment 1 systems and network until maturity and reliability issues had been corrected, DOD stated that Increment 1 systems would not be fielded until performance is sufficient to satisfy the warfighter’s capability requirements. We believe it will be vitally important that (1) Increment 1 systems and network clearly demonstrate their ability to fully satisfy the needs of the warfighter and (2) DOD and the Army not be willing to accept whatever acquisition results are delivered regardless of their military utility. Again, we intend to follow the Army and DOD’s activities and actions in the coming months. Regarding our recommendation to submit a comprehensive report to the Congress on Army ground force modernization investment, contracting, and management strategies, DOD stated that the Army will provide its annual Army Modernization Strategy no later than the third quarter of fiscal year 2010. According to DOD, this strategy document, in conjunction with the 2010 Army Weapons Systems Handbook and the 2011 budget request material, provides the Army’s investment, contracting, and management strategies for ground force modernization. In making this recommendation, we felt that the Army had made significant changes in its investment, contracting, and management strategies as it moved away from the FCS program. We felt that a comprehensive report on its new strategies for ground force modernization would be enlightening to the Congress. 
In the coming months, we will review the materials promised by the Army to determine if they provide adequate knowledge to the Congress. DOD’s comments are reprinted in appendix II. We received other technical comments from DOD, which have been addressed in the report. We are sending copies of this report to the Secretary of Defense; the Secretary of the Army; and the Director, Office of Management and Budget. This report also is available at no charge on the GAO Web site at http://www.gao.gov. Please contact me on (202) 512-4841 or sullivanm@gao.gov if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The major contributors are listed in appendix III. To outline the Army’s preliminary post–Future Combat System (FCS) plans, we obtained and reviewed proposed plans for the Army’s new modernization approach. We compared those plans against the FCS operational concept and acquisition approach. We interviewed officials responsible for carrying out the FCS cancellation, including officials from the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics and the Program Executive Office for Integration (formerly the FCS Program Office). We also met with officials responsible for reexamining current-force capability gaps and formulating the new operational concept, including officials from the Army’s Training and Doctrine Command, the Future Force Integration Directorate, and the Army Evaluation Task Force. 
To identify the challenges and opportunities the Department of Defense (DOD) and the Army will need to address as they proceed with Army ground force modernization efforts, we reviewed relevant Army and DOD documents, including the Secretary of Defense’s April 6, 2009, announcement on restructuring FCS and the June 23, 2009, acquisition decision memorandum that implemented the Secretary’s proposed restructure; the Army Capstone Concept; the Director, Operational Test and Evaluation’s Fiscal Year 2009 Annual Report; the Comprehensive Lessons Learned White Paper; and the Army Modernization White Paper. Additionally, we reviewed recent acquisition reforms, including DOD Instruction 5000.02, Operation of the Defense Acquisition System; the Weapon Systems Acquisition Reform Act of 2009 (Public Law No. 111-23); and other legislative initiatives. In developing lessons learned from the FCS program, we reviewed current Army ground force modernization plans and assessed them against FCS approaches and outcomes, best practices, and the latest acquisition policies and reforms. In our assessment of the Army’s modernization approach, we used the knowledge-based acquisition practices drawn from our body of past work as well as DOD’s acquisition policy and the experiences of other programs. We interviewed officials responsible for providing independent assessments of technologies, testing, networking, and systems engineering. This included officials from the Office of the Secretary of Defense’s Cost Assessment and Program Evaluation Office; Office of the Director, Defense Research and Engineering; Office of the Assistant Secretary of Defense (Networks and Information Integration); and Office of the Director, Operational Test and Evaluation. We discussed the issues presented in this report with officials from the Army and the Secretary of Defense and made changes as appropriate. 
We conducted this performance audit from March 2009 to March 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, the following staff members made key contributions to the report: William R. Graveline, Assistant Director; William C. Allbritton; Noah B. Bleicher; Helena Brink; Tana M. Davis; Marcus C. Ferguson; and Robert S. Swierczek. Defense Acquisitions: Issues to be Considered for Army’s Modernization of Combat Systems. GAO-09-793T. Washington, D.C.: June 16, 2009. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-09-326SP. Washington, D.C.: March 30, 2009. Defense Acquisitions: Key Considerations for Planning Future Army Combat Systems. GAO-09-410T. Washington, D.C.: March 26, 2009. Defense Acquisitions: Decisions Needed to Shape Army’s Combat Systems for the Future. GAO-09-288. Washington, D.C.: March 12, 2009. Defense Acquisitions: 2009 Review of Future Combat System Is Critical to Program’s Direction. GAO-08-638T. Washington, D.C.: April 10, 2008. Defense Acquisitions: 2009 Is a Critical Juncture for the Army’s Future Combat System. GAO-08-408. Washington, D.C.: March 7, 2008. Defense Acquisitions: Future Combat System Risks Underscore the Importance of Oversight. GAO-07-672T. Washington, D.C.: March 27, 2007. Defense Acquisitions: Key Decisions to Be Made on Future Combat System. GAO-07-376. Washington, D.C.: March 15, 2007. Defense Acquisitions: Improved Business Case Key for Future Combat System’s Success. GAO-06-564T. Washington, D.C.: April 4, 2006. Defense Acquisitions: Improved Business Case Is Needed for Future Combat System’s Successful Outcome. GAO-06-367.
Washington, D.C.: March 14, 2006. Defense Acquisitions: Future Combat Systems Challenges and Prospects for Success. GAO-05-428T. Washington, D.C.: March 16, 2005. Defense Acquisitions: The Army’s Future Combat Systems’ Features, Risks, and Alternatives. GAO-04-635T. Washington, D.C.: April 1, 2004. Issues Facing the Army’s Future Combat Systems Program. GAO-03-1010R. Washington, D.C.: August 13, 2003. Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001.

Since 2003, the Future Combat System (FCS) program has been the centerpiece of the Army’s efforts to transition to a lighter, more agile, and more capable combat force. In 2009, however, concerns over the program’s performance led to the Secretary of Defense’s decision to significantly restructure and ultimately cancel the program. As a result, the Army has outlined a new approach to ground force modernization. This report (1) outlines the Army’s preliminary post-FCS plans and (2) identifies the challenges and opportunities the Department of Defense (DOD) and the Army must address as they proceed with Army ground force modernization efforts. To meet these objectives, GAO reviewed key documents, performed analyses, visited test facilities where the Army evaluated FCS equipment, and interviewed DOD and Army officials. With DOD having canceled the FCS acquisition program, the Army has moved away from FCS as the centerpiece of ground force modernization. Although the Army is still refining its post-FCS plans, it has already taken a number of actions to comply with DOD directions and define new modernization initiatives. For instance, the Army has terminated FCS vehicle development and is preparing for a new ground combat vehicle program. Also, Army officials convened a special task force to refine future force concepts and formulate an expedited fielding strategy. The Army also announced preliminary plans for new acquisition programs. With ground force modernization efforts at an early stage, DOD and the Army face the challenge of setting the emerging modernization efforts on the best possible footing by buying the right capabilities at the best value. They have an opportunity to position these efforts for success by effectively implementing the enhanced body of acquisition legislation and DOD policy reforms, as well as lessons learned from the FCS program, including lessons that underscore the use of knowledge-based acquisition and disciplined contracting strategies. Preliminary plans suggest that the Army is moving in that direction, including expectations to begin future developments with mature technologies and to use competitive prototyping. However, DOD recently approved, with a number of restrictions, low-rate initial production of the first increment of FCS spinout equipment, such as new radios and sensors, despite having acknowledged that the systems were immature, unreliable, and not performing as required. The restrictions include required DOD reviews of Army progress toward improving the systems’ maturity and reliability. The spinout equipment was being developed within the FCS program, and the decision to approve production reflects DOD and Army emphasis on providing new capabilities quickly to combat units. However, this decision runs the risk of delivering unacceptable equipment to the warfighter and trading away acquisition principles whose validity has been so recently underscored. Detailed plans for most of the Army’s new modernization efforts are still being developed and may not be available until at least later in fiscal year 2010. That will be a limiting factor as the Congress considers the Army’s fiscal year 2011 budget request for these modernization efforts.
The Police Corps program was established to provide federal financial assistance to (1) prospective police officers who participate in the program (i.e., in the form of college scholarships for baccalaureate or graduate studies); (2) the entity selected and approved to provide basic training to the state’s Police Corps participants, either prior to or following completion of a bachelor’s degree; (3) the state and local law enforcement agencies that ultimately hire these individuals (i.e., they receive $10,000 per year during each of a participant’s first 4 years on the force); and (4) the dependent children of fallen officers. As of September 30, 1999, Police Corps programs were approved for 24 states and the Virgin Islands. Congress first appropriated funding of $10 million for the Police Corps program in fiscal year 1996. Police Corps funding increased to $20 million in fiscal year 1997 and to $30 million each in fiscal years 1998 and 1999. For fiscal year 2000, the appropriation directed that $30 million of available unobligated balances from COPS program funds were to be used for the Police Corps. As currently operated under OJP, the Office of the Police Corps provides funds to participating states, which in turn provide the funds to individual program participants, colleges, approved law enforcement training providers, and law enforcement agencies. In states that wish to participate, the governors must designate a lead agency that will submit a state plan to the Office of the Police Corps and administer the program in the state. Each year the Police Corps invites submission of state Police Corps program plans through a letter to the governor of each state and the appropriate official in the other eligible jurisdictions. States already approved for the program are to submit plans that describe their status, progress, and need for additional participants. Other states apply to participate by submitting a comprehensive state plan.
The state plan must provide that the designated state lead agency will work in cooperation with local law enforcement liaisons, representatives of police labor and management organizations, and other appropriate agencies to develop and implement interagency agreements. The state also must agree to advertise the availability of Police Corps funds and make special efforts to seek applicants among members of all racial, ethnic, and gender groups but may not deviate from competitive standards for selection. DOJ originally placed the Office of the Police Corps under COPS, which DOJ established in 1994 pursuant to statute with the goal of funding 100,000 new community police officers by the end of the year 2000. However, because the COPS program is legislatively scheduled to end at the close of fiscal year 2000, DOJ asked for and received approval in the Conference report accompanying the Fiscal Year 1999 Omnibus Consolidated and Emergency Appropriation Bill to transfer the Office of the Police Corps to OJP. This transfer took place on December 10, 1998. To determine the extent of, and causes for, delays in Police Corps implementation, we (1) assessed COPS’ and OJP’s respective financial and management practices, (2) reviewed COPS’ and OJP’s respective legal interpretations of Police Corps’ statutory authority, (3) analyzed COPS and OJP reimbursement payment data, (4) reviewed program files at COPS and OJP, and (5) interviewed current and former Police Corps program officials as well as DOJ officials responsible for oversight. To obtain certain states’ perspective on implementation delays, we visited four states— Florida, Maryland, Oregon, and Texas. We selected Maryland and Oregon because they started their programs during the first year that the Police Corps program was funded and received the most funding. We selected Florida because a state university had been delegated state lead agency responsibility. 
We selected Texas because it experienced difficulty becoming fully operational due to issues concerning training program requirements. In each state we interviewed program officials representing the lead agency and the training program; in Maryland and Oregon, we interviewed representatives of law enforcement agencies that had employed Police Corps graduates. To broaden our understanding of the implementation of the Police Corps program, we also conducted structured telephone interviews with Police Corps lead agency representatives of the other 19 states participating in the program at that time (see app. III for the questions we asked). We asked officials to rate possible program problem areas on a four-point scale ranging from “not a reason” to a “very major reason.” Additionally, we conducted telephone interviews with cognizant officials in the governors’ offices of 12 nonparticipating states (see app. IV for the questions we asked). We used the same four-point scale that was used with the participating states to determine whether the possible problems affected program participation. We included an open-ended question that gave respondents the opportunity to identify problem areas not included among those we listed. To obtain information on the provision of Police Corps basic law enforcement training, determine how much assistance was being provided to law enforcement agencies and what it was being used for, and determine how many scholarships had been awarded to dependent children of fallen officers, we reviewed files and interviewed officials at COPS and OJP. In addition, we reviewed Police Corps program legislation, program guidance, correspondence files, participating states’ files, and available studies of the Police Corps program. We also interviewed current and former COPS officials and current officials at OJP. We performed our work between March 1999 and January 2000 in accordance with generally accepted government auditing standards. 
During its first 4 years of operation, the Police Corps program failed to fill most of the available participant slots. As shown in table 1, as of September 30, 1999, 430 (or approximately 43 percent) of the approved 1,007 participant positions had been filled. According to federal and state officials, two of the factors that contributed to this slow start were that (1) COPS dedicated insufficient staff to implement the program, which resulted in delays in providing program guidance and backlogs in processing program applications and reimbursements and (2) the Police Corps statute did not provide funding for states’ administrative or recruiting costs, which slowed program growth in some states and led several states to decline to participate in the program. In addition, statutory language led COPS to operate the Police Corps as a direct reimbursement program, which in turn made it difficult for Congress to determine the status of program funds. The Police Corps statute was enacted in 1994, and funds were specifically appropriated for the program in fiscal year 1996, when Congress provided $10 million. COPS hired a program director for the Police Corps in September 1996. In January 1997, COPS hired a program specialist to (1) receive and process student applications and service agreements; (2) develop standardized forms for student participant applications and requests for reimbursement from participants and institutions; (3) receive, record, and review requests for reimbursements; and (4) respond to inquiries from states and the general public. State officials said that the lack of COPS office staff led to delays in providing formal program guidance. According to state officials, COPS did not provide program guidance for recruiting and selecting participants until May 1997. Several state officials said that their attempts to get directions from COPS in writing or by telephone had failed. 
Similarly, state officials complained about backlogs in reviewing funding applications, conducting state budget reviews, and processing requests for reimbursable payments. For example, officials in all four states that we visited said that their programs experienced significant delays in receiving reimbursement from COPS for training expenditures. In an effort to secure more staffing for the program, in March 1998, COPS notified the House Committee on Appropriations of a proposed reprogramming action that would allow for an increase in staffing for the Office of the Police Corps. In April 1998, the Committee approved this proposed action. As a result, COPS dedicated three full-time positions to the Police Corps to supplement the two COPS staff who were already performing Police Corps duties on a full-time basis. COPS officials said that the reason they did not devote more staff to the Police Corps program was that they interpreted their legal authority as not authorizing the payment of federal program administration costs with Police Corps funds. The Department of Justice has not provided us with the legal analysis underlying this position. As a result of this interpretation, COPS determined that it had to pay such costs from COPS operating funds. COPS officials said that, while they made an effort to provide staffing to the Police Corps program, their options were limited because the entire COPS Office was understaffed. COPS officials acknowledged that Police Corps program delays resulted in part from this understaffing. The Police Corps statute states, “There is established in the Department of Justice, under the general authority of the Attorney General, an Office of the Police Corps and Law Enforcement Education,” and the statute lays out the responsibilities of the Office. Although the Police Corps statute is silent regarding the payment of federal administrative costs, we believe that options were available to the COPS office for the payment of these costs.
In our view, the COPS office could have charged the Police Corps line-item appropriations for fiscal years 1996 through 1998 to pay for these costs. A primary statute dealing with the use of appropriated funds, 31 U.S.C. 1301(a), provides that “Appropriations shall be applied only to the objects for which the appropriations were made except as otherwise provided by law.” However, it does not require, nor would it be reasonably possible, that every item of expenditure be specified in an appropriation act. The spending agency has reasonable discretion in determining how to carry out the objects of the appropriation. This concept is known as the “necessary expense” doctrine. For an expenditure to be justified under the necessary expense doctrine, three tests must be met: (1) the expenditure must bear a logical relationship to the appropriation to be charged; (2) the expenditure must not be prohibited by law; and (3) the expenditure cannot be authorized if it is otherwise provided for under a more specific appropriation or statutory funding mechanism. Under the first test, the key determination is the extent to which the proposed expenditure will contribute to accomplishing the purposes of the appropriation the agency wishes to charge. Clearly, any administrative costs incurred by COPS in implementing the Police Corps program should contribute to accomplishing the purposes of that program. Concerning the second and third tests, the payment of federal administrative costs is not prohibited by law, nor were federal administrative costs otherwise provided for under a more specific appropriation. Thus, the COPS office could have paid these administrative costs from the Police Corps’ line item appropriations. According to COPS officials, the Police Corps statute did not allow for federal reimbursement of states’ administrative or recruiting costs. State officials told us that this lack of reimbursement was the primary reason for slow progress in their programs. 
Under the Police Corps, a state’s designated lead agency is responsible for administering the Police Corps program in that state. The lead agency is obligated to provide overall program management, which includes developing and monitoring the state plan as well as the outreach, selection, and placement of the participants. COPS and state officials said that the lack of administrative and recruiting funds made it difficult for the state lead agencies to meet all of the statutory and policy requirements of the program. Officials in a few states said they discussed withdrawing from the Police Corps program for this reason; however, they did not do so. Officials in the four states that we visited told us that the lack of administrative and recruiting funds slowed the progress of their programs. For example, officials in both Maryland and Oregon indicated that the most serious problem they faced was lack of money for recruitment. Officials in 15 of the 19 participating states in our telephone survey said that the lack of administrative cost reimbursement was a major or very major reason for slow progress in their programs. Also, officials in 8 of the 12 nonparticipating states we contacted said that the lack of administrative cost reimbursement was a primary reason for their decision not to participate in the program. COPS officials said that they were concerned about this shortcoming of the program and made attempts to address it. In each of its three annual reports to the President, the Attorney General, and Congress, the Office of the Police Corps pointed out the need for state recruiting funds for the Police Corps program. In its April 1998 annual report, for example, the Office of the Police Corps at COPS noted that many participating states were working with limited resources and that some states were hesitant to apply to the Police Corps program because of the lack of reimbursement for expenses associated with outreach and selection. 
Similarly, in its April 1999 annual report, the Office of the Police Corps at OJP noted that it would be helpful if states could submit budgets and receive payment for expenses directly associated with recruitment and selection. Under COPS, the Police Corps program was operated as a direct reimbursement program. That is, program payments were made directly to an educational institution, in-service Police Corps officer, approved training provider, or participating law enforcement agency, rather than first being obligated to a state agency for subsequent disbursement. According to DOJ’s Associate Attorney General, COPS based its decision to operate the Police Corps program on a direct reimbursement basis on the language of the statute itself. For example, the statute required the Director to “make scholarship payments . . . directly to the institution of higher education that the student is attending.” According to COPS officials, this resulted in large amounts of unobligated funds being carried over from one fiscal year to the next in each of the first 3 years of the program. As of March 1998, when the appropriations hearings for COPS’ fiscal year 1999 budget request were held, $57.8 million of the $60 million appropriated for the first 3 years remained unobligated. Under direct reimbursement, funds were not considered obligated when state plans were approved. Instead, COPS considered funds obligated only when an individual check had been sent to a participating college or university, in-service Police Corps officer, approved training provider, or police department. Thus, while COPS had committed $57.4 million of the $60 million appropriated, the funds had not been obligated and so appeared to be still available during the annual appropriations process. This caused concern during the appropriations hearings on COPS’ budget for the Police Corps.
Upon assuming responsibility for the Police Corps program in December 1998, OJP increased the Police Corps staff from five to seven positions to allow faster processing of applications and quicker responses to participants’ questions. In addition, OJP used its authority under 42 U.S.C. 3788(b) to begin establishing interagency agreements with the lead agencies in participating states. These agreements have enabled OJP to (1) obligate Police Corps funds at a much faster rate than COPS and (2) begin to make a formula-based payment that may be used to, among other things, help defray states’ administrative and recruiting costs. While these agreements should help, OJP continues to hold to the view, expressed in its 1999 annual report to Congress, that it would be helpful if states could submit budgets and receive payment for expenses directly associated with recruitment and selection. Once a state plan was approved by OJP, the state was to submit a budget to cover estimated payments to participants, colleges or universities, approved training providers, and police departments during the upcoming fiscal year. The interagency agreement contractually allowed for transfer of these funds, along with the formula-based payment, from OJP to the state lead agency once the budget had been approved. Funds were to be obligated at the time an agreement was signed. The interagency agreements obligated money that had been committed but unobligated in previous years under COPS, as well as money from the 1998 and 1999 appropriations. As of September 30, 1999, OJP had signed interagency agreements with 16 states. As shown in table 2, COPS obligated $7.6 million of the $90 million appropriated for the Police Corps program in fiscal years 1996 through 1999. OJP was reimbursed for the remaining $82.4 million in unobligated funds beginning in December 1998.
As of September 30, 1999, OJP had obligated $51.3 million of these available funds, which left $31.1 million still unobligated. As part of its interagency agreements with state lead agencies, OJP has begun to make formula-based payments to state lead agencies that can be used to help defray their administrative and recruiting costs. OJP is doing this under the authority of 42 U.S.C. 3788(b), which allows it to enter into interagency agreements with states on a reimbursable basis. Because 42 U.S.C. 3788(b) did not apply to the COPS office, this method of making reimbursements was not available to COPS. Under these interagency agreements, the state lead agencies are to assume primary responsibility for approving and paying Police Corps program expenditures. Under COPS, implementation of the Police Corps program got off to a slower than expected start, and the majority of participant slots remained unfilled. This state of affairs was due to a variety of causes, some of which stemmed from COPS’ failure to provide federal administrative funds and adequate staffing for the program, and others, such as the fact that the Police Corps statute did not provide funding for states’ administrative and recruiting costs, that were outside its control. COPS transferred the Office of the Police Corps to OJP in December 1998. While OJP has made significant progress in obligating funds and establishing interagency agreements with the participating states, it is too soon to tell whether OJP will succeed in increasing the number of participant slots filled and in continuing to provide timely program guidance. We provided a draft of this report to the Attorney General for comment. DOJ responded that it had no official comment. However, we met with representatives of the COPS Office and OJP, who provided technical comments on the draft. We incorporated their technical comments where appropriate.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 10 days from the date of this report. At that time we will send copies of this report to the Honorable Ernest F. Hollings, Ranking Minority Member, Senate Subcommittee on Commerce, Justice, State, the Judiciary, and Related Agencies; and the Honorable Strom Thurmond, Chairman, and the Honorable Charles Schumer, Ranking Minority Member, Senate Judiciary Subcommittee on Criminal Justice Oversight. We will also send copies to the Honorable Harold Rogers, Chairman, and the Honorable Jose E. Serrano, Ranking Minority Member, House Appropriations Subcommittee on Commerce, Justice, State, the Judiciary, and Related Agencies; the Honorable Bill McCollum, Chairman, and the Honorable Robert C. Scott, Ranking Minority Member, House Judiciary Subcommittee on Crime; and the Honorable Janet Reno, Attorney General. We will make copies available to others upon request. If you or your staff have any questions concerning this report, please contact me or Weldon McPhail on (202) 512-8777. Major contributors to this report are acknowledged in appendix V. The Police Corps Act provides funding for basic law enforcement training that is to go well beyond the “minimum standards” training available to police officers in many states. The philosophy of Police Corps training is that to serve effectively on the beat in some of America’s most challenged communities, Police Corps officers must have a solid background in traditional law enforcement, strong analytical abilities, highly developed judgment, and skill in working effectively with citizens of all backgrounds. Police Corps training is to emphasize ethics, community and peer leadership, honesty, self-discipline, physical strength and agility, and weaponless tactics—tactics to protect both officer and citizen in the event of confrontation. 
This philosophy is reinforced through a statutory requirement that Police Corps participants receive a minimum of 16 weeks of basic law enforcement training either prior to or following college graduation. This was being carried out or planned in all of the participating states. In 1998, the Police Corps Act was amended to give states the option of providing an additional 8 weeks of federally funded Police Corps training. While not specifically required by statute, the Guidelines for Training issued by the Office of the Police Corps require participating states to provide law enforcement training in a residential, live-in facility. All of the participating states required or planned to require such training. However, officials in 6 of the 19 states we surveyed indicated that the requirement that training be conducted on a live-in basis, rather than in an 8-hours-per-day nonresidential facility, was a major reason for the slow progress of their Police Corps programs, as they did not have facilities readily available for this purpose. Nine of the 19 participating states in our telephone survey indicated that their Police Corps training preference would be nonresidential or a combination of both residential and nonresidential training. The Office of the Police Corps provides financial assistance to state and local law enforcement agencies as an incentive to employ Police Corps participants. Law enforcement agencies that employ Police Corps officers are to receive $10,000 per participant for each year of required service, or $40,000 for each participant who fulfills the 4-year service obligation. As of September 30, 1999, 163 Police Corps participants had completed their degrees and training and were serving in police agencies in 7 states: Kentucky, Maryland, Mississippi, Missouri, North Carolina, Oregon, and South Carolina. As of this same date, state and local police departments with Police Corps officers on the beat had received $960,000 in assistance.
The Police Corps statute did not place any restrictions on how police departments could use this provided assistance. As a result, the police departments we contacted were using these funds for various purposes. Officials in one police department, for example, said they used the assistance money to cover the expenses of recruiting and selecting officers. Another police department used the funds to employ 10 additional police officers. Officials in one state said they placed assistance money in the general funds to pay police officers’ salaries. Table 3 shows Police Corps law enforcement payments to the states that had received payment at the time of our review and how these states used the provided funds. The Police Corps program offers college scholarships to dependent children of police officers killed in the line of duty after the date a participating state joins the program. An eligible dependent may receive up to $30,000 for undergraduate study at any accredited institution of higher education in the United States. Dependent children in this category incur no service or repayment obligation. The application process is noncompetitive. For fiscal years 1996 and 1997, the Office of the Police Corps budgeted sufficient funds to provide 68 scholarships. As of September 30, 1999, 26 of these scholarship positions remained unfilled. According to Police Corps officials, the program was making a strong effort to identify and inform qualified persons about the availability of these scholarships. State Maryland Has participated in the Police Corps Lead agency: The Governor’s Office on Crime Control and Prevention. Program funding and accomplishments As of September 30,1999, the Maryland Police Corps program had been approved for $10.2 million in funding and 140 participant positions. Seventy-eight of these positions had been filled as of that date. 
The fiscal year 2000 OJP Interagency Agreement with Maryland authorizes 30 additional participant positions and approximately $4.3 million for costs associated with the 170 participant positions approved to date. Other participants include the Baltimore Police Department (BPD) and the University of Maryland's Shriver Center, which manages program training. The Police Corps program is seen as a vehicle for broad-based improvements in Maryland policing. The BPD had received $280,000 in assistance payments, which it used to pay the salaries of the 28 Police Corps graduates it had hired. An additional 24 officers had not served long enough for BPD to be eligible for assistance payments. As of September 30, 1999, six dependent children of officers killed in the line of duty had received $84,584 in scholarships. Program limitations: According to Maryland officials, the lack of reimbursement for administrative and recruitment costs limited the program's ability to fill participant positions. Operation of the program on a reimbursable basis required detailed voucher support, which increased both the state's unfunded administrative burden and the administrative burden at the COPS office, which was understaffed. The resulting delays in reimbursement caused the state to lose interest income on its up-front funding of training expenditures. At the beginning of the program, Maryland assumed the task of developing a Police Corps model training program. The contractor, Science Applications International Corporation, failed to produce a curriculum acceptable to the Office of the Police Corps at COPS. This resulted in COPS' deferral of approval of Maryland's 1997 request for 240 additional participant positions and postponement of its scheduled training. Background: Oregon has participated in the Police Corps program since 1996. Lead agency: The Oregon State Police Criminal Justice Services Division.
Program funding and accomplishments: As of September 30, 1999, Oregon's Police Corps program had been approved for $5.1 million in funding and 80 participant positions. Sixty-nine positions had been filled as of September 30, 1999. The fiscal year 2000 OJP Interagency Agreement with Oregon authorizes 100 additional positions and approximately $2.8 million for costs associated with the 180 participant positions approved to date. Other participants include the Oregon Board on Public Safety Standards and Training and the Portland Police Bureau. The Police Corps program is seen as a way to reduce juvenile gang violence through community policing. The Portland Police Bureau had received $380,000 for employing 38 Police Corps graduates as of September 30, 1999. As of that same date, Oregon had provided two dependent children of officers killed in the line of duty with $41,086 in scholarships. Program limitations: Oregon officials attributed slow program progress to the lack of a formal contractual agreement between COPS and the state, the lack of reimbursement for administrative and recruitment costs, and delays in reimbursement of training-related expenses. Financial support from the Oregon Department of State Police ($50,000) and the Portland Police Bureau ($385,000) enabled Oregon's Police Corps program to overcome the lack of reimbursement for administrative and recruitment costs. Background: Florida first participated in the program in 1998. (The Florida Department of Law Enforcement, which initially considered the program, declined to participate in 1996 and 1997 due to the lack of reimbursement of administrative costs, the limiting of the police service requirement to 4 years, and the limited number of training slots, among other reasons.) Program funding and accomplishments: As of September 30, 1999, Florida's Police Corps program had been approved for $2.1 million in funding and 30 participant positions.
The fiscal year 2000 OJP Interagency Agreement with Florida authorizes 30 additional participant positions and approximately $3.0 million for costs associated with the 60 positions approved to date. Lead agency: Florida State University's (FSU) School of Criminology and Criminal Justice. Other participants include the Duval and Hillsborough County Sheriff's Departments and the Tampa and Tallahassee Police Departments. The objectives of the Florida Police Corps program are to (1) recruit college graduates of exceptional promise into the Police Corps, (2) provide an exemplary program of training, and (3) broaden the state's commitment to community policing. In its 1998 plan, Florida indicated its first 30 recruits would start community patrol in May/June 1999. However, various problems (see Program limitations) pushed back Florida's Police Corps program, and as of December 1999, a program official indicated that 15 to 20 college graduates were expected to attend Florida's first training session, scheduled for March 2000. Program limitations: According to Florida program officials, the lack of agreement between Florida and COPS on reimbursement of administrative and recruitment costs resulted in many of the 30 participant positions authorized in the 1998 plan remaining unfilled and postponement of planned training sessions. The FSU Contracts and Grants Department did not believe COPS' approval of its plans was sufficiently authoritative to establish a funded cost account for the Police Corps program. To overcome the lack of administrative and recruitment cost reimbursement, FSU obtained $50,000 from the Florida Department of Law Enforcement to establish a Police Corps account in the FSU Contracts and Grants Department and start recruitment and curriculum development. As of September 30, 1999, Florida had not awarded any scholarships to children of officers killed in the line of duty. Background: Texas has participated in the Police Corps program since 1997.
Program funding and accomplishments: As of September 30, 1999, the Texas Police Corps program had been approved for $3.3 million in funding and 60 participant positions, 44 of which had been filled. Six participants had received their degrees but had yet to be trained. Lead agency: Texas Commission on Law Enforcement Officer Standards and Education. The state has responsibility for curriculum and training in 105 licensed academies, and the commission is also responsible for Police Corps program administration. As of September 30, 1999, two dependent children of officers killed in the line of duty had received $34,569 in scholarships. The Police Corps program is seen as a way to address the state legislature's concerns about the need for more and better trained officers in small, rural, geographically remote law enforcement agencies. Program limitations: According to Texas officials, state Police Corps program limitations included lack of administrative funding, inadequate procedures for handling student vouchers, lack of a standardized training curriculum, and inexperienced staff. As of December 1999, Texas had yet to conduct any training due to the lack of a standard Police Corps training curriculum and the Police Corps residential training requirement. One graduate was slated to attend training in Mississippi while Texas was in the process of establishing its own training academy. As of December 1999, several participants had withdrawn from the program because of training delays. Following is an example of the questionnaire for participating states. Interviews were conducted by telephone. Hello. My name is __________ and I'm with the U.S. General Accounting Office (GAO), the investigative agency of the U.S. Congress. I'm calling to speak with ______________________, whose name was provided by the Department of Justice as a point of contact for your state's Police Corps Program.
Initial Point of Contact: Provide the following information about the initial point of contact. Lead Agency: School of Criminology and Criminal Justice, FSU. Police Corps Web site: _ Provide the following information about the alternate point of contact. When you have the right person on the phone, proceed with: Hello. My name is ___________, and I'm with the U.S. General Accounting Office (GAO), the investigative agency of the U.S. Congress. We are conducting a study of the Police Corps Program, which was part of the Violent Crime Control and Law Enforcement Act of 1994. Senator Judd Gregg, Chairman of the Subcommittee on Commerce, Justice, State, the Judiciary and Related Agencies, requested this study. The Chairman is most interested in knowing how the Department of Justice (DOJ) has managed program funds. Specifically, the subcommittee is concerned about how funds were obligated during the first 3 years of the program. We were also asked to review the program areas of training, assistance to law enforcement agencies, scholarships to dependent children, and student education. Are you the person I should interview? (If not, obtain alternate interviewee information and record it above.) A. I'd like to conduct a structured interview with you that should take about 20 minutes. Do you have time to speak with me now? Yes ( ) No ( ) B. When would be a good time for me to call you back? Date and time: ___________________________________ 1. In what year did your state first apply for participation in the Police Corps Program? 2. When was your state plan first approved? Date (mo. and yr.) 3. Did your state conduct a feasibility study or any other analysis for participating in the Police Corps Program? Don't know…………3 4. Request that a copy of the feasibility study (and/or other supporting data that is available) be sent to: U.S. General Accounting Office, Suite 1010, World Trade Center, 350 South Figueroa Street, Los Angeles, CA 90071 5.
Was your first plan approved in full or was approval conditional? Full approval…………6 = 32% Conditional approval…………13 = 68% 6. In what areas did DOJ impose conditions? 7. Did the changes required of your plan by DOJ delay the start of your program? If yes, how long in months? 8. I am going to read to you a list of reasons why states may not have made faster progress in the start-up of their Police Corps program. For each reason I read, please indicate whether it was a very major reason, a major reason, a minor reason, or not a reason at all. (Comments provided below.) 9. Did your state Police Corps program experience delay by DOJ in any of the following areas? If yes to any area, please provide comment(s) and also send any available supporting documentation to Marco Gomez (see question 4 above). 10. Also, if "yes," did any of the delays cause adverse impact to your state's Police Corps program? If yes, please explain: 11. Is your state's Police Corps training residential, nonresidential, or a combination of both? Nonresidential…………( ) Combination of residential and nonresidential…………2 12. Does DOJ require residential training? Yes…………( ) Cont. with qst. 13. No…………1 Skip to qst. 15. Don't know…………2 Skip to qst. 15. 13. If "yes," does your state agree with the emphasis on residential training? 14. What is your state's training preference, residential or nonresidential? Combination of residential and nonresidential…………7 Don't know…………1 15. Does Police Corps training cover your state's POST requirements? Don't know…………( ) 16. If not, is additional training required for your state's Police Corps graduates? Not applicable…………16 17. In which of the following ways does your state promote the Police Corps program? (Read options, and check all that apply.) Job fairs…………11 Campus recruitment…………8 Other(s)…………7 List other(s): Recruitment is continuous, ongoing 18.
Does your state conduct outreach to children of officers killed in the line of duty? Yes…………16 Cont. with qst. 19. No…………2 Skip to qst. 20. Don't know…………0 Skip to qst. 20. Not applicable…………1 19. Does your state do outreach to dependent children through: (Read options) General statewide publicity…………0 Please explain how your state meets the requirement to recruit minorities and women. Do you have any other comment about the program you care to share with us? Thank you very much for your help, good-bye. Following is an example of the questionnaire for nonparticipating states. Interviews were conducted by telephone. Hello. My name is ______________, and I'm with the U.S. General Accounting Office, the investigative agency of the U.S. Congress. At the request of Congress, we are conducting a study of the Department of Justice Police Corps Program that was included as part of the Violent Crime Control and Law Enforcement Act of 1994. I would like to speak with a representative of [name of state] who could answer questions about the Department of Justice's outreach to [name of state] and the reasons [name of state] is not participating in the program. Are you the right person to speak with? (If not, determine who is.) A. I'd like to conduct a structured interview with you that should take about 10 minutes. Do you have time to speak with me now? Yes…………( ) Go to question 1. No…………( ) B. When would be a good time for me to call back? Enter the following information about the interviewee. 1. I am going to read to you a list of reasons why states may not participate in the Police Corps program. For each reason I read, please indicate whether it was a very major reason, a major reason, a minor reason, or not a reason at all for why your state decided not to participate in the program. 2. Did [name of state] prepare a feasibility study for participating in the Police Corps Program? Yes…………(4) No…………(7) Don't know…………(1) If "yes" in question 2, read: 3. Are there data available, other than the feasibility study, in support of the reasons cited above? Yes…………(0) No…………(12) If yes, request that a copy of the feasibility study (and/or other supporting data that is available) be sent to: Marco F. Gomez, USGAO, Suite 1010, World Trade Center, 350 Figueroa St., Los Angeles, Calif. 90071, or faxed to 213-830-1180. Ask if there are any other comments about the Police Corps program you care to share with us: _________________________________________________________________ Thank you very much for your help. In addition to those named above, James Moses, Marco Gomez, Jan Montgomery, Nancy Finley, and Michael Little made key contributions to this report. Ordering Copies of GAO Reports The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Orders may also be placed in person at Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC; by calling (202) 512-6000; by using fax number (202) 512-6061; or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
Viewing GAO Reports on the Internet For information on how to access GAO reports on the Internet, send an e-mail message with "info" in the body to: or visit GAO's World Wide Web Home Page at: Reporting Fraud, Waste, and Abuse in Federal Programs To contact GAO FraudNET use: Web site: http://www.gao.gov/fraudnet/fraudnet.htm E-Mail: fraudnet@gao.gov Telephone: 1-800-424-5454 (automated answering system)

Pursuant to a congressional request, GAO reviewed the Department of Justice's (DOJ) implementation of the Police Corps program under the Community Oriented Policing Services (COPS) office and, more recently, the Office of Justice Programs (OJP). GAO noted that: (1) the Police Corps program got off to a slower than expected start, resulting in the majority of participant slots remaining unfilled; (2) as of September 30, 1999, 433 of the 1,007 participant positions funded for fiscal years 1996 through 1998 had been filled; (3) according to federal and state officials, two of the factors that contributed to this slow start were as follows: (a) COPS dedicated insufficient staff to the Police Corps program, which led to delays in providing program guidance, processing program applications and payments, and answering participants' questions about the program; and (b) the Police Corps statute did not provide funding to pay states' costs for program administration or for recruitment and selection of program participants; (4) COPS' operation of the Police Corps as a direct reimbursement program made determining program status difficult, as it slowed the rate at which funds were obligated; (5) according to a DOJ official, COPS based its decision to operate the Police Corps program as a direct reimbursement program on the language of the statute; (6) under direct reimbursement, funds were not considered obligated when state plans were approved; (7) instead, COPS considered funds obligated only when an individual check had been sent to a college or university, in-service Police Corps
officer, approved law enforcement training provider, or participating police department; (8) on December 10, 1998, responsibility for the Police Corps program was transferred from COPS to OJP; (9) OJP devoted seven full-time staff positions to process program applications and payments and respond to participant queries faster; (10) under the authority granted OJP under 42 U.S.C. 3788(b), which allowed OJP to enter into interagency agreements with states on a reimbursable basis, OJP opted, through the use of such agreements, to make a formula payment that can be used to help defray states' recruiting and administrative costs; (11) this authority was not available to COPS; (12) while these interagency agreements only recently went into effect, they should make money more readily available to states trying to implement their Police Corps programs; (13) as of September 30, 1999, OJP had obligated $51.3 million of the $82.4 million available to the program; and (14) it is too early to determine the effects of the transfer of the Police Corps program from COPS to OJP on the factors contributing to the slow start.
The federal government's real property portfolio reflects the diversity of agencies' missions and includes a variety of building types, such as office buildings, courthouses, post offices, hospitals, prisons, laboratories, border stations, and park facilities. That portfolio includes many historic buildings held by GSA, NPS, and VA. GSA's and NPS's real property policies place an emphasis on historic building stewardship. GSA serves as broker and property manager for many civilian federal agencies, while NPS manages the nation's national park system for current and future generations. GSA established its "Legacy Vision" in 2002 as a strategy for meeting its federal historic stewardship responsibilities and declared a policy preference for using, preserving, and leasing historic buildings. Similarly, in providing for the stewardship of the nation's cultural resources, NPS reinforced its commitment to preserving historic buildings in its 2011 Call to Action: Preparing for a Second Century of Stewardship and Engagement. VA's core mission is to provide care and services to the nation's veterans. While hospital buildings, some of which are historic, are critical to providing healthcare services, VA began to realign its real property portfolio in 2004 to better respond to the demographic shifts and evolving needs of both its older and younger veteran populations. As VA realigns its portfolio, it has determined that many of its historic buildings are not suitable to support modern healthcare delivery and are now inactive or excess to VA's needs. In NHPA, Congress expressed concern that historic properties significant to the nation's heritage—which include both public and privately owned buildings—were being lost or substantially altered.
Thus, NHPA authorized the Secretary of the Department of the Interior to maintain and expand the National Register as a means of identifying historic properties, including those owned by the federal government, and NHPA requires federal agencies to identify and nominate their historic properties to the National Register. The National Register comprises many different types of historic properties, including historic districts, sites, buildings, structures (such as a bridge), and objects (such as a fountain) that are significant to American history, architecture, archaeology, engineering, and culture. A building is generally not eligible for National Register listing until it is at least 50 years old, unless its historic significance is considered exceptional. While many federal buildings are historic because of the passage of time and a corresponding recognition of their historical or architectural significance locally or regionally, a smaller subset are treasured assets considered significant to the nation's history. In recognition of this, the National Register also includes buildings meeting the criteria for a national historic landmark. National historic landmarks are designated by the Secretary of the Interior as possessing exceptional value or quality in representing the heritage of the nation. NPS reports that public and privately held national historic landmarks constitute more than 2,400 of almost 87,000 "entries" (i.e., listings) in the National Register (nearly 3 percent). Other statutory and regulatory provisions also govern agencies' stewardship of their historic buildings.
For example, agencies are required to: (1) assume responsibility for the preservation of their historic properties; (2) consider, when a historic property is no longer needed, alternative uses or leasing the property to persons or organizations if the action will preserve the property; and (3) consult with ACHP and non-federal stakeholders, such as state and tribal historic preservation officers, before undertaking actions that may affect a historic property listed or eligible for listing on the National Register. NHPA does not mandate a particular governmental decision, but instead mandates a particular process for reaching decisions. The Secretary of the Interior's Standards and Guidelines for Federal Agency Historic Preservation Programs indicate that where it is not feasible to maintain a historic building or to rehabilitate it for contemporary use, an agency may decide to modify it in ways that are inconsistent with the Secretary's treatment standards, limit maintenance and repair investments in the building, or demolish it. Such a decision can be reached only after appropriate consultation with stakeholders as required by NHPA. Federal historic buildings that are declared surplus may be made available for other uses, such as a public benefit conveyance. Under the public benefit conveyance program, state or local governments and certain tax-exempt nonprofit organizations can obtain surplus federal real property, including historic buildings, for an approved public benefit use, such as for educational facilities or to assist the homeless. Additionally, property declared excess to the federal government's need may be sold. Certain land-holding agencies have independent authority to sell real property. In addition to NHPA and other statutory and regulatory provisions, several Executive Orders also provide guidance for the management of federal historic buildings.
Executive Order 13287, Preserve America, sets forth federal historic stewardship requirements, including requiring executive agencies to report to ACHP on their efforts to identify, protect, and use historic federal properties, which include historic federal buildings. In fiscal year 2011, GSA reported to ACHP that more than one-third of the 1,676 federally-owned buildings under its custody and control are more than 50 years old and 479 buildings are listed on or eligible for the National Register. In fiscal year 2011, NPS reported to ACHP that it held historic properties totaling 26,636 buildings and structures (out of more than 70,000 real property assets). It further indicated that among these historic properties, 1,482 are listed on the National Register. In fiscal year 2011, VA did not report to ACHP the number of historic buildings that it owns or has listed on the National Register. Executive Order 13514, Federal Leadership in Environmental, Energy, and Economic Performance, requires agencies to ensure, among other things, that new construction, major renovations, repairs, and alterations of federal buildings comply with the Guiding Principles for Federal Leadership in High Performance and Sustainable Building (Guiding Principles), such as optimizing energy performance and conserving water. Also, agencies should ensure, when rehabilitating historic buildings, that sustainable technologies (to achieve energy and environmental conservation goals) are used to promote the long-term viability of the buildings. We have reported that the Office of Management and Budget (OMB) has incorporated information about agencies' progress in implementing those green building requirements into scorecards that OMB uses to rate agencies' performance, but that agencies face challenges.
Executive Order 13327, Federal Real Property Asset Management, requires GSA to collect data from executive branch agencies describing the nature, extent, and use of federal real property. The data are reported within FRPP and include data on the historic status of federal buildings and other data, such as a building's condition and whether the building meets the Guiding Principles. The FRPP is maintained by GSA on behalf of FRPC. The FRPP database includes approximately 400,000 buildings that are owned and leased by the federal government, many thousands of which have been determined to be historically significant. GSA releases a summary-level FRPP report each fiscal year to provide an overview of the federal government's real property, but those reports do not identify how many historic federal buildings are held by individual agencies or the executive branch as a whole. GSA and FRPC require agencies to update their FRPP data on a fiscal year basis. FRPP guidance to agencies on coding historic status indicates agencies should code their federally owned buildings within FRPP as one of the following: national historic landmark; National Register listed; National Register eligible; non-contributing element of a national historic landmark or National Register district; not evaluated; or evaluated, not historic. The three agencies we reviewed are taking steps to improve the management of their historic buildings. For example, all three agencies have undertaken portfolio-wide management initiatives directed at nominating historic buildings in their portfolios to the National Register as required by NHPA. In addition, among the buildings we reviewed, we found examples where agencies were utilizing their historic buildings to the extent feasible for current mission needs.
We also found that when the buildings were no longer suitable for current mission purposes, agencies were leasing all or part of some historic buildings to non-federal entities, as authorized by NHPA and other real property authorities. Also, we found that GSA, NPS, and VA were implementing projects in some historic buildings to improve their sustainable performance. All three agencies have undertaken efforts in recent years to identify the historic buildings across their real property portfolios and nominate those buildings to the National Register, and they are working to manage those buildings in an effort to comply with the requirements of NHPA and the executive orders. For example, GSA started a multiyear initiative in 2004 to assess many of its older buildings that it believed were eligible for listing on the National Register. GSA officials reported that this effort was nearing completion and had resulted in National Register nominations for more than 150 buildings. In particular, GSA evaluated the National Register eligibility of all of its legacy monumental buildings as part of this effort. These include buildings such as courthouses, post offices, and agency headquarters, which were designed to serve symbolic, ceremonial, and functional purposes. In addition, GSA's "Legacy Vision" policy in 2002 laid the groundwork for the agency's current stewardship efforts, which are focused on the preservation, use, and disposal of historic buildings, as appropriate. GSA also used American Recovery and Reinvestment Act of 2009 (Recovery Act) funding to rehabilitate and modernize 150 of its historic buildings.
These projects were intended to address GSA's historic building repair and alterations backlog and ranged from comprehensive modernizations of entire buildings, such as the Federal Building at 50 United Nations Plaza in San Francisco, California, to limited-scope sustainability projects such as the roof replacement project at the Milwaukee Federal Building and Courthouse in Wisconsin. Similarly, NPS implemented an agreement with ACHP to address NHPA compliance and streamline consultation for its projects at national parks nationwide. NPS also recently completed its first 5-year cycle of comprehensive condition assessments on what NPS termed a critical subset of its buildings, which included accessibility assessments to identify barriers to disabled persons. This information will be used to prioritize preservation and improvement of its historic buildings, among others, and help ensure compliance with federal accessibility requirements. In addition, NPS published The Secretary of the Interior's Standards for Rehabilitation and Illustrated Guidelines on Sustainability for Rehabilitating Historic Buildings in 2011, which, for example, outlined approaches for improving the energy efficiency of historic buildings while preserving their historical character. To enhance the management of its portfolio of historic buildings, VA began two multi-year national studies of 90 of its medical centers that resulted, as of April 2012, in 45 in-process or recently completed National Register district nominations. In addition, five individual medical center National Register nominations not associated with these studies were completed and four VA campuses were designated national historic landmarks in the last 3 years. Further, VA recently updated its policies and procedures governing historic preservation, including identifying and evaluating historic properties and complying with various historic preservation laws and regulations.
In 2011, VA also completed a review and identified unused and underused buildings—many of which are historic—with the potential to develop, through public-private partnerships, affordable housing for homeless or at-risk veterans and their families. VA has also developed training on NHPA requirements for VA field staff. This training, for example, focuses on the need to consult with stakeholders such as ACHP and state historic preservation officers as required by NHPA and its implementing regulations. Among the buildings we reviewed, we found examples where GSA, NPS, and VA preserved, used, and adapted historic buildings to meet their current mission needs. When historic buildings were either excess or unsuited for mission needs, we found several instances in which agencies leased part or all of a building to a non-federal entity that could use the building while preserving its historic character. As noted previously, these cases provide useful insights into agency actions related to historic preservation but are not generalizable to agencies' actions across their historic building portfolios or across the government. Among the 31 historic buildings we reviewed, we found: 20 buildings that were used by the federal government; 5 buildings that were used, in part, by federal agencies while some space within those buildings was leased to, or used under a cooperative agreement by, non-federal entities; 4 buildings that were leased in their entirety to non-federal entities; and 2 buildings that were vacant. Continued use of historic buildings to meet mission needs often involves balancing the need to modernize building systems, such as mechanical systems, while preserving historical features. For example, in its renovation of the Stewart Udall Building in Washington, D.C.
(Department of the Interior headquarters), GSA installed fire-rated emergency egress stairs within office space that is not historically significant in a manner that preserved the building’s historic corridors. In another example, GSA repaired and restored nearly 500 historic wooden windows in the Milwaukee Federal Building and U.S. Courthouse—built in 1899—in Wisconsin, while also retrofitting the window frames with modern insulated glass. Figure 1 shows a representative window before the rehabilitation (left), a window after the rehabilitation (center photos), and an exterior view of the building (right). Among the buildings we reviewed, we found examples where agencies sought to lease historic buildings that were not used for mission needs to non-federal entities that could fund their preservation, maintenance, and repair, and use them in ways that were sometimes supportive of the agency’s mission. As previously mentioned, the selected agencies were leasing out either all or part of 9 of the 31 historic buildings we reviewed. For example, when the Golden Gate National Recreation Area was created in 1972 in California, a number of Department of Defense installations were transferred to NPS, including Fort Mason. Through a public-private partnership, NPS has leased the 100-year-old Pier #2 Shed at Fort Mason—a former military warehouse that served as an embarkation point for the U.S. Army during World War II—to a non-profit group that facilitates performing arts events within the building, thus providing a cultural resource to the public consistent with NPS’s mission. See figure 2. When a federally owned historic building becomes underutilized because it no longer serves mission needs, agencies may sell a building or exchange it for comparable historic property so long as the exchange will ensure the preservation of the historic property. 
Although the 31 buildings we reviewed did not include any executed sales or exchanges, at the time of our review, NPS was considering the sale or exchange of Old City Hall—at Lowell National Historic Park in Lowell, Massachusetts—which NPS has leased to a commercial bank for more than 25 years. Outside the 31 historic buildings included in our review, examples of sales or exchanges of federally owned historic buildings include a former U.S. Courthouse in Cedar Rapids, Iowa, that was transferred to the City of Cedar Rapids in 2010 in exchange for a site to support the construction of a new federal courthouse. In addition, the building that formerly housed the Immigration and Naturalization Service in Seattle, Washington, was sold at auction in 2008 for $4.4 million. In a third example, GSA sold a historic building in Washington, D.C., in 2001 that housed the Clara Barton Apartment and Missing Soldiers Office. The sale included a preservation easement that provides for the operation of a museum within the building by a non-profit group to recognize Clara Barton’s efforts to aid Civil War soldiers. In addition to efforts to preserve and use historic buildings, we found that GSA, NPS, and VA were implementing projects in some of the historic buildings we reviewed to improve the sustainable performance of the buildings and begin meeting the Guiding Principles, as required by Executive Order 13514. Because buildings and their sites affect the natural environment, the economy, and the health of people who use them, the Guiding Principles established a common strategy for federal agencies to use for planning, designing, constructing, and operating their buildings. 
More specifically, the Guiding Principles address five performance goals: optimizing energy performance; integrating the planning, design, and construction process; enhancing indoor environmental conditions, such as air quality; reducing water consumption and storm runoff; and reducing the environmental impact of materials used to construct and operate buildings. Sustainable projects we observed at some of the buildings we visited included improved energy-efficient heating and cooling systems, “green” vegetated and “white” reflective roofs, and window retrofits or replacements, among others. For example, as part of the modernization of the federal building located at 10 West Jackson Boulevard in Chicago, Illinois, GSA is installing energy-saving “daylight harvesting” technology that automatically adjusts office lighting according to the amount of natural light entering through the building’s windows. GSA recently installed both green roof and white reflective roof technologies to reduce the amount of heat gain and loss through the building’s roof. See figure 3 below. According to OMB’s January 2012 sustainability scorecards for the agencies, GSA’s and VA’s efforts to assess and incorporate sustainable Guiding Principles in their buildings are on track, but Department of the Interior’s effort is not. While NPS has a Sustainable Buildings Implementation Plan, agency officials report that they have not assessed many of their historic buildings to determine whether they currently meet the Guiding Principles. NPS officials told us that their efforts thus far to comply with the Guiding Principles have focused largely on new construction projects—such as new visitor centers—or on existing building rehabilitations rather than on historic buildings where there are no planned projects. 
While NPS is not currently on track to meet the Guiding Principles as assessed by OMB, we found that NPS had implemented sustainable projects in 9 of 13 NPS buildings we visited and, in some cases, had conducted energy audits to identify where future projects could improve the sustainable performance of its historic buildings. We discussed with three outside experts the issues federal agencies face in making historic buildings sustainable. Based on our discussions with those experts—and a review of professional articles written by those individuals—we found that many historic buildings may be inherently sustainable. In general, historic buildings built before World War II often incorporated many sustainable principles, such as orienting a building for solar efficiency and making effective use of natural light and ventilation. In addition, generally, the rehabilitation and reuse of a historic building consumes fewer raw materials and affects the environment to a lesser degree than constructing a new building of comparable size. All three experts indicated that NPS has generally been an effective advocate for disseminating information about incorporating sustainable improvements in historic buildings. They identified actions—by NPS’s National Center for Preservation Technology and Training or its Technical Preservation Services—to make sustainable green building information available to the preservation community, such as NPS’s recently released technical preservation brief entitled Improving Energy Efficiency in Historic Buildings. Independent of our discussion with these experts, we found that GSA and VA have been partnering with NPS as well as other federal agencies—such as the Department of Energy—to further advance federal initiatives aimed at improving the sustainability of historic federal buildings. 
The three agencies we reviewed face challenges related to the functionality of historic buildings, the amount of funding available for preservation projects, and federal requirements to consult stakeholders on historic preservation. Maintaining and making historic buildings functional for contemporary purposes in a constrained budget environment poses a challenge. Also, competing stakeholder interests can arise when agencies consult with stakeholders. Compounding these challenges, agencies are required to identify and report on their historic buildings, but the data they report are not complete or consistent. Functional and budgetary limitations as well as competing stakeholder interests have been long-standing challenges in the area of federal real property management and to agencies’ efforts to preserve historic buildings. These challenges are significant for the selected agencies, given that they have reported that identified historic buildings represent approximately 25 to 30 percent of their buildings. Based on our site visits and discussions with agency officials, we found that the three agencies faced challenges in rehabilitating and modernizing historic buildings for contemporary use because of the buildings’ age, the specific characteristics of their original designs, and their particular historical features. For instance, it can be difficult to address current building codes in some historic buildings, install modern building systems—particularly with regard to heating, ventilation, and cooling—and provide access for disabled persons. For example, NPS officials reported they have not been able to improve accessibility at the John F. Kennedy house in Brookline, Massachusetts—which now serves as a museum—as it would adversely affect the historic character of the building. In lieu of being able to make accessibility improvements to the John F. 
Kennedy house’s narrow hallways and stairwell, NPS officials report they are considering leasing space in a nearby commercial building to create a new visitor center that could accommodate visitors with special accessibility needs and provide additional interpretation exhibits for all visitors. Similarly, in the case of the Old Grist Mill—built sometime around 1735—at VA’s Perry Point Medical Center in Maryland, installing modern building systems and making building code improvements—such as adding a bathroom and a stairwell—will be challenging. VA’s plan to reuse the building as a training facility will require penetrating some historic beams and floors. See figure 4 for representative exterior and interior photos of the Old Grist Mill as it exists today and the proposed design for the adaptive reuse of the building. We have previously reported on how real property funding limitations—such as funding needed to maintain, repair, and modernize federal buildings—have been a long-standing challenge for agencies and that agencies’ actions to defer maintenance have resulted in large backlogs of deferred maintenance and the deteriorated condition of some federal buildings, including historic buildings. Agencies’ total annual budgets allotted for historic preservation are difficult to determine because funding requested to implement projects to maintain, repair, rehabilitate, and modernize historic buildings is dispersed across multiple budget accounts. Projects are identified within agencies’ budgets as line items, and funding for historic preservation can also be allotted in programmatic and operating budget accounts for conducting other activities like routine maintenance. Also, funding across those sources includes funding for non-historic buildings. 
We recently reported that GSA has identified a $4.6 billion maintenance and repair liability (i.e., needed projects) for its federally owned real property over the next 10 years, which includes both historic and non-historic buildings. According to GSA officials, its historic buildings require comparatively more maintenance and repair work than its non-historic buildings. We also reported that the annual funding Congress has made available to GSA for obligation from the Federal Buildings Fund has trended downward in recent years, and much of this reduction has been absorbed by the repairs and alterations funding account, meaning that GSA has reduced its spending on repairs and alterations. According to GSA officials, the constrained federal budget and competing project demands have affected GSA’s ability to complete historic building modernizations. For example, GSA allocated $162 million in Recovery Act funding to undertake the first phase in its renovation of half of the 95-year-old GSA headquarters building in Washington, D.C.; however, GSA does not have funding to complete the second phase of the project. Similarly, the final phase of GSA’s 12-year, six-phase modernization of the Department of the Interior’s headquarters building in Washington, D.C., has been delayed pending funding availability. GSA officials reported that because of shrinking budgets and increasing reinvestment needs within GSA’s real property portfolio, GSA evaluated the risks of delaying various projects and determined that delaying the completion of the GSA headquarters building project posed a lower risk as compared with more critical projects. GSA officials further reported that GSA’s fiscal year 2013 construction program incorporates OMB’s response to the nation’s economic distress by including fewer projects and focusing on critical needs such as safety improvements. 
According to NPS’s fiscal year 2013 budget justification, less than 60 percent of its historic buildings and structures are in good condition. NPS headquarters officials report that limited funding is the greatest challenge NPS faces in maintaining its historic buildings. We found that some NPS sites have experienced maintenance staffing reductions as NPS has faced declining operating budgets. For example, the maintenance unit that jointly serves the Fort McHenry National Monument and Historic Shrine and Hampton National Historic Site, both located in Maryland, has been reduced from 15 to 10 positions over the last 3 years. One NPS official commented that staffing reductions pose a risk to historic buildings because maintenance projects may get deferred, which can lead to such projects needing to be addressed later as larger, bundled, and more costly capital projects. Delaying maintenance projects may also cause irreversible damage to historic buildings, according to the NPS official. In reviewing NPS’s fiscal year 2013 budget justification, we found that NPS has requested $96.3 million for its cyclical maintenance program, aimed at conducting preventative maintenance on a predictive cycle to keep buildings—both historic and non-historic—in acceptable condition. However, NPS’s fiscal year 2013 budget justification also shows that its annual cyclical maintenance requirements for its buildings exceed $450 million. Since NPS’s budget request is substantially less than its stated requirement, it is likely that some maintenance projects will be deferred. In June 2012, VA reported to us that much of its inventory is over 50 years old, with an average building age of 57 years. In addition to the age of its buildings, VA reported that many of those buildings have been designated as historic and many are in poor condition. 
In VA’s fiscal year 2013 budget justification, 18 of 21 Veterans Integrated Service Networks (i.e., regional administrative offices for VA hospitals, each comprising multiple hospital campuses) reported that, in general, their aging and historic buildings are a significant infrastructure challenge because of the poor condition of many of the buildings, the functional limitations of some historic buildings, and stakeholder interests about rehabilitating, reusing, or disposing of historic buildings. VA’s fiscal year 2013 budget request included $1.1 billion for major and minor construction projects, which includes funding for rehabilitation of both non-historic and historic buildings. The budget request also included $712 million to fund non-recurring maintenance requirements in VA’s existing buildings, including historic buildings. The latter includes funding for repairs and life-cycle projects, such as modernizing mechanical or electrical systems and replacing windows and roofs. However, VA’s budget request also shows it would need over $9 billion to adequately address the condition deficiencies in its buildings. One VA official indicated that VA expects to seek funding in future budget requests for projects to correct those deficiencies. We have reported that in addition to Congress, OMB, and real property-holding agencies, several other stakeholders have an interest in how the federal government carries out its real property acquisition, management, and disposal practices. In the case of historic buildings, these stakeholders may include, but are not limited to, state, local, and tribal governments; business interests in the local communities; historic preservation groups; and the general public. For example, in the case of VA, veterans’ organizations have had an interest in being consulted on VA’s plans to reuse or demolish its historic buildings and how those plans affect the services provided to veterans. 
Competing interests over how to reuse a historic building, or whether to demolish a building, may arise between an agency and its stakeholders. As a result, final decisions about a property may reflect broader stakeholder considerations that may not necessarily align with what an agency views as the most cost-effective or efficient alternative. Stakeholders for historic buildings include state historic preservation officers and ACHP, both of which have a role in consulting and advising federal agencies on preservation, repair, or alteration of historic buildings. For example, in 2011, the California state historic preservation officer was concerned that VA had not solicited his office’s consultation on some projects at the San Francisco VA Medical Center, which included the development of a master plan to address the campus’ future needs and its effect on the site’s historic buildings. We found that VA is reexamining its master plan—and the extent of new construction proposed on the campus—to try to address concerns raised, in part, by the California state historic preservation officer. According to ACHP officials, preservation organizations, such as the National Trust for Historic Preservation, and members of the public may also have an interest in being consulted about decisions affecting federal historic buildings. Among the buildings we reviewed, we found examples where competing stakeholder interests have affected the preservation, reuse, or lease of historic buildings. For example, VA attempted to use its enhanced-use lease authority to enter into a long-term public-private partnership with a non-federal entity for the use of some of VA’s historic buildings in Milwaukee, Wisconsin. However, public stakeholder groups raised concerns about aspects of the lease proposal and the related construction plans for a high-tech business park, which contributed to the failure of the Milwaukee proposal. 
We have previously reported on VA’s challenges with its non-federal stakeholders when trying to implement plans to repurpose some historic buildings. The failure of enhanced-use lease negotiations in Milwaukee has, in part, contributed to the inability to find a suitable use for one building in particular, the “Old Main” hospital building, which is in an advanced state of deterioration. During our site visit, we observed that the roof was partially collapsed. ACHP officials said that VA funding for maintenance of the building was severely limited over many years and that VA’s medical center staff lacked familiarity with the historic preservation review process, which contributed to the building’s current condition. In 2011, VA and ACHP initiated consultations—which included NPS, the state historic preservation officer, veterans’ groups, and others—to discuss planned projects on the historic campus, including a project to stabilize Old Main from further collapse. The work to stabilize Old Main began in September 2012. The medical center director told stakeholders that VA will continue to seek alternative uses for Old Main. See figure 5 for representative views of the exterior and interior of the building. Compounding the challenges the three agencies face in managing their historic buildings, we also found that the data the three agencies reported on historic buildings were not complete or consistent. FRPP was intended to provide a comprehensive database of federal buildings, including identifying historic buildings, but data collection and control issues have hindered the reporting of complete and consistent data. However, if data reported to FRPP were improved, FRPP could be used as a vehicle to strategically manage and oversee the government’s historic buildings. Under Executive Order 13287, agencies are required to report to ACHP on their progress in identifying, protecting, and using historic buildings and other properties, as well as their condition. 
ACHP consolidates agency-reported data every 3 years into the Preserve America report on the state of the federal government’s historic properties. The three selected agencies, however, did not report consistent information on their historic properties. For example, in 2011, GSA reported to ACHP the total number of historic buildings it evaluated and nominated to the National Register over the last 3 years, but NPS did not report how many additional buildings achieved historic status and were listed on the National Register within the reporting period. Also, according to an ACHP official, VA did not meet ACHP’s fiscal year 2011 reporting requirement because of VA’s internal review processes. Therefore, ACHP could not report the number of historic buildings held by VA in its recent Preserve America report. As noted, NPS manages the National Register. Historic property records are listed on the National Register after a building is nominated by an agency and the NPS agrees with the agency’s determination that the building meets the historic designation criteria. The National Register is intended to be an authoritative guide and planning tool to identify properties agencies should consider for protection and, before undertaking a project related to such properties, to provide ACHP, state historic preservation officers, and other stakeholders a reasonable opportunity to comment on the project. According to GSA and VA officials, the data in the National Register are not complete. Specifically, GSA and VA are still working to complete backlogs of National Register nominations to fully report on historic buildings that the federal government owns. GSA still needs to conduct evaluations on 5 percent of its buildings over 50 years old, and VA is still conducting evaluations for 30 of its medical campuses, which encompass hundreds of buildings, built approximately between 1918 and 1960. 
Further, according to the official who manages the National Register, listings are typically not updated to denote the federal agency currently responsible for a building or the non-federal entity to which a building was sold or transferred. Rather, listings generally reflect the federal agency that was responsible for the building at the time of its nomination. Executive Order 13327 directs the Administrator of GSA, in consultation with FRPC, to establish and maintain a database (which became the FRPP) and to establish data and information technology standards to facilitate reporting on a uniform basis. In June 2012, we reported that data elements in the FRPP database are not always defined and reported consistently and accurately. In our current review of agencies’ FRPP historic building data, we found that the historic status of over 75 percent of GSA buildings and 63 percent of NPS buildings is categorized as “not evaluated” in FRPP. See table 1 below for examples of inconsistencies we identified. A noteworthy example is the West Wing of the White House, which is a GSA-held property that was designated a national historic landmark in 1960, but is listed as “not evaluated” in FRPP. In addition, FRPP data for NPS in fiscal year 2011 showed that NPS had almost 1,500 national historic landmark buildings, while its reporting to ACHP in 2011 indicates NPS had 177 national historic landmarks. These data inconsistencies result from FRPP’s lack of a status code for buildings that are “contributing” elements in a national historic landmark (or National Register listed) site or district. We found that in its internal data on historic buildings, NPS categorizes many buildings as “contributing” historic buildings. For example, the Frederick Law Olmsted House in Brookline, Massachusetts, is reported as a single national historic landmark site on the National Register, but comprises five buildings, including a shed and barn. 
While the house is categorized as “nationally significant,” the historic significance of the shed and barn is categorized as “contributing.” In 2009, the Department of the Interior recommended that FRPC add a new “contributing” category within FRPP coding options for historic status to better clarify how many buildings have actually been designated as historic, particularly for those within national historic landmark and National Register listed sites and districts. FRPC did not implement the recommendation, in part, because it was considered too specific to the Department of the Interior’s portfolio. However, we found this recommendation to be relevant to many of the historic buildings reported by VA and GSA. For example, in the case of VA, most of its historic buildings are not individually listed on the National Register, but are considered to be included within a historic site or district. Therefore, the lack of a “contributing” category makes it unclear whether historic building status data are being reported consistently or accurately across agencies and whether the executive branch can reliably identify the total number of historic buildings it holds and distinguish those that are exceptionally significant to the nation and commensurate with national historic landmark status. GSA and NPS historic building officials also stated that FRPP data on the number of historic buildings they hold are inconsistent with data maintained within their internal historic buildings databases. The agencies have not fully reconciled their historic building databases with their real property databases, the latter of which are used to report to FRPP. FRPP data, therefore, cannot be used to assess the numbers of federal buildings individually designated or account for those that are “contributing buildings” within a larger historic site or district. 
Lastly, ACHP officials told us that they do not have access to FRPP data—despite their request to GSA—to conduct analyses on federal historic buildings. We found that FRPC allows access to agency FRPP data only when access is granted by individual agency data administrators. ACHP officials also noted it is difficult for ACHP to draw quantifiable summary data from individual agencies’ Preserve America submissions because agencies inconsistently report historic building data. ACHP officials further indicated that ACHP would like some consistency in how historic building data are reported and said the responsibility is on individual agencies to report uniform data as was intended by FRPP. The FRPP database’s lack of completeness and consistency for historic data is not consistent with sound data collection practices. We have long held that results-oriented organizations assure that the data they collect are sufficiently complete, accurate, and consistent to document performance and support decision making, as well as to collaborate with others who would benefit from the data. In June 2012, we reported that FRPC has not followed sound data collection practices in designing and maintaining the FRPP database, raising concern that the data are not a useful tool for describing the nature, use, and extent of excess and underutilized federal real property. For example, FRPC has not ensured that key data elements—including buildings’ utilization, condition, annual operating costs, mission dependency, and replacement value—are defined and reported consistently and accurately. As a result, we recommended that GSA, in collaboration with FRPC member agencies, develop and implement an action plan to improve FRPP, consistent with sound data collection practices, so that data collected are sufficiently complete, accurate, and consistent, and collaboration between agencies is effective. 
In addition to developing the database, Executive Order 13327 required FRPC to be a clearinghouse of best practices for real property management and establish performance measures to determine the effectiveness of federal real property management. The executive order specifically states that performance measures shall be designed “to enable the heads of executive branch agencies to track progress in the achievement of government-wide property management objectives, as well as allow for comparing the performance of executive branch agencies against industry and other public sector agencies.” FRPP has four data elements that FRPC considers performance measures: (1) utilization (overutilized, utilized, underutilized, and not utilized); (2) condition index (a general measure of the constructed asset’s condition at a specific point in time); (3) annual operating costs (expenses for recurring maintenance and repair costs, utilities, cleaning or janitorial costs, and roads or grounds expenses); and (4) mission dependency (the importance an asset brings to the performance of the agency’s mission). However, we reported in June 2012 that these performance measures are ineffective because they are not routinely linked to any performance goals, and FRPC guidance does not explain what constitutes acceptable performance on these measures. These performance measures, if tied to performance goals, could be useful to the agencies and OMB in the area of historic preservation, where OMB and FRPC could benchmark performance across agencies and potentially identify progress, areas of concern, or lessons learned. For example, of the three agencies we reviewed, only NPS had a performance metric reported within its fiscal year 2013 budget justification that showed the number and percentage of historic buildings in good condition for prior fiscal years, the planned goal for fiscal year 2012, and a proposed goal for fiscal year 2013. 
Furthermore, data on mission-dependent buildings with historic status could help agencies justify funding requests for those buildings’ maintenance and rehabilitation, and historic buildings with high operating costs could be candidates for sustainability investment. The data could also show how the condition of mission-dependent historic buildings may be either adversely or positively affected given reduced or increased funding scenarios. Lastly, complete and accurate data identifying agencies’ historic buildings would be needed to assess whether agencies have managed and provided for those buildings’ continued use or have pursued leases with persons or organizations to provide for the buildings’ reuse and preservation as called for by NHPA. In June 2012, we recommended that FRPP performance measures be linked to clear performance goals consistent with Executive Order 13327. GSA agreed with the recommendation and has begun to take action to address it. GSA also reported that it will propose refining the performance measures and limit the number of measures to ensure that only essential measures, linked to performance goals, are collected consistent with directives in Executive Order 13327. Improving historic building data could complement this recommendation, not only at the agency level for GSA, NPS, and VA, but for all executive agencies included in FRPP and in GSA’s capacity as the agency responsible for establishing and maintaining FRPP. Federal real property management is a challenging area, and requirements to preserve and manage historic buildings place added expectations on federal agencies that are stewards of many treasured assets. NHPA, executive orders, and other requirements establish the overall federal policy regarding historic buildings. 
Those requirements reflect that federal historic buildings are an important part of America’s heritage and should be preserved, protected, enhanced, and, where possible, adapted for contemporary use. In 2004, the President issued an executive order establishing the FRPC and requiring GSA to collect data from executive branch agencies describing the nature, extent, and use of federal real property. While this was a positive step, we reported in June 2012 that GSA needed to improve FRPP so that data are consistent and complete and collaboration among agencies is effective. GSA agreed with this recommendation and is taking steps to implement it. In the case of historic building data, FRPP data are similarly limited, which undermines the ability of agencies and stakeholders, including OMB and Congress, to use the data to strategically manage the historic building subset of the federal real property portfolio and try to address related challenges. Focusing on how to improve the historic building data in FRPP, in conjunction with GSA efforts under way to improve FRPP data on all federal buildings, would better equip stakeholders to make decisions about where to direct limited federal resources for historic preservation and foster greater accountability and transparency. 
We recommend that the Acting Administrator of GSA—in collaboration and consultation with ACHP, NPS, VA, and FRPC member agencies— ensure that the action plan being developed to improve FRPP data includes actions to improve historic-building data by addressing the following areas, at a minimum: determining whether changes are needed—to historic data elements or guidance—to ensure that data are consistently and accurately reported; developing, in FRPP fiscal-year summary reports, data that will better convey to the public and stakeholders—including OMB and Congress—a sense of the extent of historic buildings held by agencies, such as total numbers or percentages; and, facilitating ACHP’s access to FRPP data, as appropriate, so that ACHP can better fulfill its historic-building advisory role to Congress and the President. We provided a draft of this report to GSA, the Department of the Interior, VA, and ACHP for review and comment. GSA agreed with our recommendation and further reported that it has, in part, already taken action to rectify inconsistencies we found between GSA’s FRPP data and its internal data sources used for NHPA compliance and reporting required by Executive Order 13287. GSA also indicated it will: (1) assess and determine whether changes are needed to the FRPP historic status data element and guidance; (2) provide future data on historic buildings within GSA’s FRPP fiscal-year summary reports; and, (3) work with ACHP by sharing FRPP data and reports, as appropriate. GSA’s response is reprinted in appendix III. The Department of the Interior and VA provided technical comments, which we incorporated as appropriate. In commenting on our draft, ACHP said this report provides a useful analysis of these agencies’ programs and their stewardship of historic buildings. 
However, ACHP emphasized that the greatest challenge facing agencies’ federal buildings—irrespective of historic status—is that their buildings have not been maintained because of agencies’ decisions to defer maintenance. While we understand ACHP’s perspective, our assessment pointed to budgetary limitations as a challenge that can cause agencies to defer maintenance on historic buildings. Nonetheless, in recognition of ACHP’s comment, we have included in the report a reference to GAO’s past reporting on agencies’ deferred maintenance backlogs for federal buildings and have noted the link between budgetary limitations and deferred maintenance. It is also important to note that, in addition to budgetary limitations, as discussed in the report, the agencies face challenges in adapting some historic buildings to meet contemporary needs and in involving stakeholders on proposals—such as building reuse and demolition plans—that may adversely affect a historic building. ACHP’s response is reprinted in appendix IV. ACHP also provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Acting Administrator of GSA, the Secretaries of the Interior and VA, and the Executive Director of ACHP. Additional copies will be sent to interested congressional committees. We will also make copies available to others upon request, and the report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix V.
Our objectives were to identify (1) actions selected nondefense agencies have taken to manage historic federal buildings, and (2) any challenges they have faced. We identified three agencies for our review: (1) the General Services Administration (GSA); (2) the National Park Service (NPS) within the Department of the Interior; and (3) the Department of Veterans Affairs (VA). We selected GSA, in part, because it is the federal government’s principal real property steward for many federal agencies. In addition, GSA was selected because it also maintains the Federal Real Property Profile (FRPP) database that is used to identify and report on federal real property, including the historic status of federal buildings held by executive branch agencies. We selected NPS, in part, because it is responsible for the stewardship of most of the Department of the Interior’s historic buildings, many of which reside within the nation’s national historic parks, districts, and sites. In addition, we selected NPS because it establishes and manages the nation’s historic preservation standards such as the Secretary of the Interior’s: (1) Standards for the Treatment of Historic Properties with Guidelines for Preserving, Rehabilitating, Restoring, and Reconstructing Historic Buildings; (2) Standards for Rehabilitation & Illustrated Guidelines on Sustainability for Rehabilitating Historic Buildings; and (3) Standards and Guidelines for Federal Agency Historic Preservation Programs Pursuant to the National Historic Preservation Act. In addition, on behalf of the Secretary of the Interior, NPS manages the National Register of Historic Places, the nation’s official list of historic places, both publicly and privately held. Lastly, we sought and received recommendations from the Advisory Council on Historic Preservation (ACHP) on a third federal agency to add to our review.
Based on recommendations received from ACHP and our judgment and knowledge of the inventories of the agencies recommended, we selected VA because it is the steward for the nation’s historic veterans’ hospitals and many of its buildings are over 50 years old. In finalizing our selection, we estimated that the federally owned building portfolios of the three nondefense agencies encompassed over 33,500 buildings, including at least 10,000 identified historic buildings (based on our preliminary review of those agencies’ fiscal year 2010 FRPP data) and that these agencies’ real property portfolios would provide a diverse range of building types including office buildings, courthouses, park facilities, museums, and hospitals. To understand the issues and requirements related to federal historic preservation, we reviewed relevant laws, regulations, and executive orders governing how agencies should identify, report, and manage historic buildings in their portfolios. We reviewed NHPA and its implementing regulations as well as the following executive orders: (1) Executive Order 13287, entitled Preserve America; (2) Executive Order 13327, entitled Federal Real Property Asset Management; and (3) Executive Order 13514, entitled Federal Leadership in Environmental, Energy, and Economic Performance. Executive Order 13514 requires agencies, among a number of initiatives, to improve the sustainable performance of their existing buildings and ensure that all new construction, major renovation, or repair and alteration of federal buildings complies with the Guiding Principles for Federal Leadership in High Performance and Sustainable Buildings (Guiding Principles). We reviewed those Guiding Principles, as well as ACHP’s sustainability guidance to federal agencies entitled Sustainability and Historic Federal Buildings, and The Secretary of the Interior’s Illustrated Guidelines on Sustainability for Rehabilitating Historic Buildings.
In addition, we reviewed the aforementioned federal standards and guidelines on historic preservation managed by NPS on behalf of the Secretary of the Interior. We also reviewed past GAO reports on federal real property. To understand the challenges faced by agencies in managing their historic buildings and to identify agencies’ portfolio-wide efforts to preserve their historic buildings, we interviewed our selected agencies’ real property and preservation officials about their agencies’ preservation programs. We also reviewed agencies’ fiscal year 2013 budget requests, but we did not independently verify agencies’ fiscal year 2013 budget requirements. To gather detailed examples of selected agencies’ actions to manage historic buildings, we visited a nonprobability sample of 31 federally owned historic buildings held by GSA, NPS, and VA, in five metropolitan areas: (1) Boston, Massachusetts; (2) Chicago, Illinois; (3) Milwaukee, Wisconsin; (4) San Francisco, California; and (5) Washington, D.C.-Baltimore, Maryland. We selected buildings to visit based on a combination of input received from agencies’ preservation officials, ACHP, state historic preservation officers, and our own judgment as informed by our review of selected agencies’ preservation documents and their FRPP submissions. This approach yielded a diverse group of buildings in terms of use, age, size, and condition that provided illustrative examples of agencies’ broader policy initiatives and specific preservation and sustainability projects. Prior to our site visits, we reviewed selected agencies’ documentation on recent, current, or planned efforts to manage those historic buildings. To the extent documents were available, we requested and reviewed National Register nomination forms, facility condition assessments, historic structure reports, and sustainability scorecards for buildings we visited. 
In preparation for our visits, we also provided agency officials knowledgeable about our selected nonprobability sample of buildings with a series of questions, and asked for written responses, regarding their historic preservation, maintenance and repair, and sustainable improvements, if any. For example, we inquired about the current building condition and asked the officials to identify specific major renovation, rehabilitation, or sustainability projects that were undertaken in the buildings within the last 3 fiscal years, if any. While we reviewed projects that were in-process or recently completed in those buildings, we did not review the extent to which projects were within scope, cost, and schedule. We also inquired about agencies’ progress in meeting the government-wide sustainability Guiding Principles for their existing buildings, and specifically whether the buildings currently met the Guiding Principles. We also conducted a literature review about historic preservation and sustainability. Finally, to better understand the challenges that agencies may face, we spoke with three experts with academic or professional expertise in improving the sustainable performance of historic buildings. We selected these experts based on a combination of input received from the selected agencies’ preservation officials, ACHP, and our own judgment as informed by our literature search. To determine whether FRPP could be used to reliably identify the historic status of selected agencies’ federally owned buildings and the numbers of historic buildings held by those agencies, we reviewed GSA’s annual FRPP reporting guidance to executive branch agencies on how to report their historic buildings. We also obtained and analyzed selected agencies’ FRPP data submissions for fiscal year 2011, and other real property data such as agencies’ past and current Preserve America reports submitted to ACHP about the agencies’ historic buildings and their efforts to preserve those buildings.
We also interviewed agency officials about the data and reviewed FRPP guidance and other documents related to the agencies’ real property data and the FRPP database. In the case of GSA and NPS—which maintain separate historic building databases in addition to their respective agencies’ real property databases—we obtained and reviewed data from those agencies’ respective historic building databases about the historic status of their buildings and compared it with data the agencies reported to FRPP and in their reports to ACHP. We conducted this performance audit from November 2011 to December 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individuals named above, other key contributors to this report were David Sausville, Assistant Director; John Bauckman, Analyst-in-Charge; Lindsay Bach; Leia Dickerson; Colin Fallon; Catherine Kim; Hannah Laufe; Joshua Ormond; Crystal Wesco; and Elizabeth Wood.
FAA is responsible for ensuring safe, orderly, and efficient air travel in and around the United States. NWS supports FAA by providing aviation-related forecasts and warnings at air traffic facilities across the country. Among other support and services, NWS provides four meteorologists at each of FAA’s 21 en route centers to provide on-site aviation weather services. This arrangement is defined and funded under an interagency agreement. In performing its primary mission to ensure safe air travel, FAA reported that air traffic in the national airspace system exceeded 43 million flights and 745 million passengers in 2008. In addition, at any one time, as many as 7,000 aircraft—both civilian and military—could be aloft over the United States. In 2004, FAA’s Air Traffic Organization was formed to, among other responsibilities, improve the provision of air traffic services. More than 34,000 employees within FAA’s Air Traffic Organization support the operations that help move aircraft through the national airspace system. The agency’s ability to fulfill its mission depends on the adequacy and reliability of its air traffic control systems, as well as weather forecasts made available by NWS and automated systems. These resources reside at, or are associated with, several types of facilities: air traffic control towers, terminal radar approach control facilities, air route traffic control centers (en route centers), and the Air Traffic Control System Command Center. The number and functions of these facilities are as follows: 510 air traffic control towers manage and control the airspace within about 5 miles of an airport. They control departures and landings, as well as ground operations on airport taxiways and runways. 163 terminal radar approach control facilities provide air traffic control services for airspace within approximately 40 miles of an airport and generally up to 10,000 feet above the airport, where en route centers’ control begins. 
Terminal controllers establish and maintain the sequence and separation of aircraft. 21 en route centers control planes over the United States—in transit and during approaches to some airports. Each center handles a different region of airspace. En route centers operate the computer suite that processes radar surveillance and flight planning data, reformats the data for presentation purposes, and sends it to display equipment used by controllers to track aircraft. The centers control the switching of voice communications between aircraft and the center, as well as between the center and other air traffic control facilities. Four of these en route centers also control air traffic over the oceans. The Air Traffic Control System Command Center manages the flow of air traffic within the United States. This facility regulates air traffic when weather, equipment, runway closures, or other conditions place stress on the national airspace system. In these instances, traffic management specialists at the command center take action to modify traffic demands in order to keep traffic within system capacity. See figure 1 for a visual summary of the facilities that control and manage air traffic over the United States. The mission of NWS—an agency within the Department of Commerce’s National Oceanic and Atmospheric Administration (NOAA)—is to provide weather, water, and climate forecasts and warnings for the United States, its territories, and its adjacent waters and oceans to protect life and property and to enhance the national economy. In addition, NWS is the official source of aviation- and marine-related weather forecasts and warnings, as well as warnings about life-threatening weather situations. The coordinated activities of weather facilities throughout the United States allow NWS to deliver a broad spectrum of climate, weather, water, and space weather services in support of its mission. 
These facilities include 122 weather forecast offices located across the country that provide a wide variety of weather, water, and climate services for their local county warning areas, including advisories, warnings, and forecasts; 9 national prediction centers that provide nationwide computer modeling to all NWS field offices; and 21 center weather service units that are located at FAA en route centers across the nation and provide meteorological support to air traffic controllers. As an official source of aviation weather forecasts and warnings, several NWS facilities provide aviation weather products and services to FAA and the aviation sector. These facilities include the Aviation Weather Center, weather forecast offices located across the country, and 21 center weather service units located at FAA en route centers across the country. The Aviation Weather Center located in Kansas City, Missouri, issues warnings, forecasts, and analyses of hazardous weather for aviation. Staffed by 55 personnel, the center develops warnings of hazardous weather for aircraft in flight and forecasts of weather conditions for the next 2 days that could affect both domestic and international aviation. The center also produces a Collaborative Convective Forecast Product, a graphical representation of expected thunderstorms or related conditions at 2, 4, and 6 hours. This is used by FAA to manage aviation traffic flow across the country. The Aviation Weather Center’s key products are described in table 1. NWS’s 122 weather forecast offices issue terminal area forecasts for approximately 632 locations every 6 hours or when conditions change, consisting of the expected weather conditions significant to a given airport or terminal area, and are primarily used by commercial and general aviation pilots. The terminal area forecasts are updated every 3 hours for 35 key airports and every 2 hours for the airports in New York, Atlanta, and Chicago. 
NWS’s center weather service units are located at each of FAA’s 21 en route centers and operate 16 hours a day, 7 days a week (see fig. 2). Each center weather service unit usually consists of three meteorologists and a meteorologist-in-charge who provide strategic advice and aviation weather forecasts to FAA traffic management personnel. Under an interagency agreement, FAA currently reimburses NWS approximately $13 million annually for this support. The meteorologists at the center weather service units use a variety of systems to gather and analyze information compiled from NWS and FAA weather sensors. Key systems used to compile weather information include FAA’s Weather and Radar Processor, FAA’s Integrated Terminal Weather System, FAA’s Corridor Integrated Weather System, and a remote display of NWS’s Advanced Weather Interactive Processing System. Meteorologists at several center weather service units also use NWS’s National Centers—Advanced Weather Interactive Processing System. Table 2 provides a description of key systems. NWS meteorologists at the en route centers provide several products and services to the FAA staff, including meteorological impact statements, center weather advisories, periodic briefings, and on-demand consultations. These products and services are described in table 3. In addition, center weather service unit meteorologists receive and disseminate pilot reports, provide input every 2 hours to the Aviation Weather Center’s creation of the Collaborative Convective Forecast Product, train FAA personnel on how to interpret weather information, and provide weather briefings to nearby terminal radar approach control facilities and air traffic control towers. In recent years, FAA has undertaken multiple initiatives to assess and improve the performance of the center weather service units.
Studies conducted in 2003 and 2006 highlighted concerns with the lack of standardization of products and services at NWS’s center weather service units. To address these concerns, the agency sponsored studies that determined that weather data could be provided remotely using current technologies, and that private sector vendors could provide these services. In 2005, the agency requested that NWS restructure its aviation weather services by consolidating its center weather service units to a smaller number of sites, reducing personnel costs, and providing products and services 24 hours a day, 7 days a week. NWS subsequently submitted a proposal for restructuring its services, but FAA declined the proposal citing the need to refine its requirements. In December 2007, FAA issued revised requirements and asked NWS to respond with proposals defining the technical and cost implications of three operational concepts. The three concepts involved (1) on-site services provided within the existing configuration of offices located at the 21 en route centers, (2) remote services provided by a reduced number of regional facilities, and (3) remote services provided by a single centralized facility. NWS responded with three proposals, but FAA rejected these proposals in September 2008, noting that while elements of each proposal had merit, the proposed costs were too high. FAA requested that NWS revise its proposal to bring costs down while stating a preference to move toward a single center weather service unit with a backup site. As a separate initiative, NWS began a series of improvements in order to address FAA’s key concerns. Specifically, in April 2008, the agency initiated a program to improve the consistency of the center weather service units’ products and services. This program involved standardizing the technology, collaboration, and training for all 21 center weather service units and conducting site visits to evaluate and provide feedback to each unit. 
NWS reported that it completed these efforts in 2009. A summary of FAA’s key concerns and NWS’s efforts to address them is included in appendix II. After two requests for deadline extensions on a new proposal, NWS provided FAA with an updated proposal in June 2009 based on the two-site approach FAA had requested in September 2008. FAA responded to NWS’s proposal by requesting more information and stated that the agencies would work together to resolve issues. From September through November 2009, the agencies completed a series of meetings to address issues from the proposal and agreed that NWS would resubmit its proposal in December 2009 to consolidate the service units. In December 2009, FAA revised its requirements to reflect the agencies’ efforts aimed at improving center weather service operations. However, NWS did not submit its proposal in December 2009 to consolidate the center weather service units. According to NWS officials, they decided not to submit the proposal because (1) the NWS labor union and others raised concerns about consolidating offices, (2) NWS could implement technical improvements more quickly under the current organizational structure, and (3) the agency wanted to focus its efforts and resources on future weather system development rather than restructuring existing operations. Table 4 provides a chronology of the agencies’ assessment and improvement efforts. In January 2008, we reported on concerns about inconsistencies in products and quality among center weather service units. We noted that while both NWS and FAA have responsibilities for assuring and controlling the quality of aviation weather observations, neither agency monitored the accuracy and quality of the aviation weather products provided at center weather service units, performed annual evaluations of aviation weather services provided at en route centers, and provided feedback to the center weather service units. We recommended they do so. 
The Department of Commerce agreed with our recommendations, and the Department of Transportation stated that FAA planned to revise its requirements and that these would establish performance measures and evaluation procedures. In September 2009, we reported that the agencies were considering plans to consolidate 20 of the 21 existing center weather service units to two locations, but it was not clear whether and how the changes would be implemented. Moreover, we reported that NWS and FAA faced challenges in their efforts to improve the aviation weather structure, including achieving interagency collaboration, defining FAA’s requirements, and aligning any changes with the Next Generation Air Transportation System. We also identified three challenges the agencies would face in implementing their plans—developing a feasible schedule that includes adequate time for stakeholder involvement, undertaking a comprehensive demonstration to ensure no services are degraded, and effectively reconfiguring the infrastructure and technologies. We recommended that the agencies address these challenges, and NOAA and the Department of Transportation agreed with our recommendations. After developing and shelving four different proposals for restructuring the center weather service units over the last 5 years, NWS and FAA have reached agreement on how to improve aviation weather services. In March 2010, NWS proposed maintaining the current 21 center weather service units collocated at en route centers, increasing staffing at the Aviation Weather Center in order to provide remote services during the service units’ off-hours, and developing a new collaborative weather product. NWS estimated that these improvements would cost FAA about $3 million per year. This is in addition to the annual cost of maintaining the existing 21 centers. NWS also estimated that it would be able to implement the proposal within 21 months. 
FAA responded that it was not prepared to accept the proposal because of the increased costs. Subsequently, in July 2010, FAA and NWS reached an agreement on the steps the two agencies would take to improve aviation weather services. Specifically, FAA proposed and NWS agreed to continue the current center weather service units at each of the 21 en route centers through September 2011 and to take immediate steps to improve aviation weather services by (1) having the service units provide forecasts at 10 key FAA terminal radar approach control facilities and (2) providing around-the-clock coverage at all of the en route centers by having the local weather forecast office support the en route centers when the center weather service units are closed for the night—a practice that currently is used at selected en route centers. In addition, the agencies agreed to establish a joint team to baseline current capabilities and develop firm requirements for NWS products and services supporting FAA’s air traffic flow management out through 2015. The agencies expect that the joint team will establish an implementation plan by November 2010 and then begin to implement it. However, the agencies’ documentation of this agreement does not address the future locations of the center weather service units, or provide details and a schedule for the proposed improvements to services. As a result, it is not clear what will happen to the 21 service units after September 2011, when the immediate improvements in services will be in place, whether there are any costs associated with these steps, whether the benefits outweigh the costs, and who will pay for them. Until this agreement is further defined in writing and formalized between the two agencies, the risks remain that the agencies will misjudge their responsibilities and not fulfill their agreements.
According to best practices in the federal government and in industry, organizations should measure performance in order to evaluate the success or failure of their activities and programs. Performance measurement involves identifying performance goals and measures, establishing performance baselines by tracking performance over time, identifying targets for improving performance, and measuring progress against those targets. In January 2008, we recommended that NWS and FAA develop performance measures and track metrics for the products and services provided by center weather service units and that they provide feedback to the center weather service units so that they could improve their performance. Further, in September 2009, we recommended that the agencies approve their draft performance measures and establish performance baselines so that they could understand the effects of any changes from restructuring aviation weather services. Over the past year, NWS has made progress in identifying performance measures, tracking performance on selected measures, and reporting on the selected measures; however, the agency is not yet tracking or reporting on all applicable performance measures. In December 2008, FAA provided NWS with five measures of center weather service unit performance. Under the current interagency agreement, NWS is required to track and report to FAA on these measures. In addition, in its last two proposals, NWS proposed additional measures, two of which could be tracked under the current organizational structure using current products. We previously recommended that NWS immediately identify the current level of performance for the proposed measures that could be tracked under the current organizational structure, so that it would have a performance baseline for comparison should it decide to implement operational changes. The agency agreed with this recommendation. 
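The measurement cycle described above (establish a baseline from observations tracked over time, set an improvement target, and measure progress against that target) can be sketched in a few lines of code. The sketch below is purely illustrative: the measure, the participation rates, and the target are invented for the example and are not NWS figures.

```python
# Illustrative sketch of the performance-measurement cycle:
# baseline -> target -> progress. All data below are hypothetical.

def baseline(history):
    """Establish a baseline as the mean of the tracked observations."""
    return sum(history) / len(history)

def progress_against_target(history, current, target):
    """Return the share of the baseline-to-target gap closed so far."""
    base = baseline(history)
    if target == base:
        return 1.0  # no gap to close
    return (current - base) / (target - base)

# Hypothetical measure: a unit's participation rate in a collaborative
# forecast product, tracked quarterly during a baseline year.
participation_2009 = [0.70, 0.72, 0.68, 0.74]
current_rate = 0.82   # latest tracked observation
target_rate = 0.90    # improvement target

print(round(baseline(participation_2009), 3))                      # 0.71
print(round(progress_against_target(
    participation_2009, current_rate, target_rate), 3))            # 0.579
```

Without the tracked history, neither the baseline nor the progress figure can be computed, which is the report's point about why tracking must precede evaluation.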
Table 5 describes the performance areas applicable to the current center weather service unit structure. NWS has started tracking performance for three of the seven measures and is partially tracking a fourth. Specifically, NWS has tracked data on each center weather service unit’s (1) participation in the Collaborative Convective Forecast Product, (2) organizational service provision, and (3) customer satisfaction. Further, it has partially tracked data on format consistency, by collecting data on one of two required products. However, the agency has not tracked data on the other measures for a number of reasons. For example, the agency did not track the format consistency for the second of the two required products because, until recently, the briefing has not had a consistent format. Also, the agency is not tracking training completion because it has not yet determined what standardized training will be provided. For the forecast accuracy measure, agency officials stated that they do not currently have the means to track this measure, but that they are developing a tool to do so. Of the measures it is tracking, NWS has established baselines and reported its results on two measures and has partially done so for two other measures. Specifically, NWS has established baselines on each center weather service unit’s participation in the Collaborative Convective Forecast Product and its organizational service provision. In addition, NWS has partially established a baseline on the format consistency measure in that it has historical data for one of the two required products. However, because it has not tracked the format consistency of the second product, NWS has not established a complete baseline for that measure. Further, while NWS has calculated customer satisfaction scores from its 2009 site evaluations, it does not yet have a reliable baseline because it has not yet matured its approach to documenting this measure. 
Specifically, NWS changed its approach during its 2010 site evaluations, which will make it harder to compare scores from year to year. Moreover, the agency mixed positive and negative findings to come up with its rating scores for some sites, thereby rendering the 2009 scores at selected sites ineffective at measuring a site’s performance. Figure 3 identifies NWS’s efforts to track data, develop baselines, and report on the performance measurement areas. It is important for NWS and FAA to track performance in the identified measures in order to understand the value currently provided and to assess the impact of any changes they make to operations. Reporting also helps improve performance. For example, after reporting on its performance in product participation and organizational service provision for 2009, NWS noted significant improvements in 2010. Until the agencies track and develop a performance baseline for all applicable measures, they will be limited in their ability to evaluate progress that has been made and whether or not they are achieving their goals. In addition, until NWS regularly reports on its performance, the agencies lack the information they need to determine what is working well and what needs to be improved. Moreover, as the agencies refine their approach to performance measurement, it will be important to revisit and refine the performance measures to ensure an appropriate mix of process- and outcome-oriented measures. For example, NWS could consider measuring the number of aircraft incidents attributed to inaccurate aviation weather forecasts or the number of weather-related delays as a percentage of all delays. 
In September 2009, we identified three challenges that FAA and NWS faced in modifying the current aviation weather structure: (1) achieving interagency collaboration, (2) defining requirements, and (3) aligning changes with the Next Generation Air Transportation System (NextGen)—a long-term initiative to increase the efficiency of the national airspace system. The agencies have taken initial steps to collaborate, refine requirements, and look for ways to align their plans with NextGen, but they have not yet fully addressed the challenges. Until these fundamental challenges are addressed, the agencies are unlikely to achieve significant improvements in the aviation weather services provided at en route centers. We have previously reported on key practices that can help enhance and sustain interagency collaboration. The practices generally consist of two or more agencies defining a common outcome, establishing joint strategies to achieve the outcome, agreeing upon agency roles and responsibilities, establishing compatible policies and procedures to operate across agency boundaries, and developing mechanisms to monitor, evaluate, and report the results of collaborative efforts. In September 2009, we reported that NWS and FAA had not defined a common outcome for modifying the aviation weather services provided at en route centers, established joint strategies, or agreed upon their respective responsibilities. We recommended that the agencies complete these activities. NOAA and the Department of Transportation agreed with our recommendation. Since September 2009, NWS and FAA have made progress in defining a common outcome, but have not yet established joint strategies to achieve the outcome or agreed upon agency responsibilities. 
Specifically, in July 2010, the two agencies defined a common outcome when they reached an agreement to continue the current center weather service unit configuration at each of the 21 en route centers and to take immediate steps to improve aviation weather services. The two agencies also plan to form a team that will develop an implementation plan by November 2010. However, the agreement does not provide the details needed to establish joint strategies and only provides general agency responsibilities. Until the agencies establish joint strategies and agree on respective agency responsibilities, it may prove difficult to move forward in efforts to improve aviation weather services. According to best practices of leading organizations, requirements describe the functionality needed to meet user needs and perform as intended in the operational environment. A disciplined process for developing and managing requirements can help reduce the risks associated with developing or acquiring a system or product. In September 2009, we reported that FAA’s requirements were unstable and recommended that the agencies establish and finalize requirements for aviation weather services at en route centers. NOAA and the Department of Transportation agreed with our recommendation. FAA updated its requirements in December 2009 based on the work that the two agencies did in the fall of 2009. However, these changes were nullified by the more recent decision to continue with 21 center weather service units. In July 2010, the two agencies agreed to establish a joint team to develop firm requirements for NWS products and services supporting FAA’s air traffic flow management out to 2015, including those provided by the center weather service units. While this is an important step, significant work remains to be done to revise these requirements. Until the requirements are in place, the agencies may find it difficult to move forward in efforts to improve aviation weather services. 
In September 2009, we reported that neither FAA nor NWS had ensured that their restructuring plans fit with the national vision for NextGen—a long-term initiative to transition FAA from the current radar-based system to an aircraft-centered, satellite-based system. We recommended that the agencies ensure that any proposed organizational changes are aligned by seeking a review by the Joint Planning and Development Office, the office responsible for planning and coordinating NextGen. NOAA and the Department of Transportation agreed with our recommendation. Among other agreements in July 2010, the two agencies plan to work together to develop requirements and an implementation plan that extends through 2015—the NextGen Midterm Operating Capability date—by November 2010. However, because this plan has not been developed or approved, it is not clear that future actions will be aligned with NextGen. As NWS and FAA discuss the current proposal and plan improvements to aviation weather services, it will be important for the agencies to continue to ensure alignment with the long-term goals of NextGen. After many years of proposals and counterproposals for improving the center weather service units, NWS and FAA recently agreed to continue the current center weather service unit configuration at each of the 21 en route centers through September 2011 and to take immediate steps to improve aviation weather services. However, many questions remain about what will happen, when, and at what cost. Given the long history of unresolved issues between FAA and NWS regarding the center weather service units, it is more important than ever that the two agencies be extremely clear on what their commitments entail. An important component of any effort to improve operations is a solid understanding of current performance. 
While NWS has made progress in measuring the performance of the center weather service units, it is not adequately documenting performance baselines or reporting on several of its performance measures. Further, the agency has begun efforts to measure customer satisfaction, but the process is immature, and the results are unreliable. Specifically, NWS has changed its approach to the annual evaluations, making it difficult to compare performance from year to year, and its scoring process mixes positive and negative findings for several sites. As a result, the scores may not accurately reflect each center’s performance. Until NWS has a solid understanding of the current level of performance, it will be limited in its ability to evaluate what progress has been made and whether or not it is achieving its goals. As the agencies move forward with plans to make aviation weather services more efficient, they continue to face challenges, including a record of false starts on interagency collaboration, unstable requirements, and a lack of assurance that operational changes will align with the future vision of NextGen. Until these challenges are fully addressed, the agencies will likely find it difficult to make meaningful changes in aviation weather services. To improve the aviation weather products and services provided at FAA’s en route centers, we are making three recommendations to the Secretaries of Commerce and Transportation. 
Specifically, we recommend that the Secretaries direct the NWS and FAA Administrators to define, document, and sign the agencies’ recent agreements on (1) the locations of the center weather service units, (2) immediate improvements in aviation weather services and operating hours, and (3) the development of an implementation plan for improvements through 2015; ensure that NWS regularly tracks progress, documents performance baselines, and reports on its format consistency, forecast accuracy, and training performance measures; and ensure that NWS develops a reliable customer satisfaction baseline by refining the questions used during annual evaluations, so that comparable information is collected from year to year, and revising the scoring process to ensure that scores accurately reflect each center’s performance. In addition, we are reiterating our prior recommendations to the two agencies to address key challenges in achieving interagency collaboration, defining requirements, and aligning any organizational changes with plans for NextGen. We received written comments on a draft of this report from the Secretary of Commerce, who transmitted NOAA’s comments (see app. III). In its comments, NOAA stated that our report is generally representative of challenges facing NWS and FAA in the execution of aviation weather services provided by the center weather service units. The agency agreed with our recommendations and identified plans to implement selected parts of the recommendations. Specifically, NOAA reiterated its plan to form a joint NWS/FAA team to determine weather requirements for traffic flow management and to implement products and services through the year 2015. NOAA stated that this team’s results will serve as additional documentation of the agreements. In addition, NOAA reported that it plans to begin measuring format consistency in September 2010 and forecast accuracy in December 2010. 
NOAA also noted that it was using 2009 site evaluations as the basis for its scoring and that the 2009 results would serve as a baseline for comparison to 2010 and subsequent results. However, as we discuss in the report, our analysis of the 2009 site evaluations and scoring process found that the results were not reliable because the process for collecting information on customer service was inconsistent, and the scores did not always accurately reflect the centers’ performance. As a result, the 2009 scores are not useful as a baseline or as a feedback tool. Moving forward, as NOAA analyzes the results from its ongoing 2010 site evaluations, it will be important to ensure that the scores accurately reflect each center’s performance. Further, in future years, it will be important to ensure that comparable information is collected from year to year so that a reliable performance baseline can be established. The Department of Transportation’s Director of Audit Relations provided comments on a draft of this report via e-mail. In those comments, he noted that the department agreed to consider our recommendations. Both departments also provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Commerce, the Secretary of Transportation, the Director of the Office of Management and Budget, and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-9286 or by e-mail at pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to (1) determine the status of the agencies’ efforts to restructure aviation weather services and products, (2) assess the agencies’ progress in establishing performance baselines in order to measure the effect of any changes, and (3) evaluate plans to address key challenges. To determine the status of the agencies’ efforts to restructure aviation weather services and products, we analyzed Federal Aviation Administration (FAA) and National Weather Service (NWS) documentation, including FAA’s requirements for center weather service units, the interagency agreement between FAA and NWS, and NWS’s proposals to meet FAA needs for center weather service units. We also interviewed officials from both agencies to discuss their plans and status in reaching a decision on proposed changes. To assess the agencies’ progress in establishing performance baselines, we identified the agencies’ previous efforts to establish baselines and evaluated the extent to which they have made progress in doing so. We analyzed NWS’s approach to measuring center weather service unit performance and compared its performance measurement practices with guidance and best practices in performance management identified by government and industry. Specifically, we assessed the agencies’ actions taken to identify performance measures, track them, establish baselines of performance, and report on those baselines. We also assessed the reliability of the performance data that NWS reported. Specifically, for the customer satisfaction measurements, we analyzed supporting data and calculated customer satisfaction scores using NWS’s guidance for developing scores. We then compared the scores we calculated with NWS’s scores. 
In instances where our scores did not match NWS’s, we interviewed agency officials in order to determine why NWS’s scores did not match our own, focusing on four sites with the largest number of findings. We found that the agency’s customer satisfaction data were not reliable. For the other reported measures, we evaluated supporting data and interviewed responsible agency officials to determine the agency’s processes for validating the data. We found that the data reported for these performance measures were sufficient to meet our reporting purposes. To evaluate plans to address key challenges identified in our prior report, we reviewed agency documents including FAA requirements, an NWS proposal, plans for the Next Generation Air Transportation System (NextGen), and FAA’s response to NWS’s proposal. We compared agency efforts with leading practices in industry and government on interagency collaboration and system development. In addition, we interviewed the FAA contracting officer’s technical representative for the center weather service units to discuss the challenges the agency would have in implementing NWS’s proposal, as well as the agency’s plans to ensure requirements were stabilized. We also interviewed NWS officials to discuss their plans for aligning their system development initiatives with NextGen. We also interviewed the co-chair of the weather working group of the Joint Planning and Development Office to determine whether the office had reviewed NWS’s proposal and if the office had concerns about the proposal’s impact on NextGen. We conducted our work at National Oceanic and Atmospheric Administration (NOAA) and FAA facilities in the Washington, D.C., metropolitan area. We conducted this performance audit from October 2009 to September 2010, in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 6 lists concerns that FAA identified in a series of studies between 2003 and 2006, as well as the steps that NWS has taken to address these concerns. In addition to the individual named above, Colleen Phillips, Assistant Director; Neil Doherty; Rebecca Eyler; Joshua Leiling; and Jessica Waselkow made key contributions to this report. | The National Weather Service's (NWS) weather products are a vital component of the Federal Aviation Administration's (FAA) air traffic control system. In addition to providing aviation weather products developed at its own facilities, NWS also provides on-site staff at each of FAA's en route centers--the facilities that control high-altitude flight outside the airport tower and terminal areas. NWS's on-site staff is called a center weather service unit. For several years, NWS and FAA have been exploring options for improving the aviation weather services provided at en route centers. GAO agreed to (1) determine the status of the agencies' efforts to restructure aviation weather services, (2) assess the agencies' progress in establishing performance baselines in order to measure the effect of any changes, and (3) evaluate plans to address key challenges. To do so, GAO evaluated agency progress and plans and compared agency efforts with leading practices. 
After developing and shelving four proposals for restructuring the center weather service units over the last 5 years, in July 2010, senior NWS and FAA officials agreed to continue the current center weather service units at each of the 21 en route centers through September 2011 and to take immediate steps to improve aviation weather services by (1) having the service units provide forecasts for 10 key FAA terminal radar facilities and (2) having nearby weather forecast offices support FAA's en route centers when the service units are closed for the night. In addition, the agencies agreed to establish a joint team to baseline current capabilities and develop firm requirements for aviation weather services supporting air traffic flow management. While this agreement is important, the details have not been fully defined. Thus, it is not yet clear what will happen to the 21 service units after September 2011, when the immediate improvements in services will be in place, whether there are any costs associated with these steps, and who will pay for them. Until the two agencies further define their plans, the risk remains that the agencies will misjudge their responsibilities and not fulfill their agreements. FAA and NWS have made progress in identifying performance measures for the weather service units located at FAA en route centers, and NWS is beginning to track its service units' performance. However, NWS has not yet tracked, established baselines for, and reported to FAA on all applicable performance measures. Specifically, of seven possible performance measures, NWS is tracking performance for three of the measures and partially tracking a fourth measure. Of these four measures, the agency has established a sound baseline and reported on two of these measures and has made partial progress on two others. The agency is not tracking performance, documenting baselines, or reporting on three of the measures because it has not yet determined how to track them. 
Without an understanding of the current level of performance of the identified measures, the agencies will be limited in their ability to evaluate what progress has been made. In addition, until NWS regularly reports on its performance, the agencies lack the information they need to determine what is working well and what needs to be improved. In September 2009, GAO identified three challenges in modifying NWS's aviation weather services provided at FAA's en route centers: achieving interagency collaboration, defining requirements, and aligning changes with the Next Generation Air Transportation System (NextGen)--a long-term initiative to increase the efficiency of the national airspace system. The agencies have not yet fully addressed these challenges. Specifically, while senior agency officials recently agreed on how to proceed, work remains to be done to refine requirements, develop and execute an implementation plan, and ensure that improvements are aligned with the long-term vision for NextGen. Until these fundamental challenges are addressed, the agencies are unlikely to achieve significant improvements in the aviation weather services provided at en route centers. GAO recommends that the Departments of Commerce and Transportation define their agreements, refine performance management processes, and address key challenges. In commenting on a draft of this report, Commerce agreed with GAO's recommendations and identified plans to address them; Transportation agreed to consider the recommendations. 
The potential needs of the reentry population vary and generally cross over several areas, as shown in figure 1. For example, according to the BOP Director’s statement to Congress in March 2012, most inmates need assistance with job skills, vocational training, education, substance abuse treatment, and parenting skills if they are to successfully reenter society. Further, according to the Federal Interagency Reentry Council, about 66 percent of inmates have substance abuse or dependence issues, and 24 percent have mental illness issues. In addition, according to various Urban Institute reports on reentry, between 40 and 54 percent of former inmates were not able to obtain employment within 7 to 10 months of release. In addition, former inmates are subject to a wide variety of legal and regulatory sanctions and restrictions, which are referred to as collateral consequences. BOP provides reentry services to inmates within federal prisons (see app. I for a list of services). Other federal agencies, through their reentry grant program funds, assist state and local entities in providing reentry services to members of the reentry population who may return to their communities. For example, a community nonprofit organization may receive a federal grant to help members of the reentry population develop job skills following their release from prison. Services funded through such a grant may include job placement or vocational training in fields such as construction. Federal grant programs are generally created by statute and funded through appropriations. Competitive grants are announced through solicitations—or announcements to applicants of funding opportunities—and a single program may award funding through multiple solicitations. Once a grant is awarded, statutes may require that a primary grant recipient—that is, the one to whom the federal agency makes the original award—then award a portion of its grant to a subgrantee. 
Where statutes do not require subgranting, a grantee may voluntarily choose to award all or a portion of its funds to subgrantees. Further, federal agencies’ monitoring of grantee performance is important to help ensure that grantees are meeting program and accountability requirements. Table 1 describes the phases of the federal grant life cycle and the common activities agencies engage in within each phase. Federal grants to assist with reentry efforts have been in place for several years and have had a number of incarnations. Two of these former efforts are the Serious and Violent Offender Reentry Initiative (SVORI) and the Prisoner Reentry Initiative (PRI). SVORI—a $300 million collaborative effort among DOJ, Labor, HHS, and the Departments of Education and Housing and Urban Development—began in 2002. The goal of the SVORI grant program was to reduce recidivism among high-risk offenders—those who faced multiple challenges upon returning to the community from incarceration. SVORI concluded in fiscal year 2005, but its goals continued through the PRI, which DOJ and Labor administered. The PRI grant program focused on reducing recidivism by helping former inmates find work and providing them access to other critical services in their communities. PRI concluded in fiscal year 2008 when its appropriation expired. Since PRI’s conclusion, DOJ, Labor, and HHS each have implemented grant programs that support reentry services at the state and local levels. The Second Chance Act of 2007 authorizes the Attorney General to administer federal grants to state and local government agencies, territories, or Indian tribes, or any combination thereof, in partnership with stakeholders, service providers, and nonprofit organizations to provide employment assistance, substance abuse treatment, housing, mentoring, and other services that can help reduce recidivism. DOJ administers these grants through the SCA program and has awarded funding through a number of SCA solicitations. 
Under the Second Chance Act of 2007 and the Workforce Investment Act of 1998, Labor implemented its RExO program and has awarded funding through a few RExO solicitations. RExO is designed to strengthen the communities to which the majority of former inmates return through an employment-centered program that incorporates mentoring, job training, and other comprehensive transitional services. This program seeks to reduce recidivism by helping former inmates find work when they return to their communities. Finally, HHS developed the ORP solicitation under its Programs of Regional and National Significance grant program. The purpose of ORP is to expand or enhance substance abuse treatment and related recovery and reentry services to former inmates returning to the community. In 2010, we were directed to identify programs, agencies, offices, and initiatives with duplicative goals and activities within departments and government-wide and report annually to Congress. In March 2011 and February 2012, we issued our first two annual reports in response to this requirement. In our 2012 report and a subsequent follow-on report, we found that of the 253 grant solicitations that DOJ issued in fiscal year 2010, there was overlap across 10 justice areas, including corrections, recidivism, and reentry. We reported that this overlap contributed to the risk of unnecessarily duplicative grant awards for the same or similar purposes. We also reported that DOJ generally lacked awareness of the extent to which its grant programs overlapped and thus was not positioned to minimize the risk of potential unnecessary duplication before making grant awards. In the July 2012 report that expanded on these findings, we recommended, among other things, that DOJ assess its grant programs for overlap and that DOJ require its grant applicants to report past, current, and prospective federal funding they have received or plan to receive. 
DOJ agreed with our recommendations and has begun to take steps to implement them, such as exploring options to carry out an assessment to determine the extent of unnecessary duplication, if any, and the risk associated with unnecessary program duplication. DOJ, Labor, and HHS separately provided new or continuation grant funding to support direct services to the adult reentry population through nine grant programs in fiscal year 2011. Since more than one federal agency is involved in this same broad area of national interest, these programs are fragmented. As shown in table 2, these agencies awarded about $630 million to new grantees in that year. In some cases, the program is exclusively for reentry—as is the case with Labor’s RExO program. In other instances, such as DOJ’s Edward Byrne Memorial Justice Assistance Grant program, grantees may use the money for reentry-related services, but they may also use it for other criminal justice- related matters, such as indigent defense. Fragmentation of these federal grant programs is due in part to the legislative creation of the programs. For example, under the Second Chance Act of 2007, DOJ is directed to administer federal grants to provide employment assistance, substance abuse treatment, housing, mentoring, and other services that can help reduce recidivism. Accordingly, DOJ developed the SCA grant program, and issued a variety of solicitations under this grant program. While HHS is required to address priority substance abuse treatment needs of regional and national significance, the Secretary may carry out these activities directly or through grants or cooperative agreements, and accordingly, HHS developed the ORP solicitation. 
When considering, collectively, which applicants are eligible for the grant programs, the extent to which the reentry population is the sole target of the grant programs’ services, and the primary services these grant programs fund, we found that overlap across the nine programs was minimal. Therefore, the risk of duplication—when two or more agencies or programs are engaged in the same activities, provide the same services to the same beneficiaries, or provide funding for the same purpose—is low. With respect to applicant eligibility, because there are three primary categories of applicants—state and local governments; tribal governments; and private, nonprofit, and community-based organizations—there is some overlap in this area. Specifically, as illustrated in table 3, five of the nine grant programs extended eligibility to all three categories of applicants. However, one allowed only state and local government applicants, and another allowed only private, nonprofit, or community-based applicants. Analyzing the data from the vantage point of the applicants themselves, state and local government agencies could apply to eight of the nine programs; tribal governments could apply to seven; and private, nonprofit, or community-based organizations could apply to six. Nevertheless, with respect to the extent to which the grant programs targeted the reentry population, we found greater variation and less overlap. Across the nine programs, as table 4 illustrates, three restricted, or targeted, their funds exclusively for use in assisting the reentry population. These were DOJ’s SCA program, Labor’s RExO program, and HHS’s Health Improvement for Re-entering Ex-offenders Initiative. Another five programs offered a range of solicitations, but at least one of these programs’ solicitations exclusively targeted the reentry population. For example, HHS issued a solicitation for ORP under its Programs of Regional and National Significance Program.
Last, one program—DOJ’s Edward Byrne Memorial Justice Assistance Grant Program—was so broad as to encompass reentry among a number of other criminal justice or corrections uses. Since more than half of the programs target populations other than the reentry population, the overlap in this area is minimal. We also found greater variation, and thus less overlap, when assessing the primary services these nine grant programs fund, as shown in table 5. Across the nine programs, one grant program covered a wide range of reentry services; two programs’ primary services were mental health and substance abuse; one program’s primary services were employment and life, family, and parenting skills; and the remaining five programs had one or no primary use of funding. For example, DOJ’s Residential Substance Abuse Treatment for State Prisoners primarily funded substance abuse treatment for state prisoners, and Labor’s RExO program primarily funded services for employment assistance. Analyzing the data from the vantage point of the primary services, the greatest number of programs—four of the nine—focused funding primarily on substance abuse treatment, a different grouping of three programs focused its funding on health issues, another set of three focused on mental health and substance abuse treatment, and another set of three focused on employment. Because of the range in primary services that these programs fund, the overlap in this area is minimal as well. When considering the three areas together—applicant eligibility, targeting of services, and primary services funded—the overall overlap is minimal. Specifically, there were variations in the applicant eligibility standards and target populations, even when grant programs allowed spending for the provision of similar services.
For example, Labor’s reentry program limits eligibility to private, nonprofit organizations that will use the funds primarily to assist current or former inmates—residing in or released from any facility—with their employment needs. In contrast, one of DOJ’s reentry programs limits eligibility to governmental entities that will use the funds primarily to assist current or former inmates—residing in or released from state, local, or tribal facilities—with their substance abuse treatment needs. As we have previously reported, having multiple agencies with varying expertise involved in delivering services can be advantageous. For example, agencies may be better able to tailor programs to suit their specific missions and needs. We have also previously reported that overlap among grant programs may be desirable because such overlap can enable granting agencies to leverage multiple funding streams to serve a single purpose. For example, according to DOJ officials, they encourage grantees to use multiple streams of funding to fully implement their projects when local and federal funding is limited. Further, federal agency officials from DOJ, Labor, and HHS stated that reentry can be enhanced by coinvestment—where a variety of entities in one community are receiving funds from multiple sources to assist with reentry—as these reentry programs can complement one another. We observed the benefits of this coinvestment when we interviewed grantees. For example, one of the nine grantees we interviewed received funds in 2011 from two different grant programs—ORP and RExO. These two funding streams helped the grantee provide both substance abuse treatment and employment assistance to the reentry population it served. Another grantee received an HHS Healthy Marriage Promotion and Responsible Fatherhood Grant in fiscal year 2011, and also received a RExO grant in 2012.
The former assisted fathers reentering the community to develop parenting, relationship, and money management skills, while the latter grant would be used to assist both male and female former inmates with obtaining employment. Further, a few grantees stressed that the reentry population had various needs and that it is important that not just one need be met, but that the full array of services be available to prevent recidivism. According to Labor officials, given the volume of ex-offenders that are released each year, competition for limited reentry assistance from service providers in their communities is stiff. Of the more than 700,000 inmates released each year, according to each agency’s most recent annual data, the SCA program provided services to approximately 6,600; the RExO program provided services to about 7,500; and the ORP program provided services to about 3,300. Although the overlap is minimal across applicant eligibility, program targeting, and the services the grant programs fund—and the risk for duplication is therefore low—we have previously reported that the existence of overlapping grant programs is an indication that agencies should increase their awareness of where their funds are going. We have also reported that in addition to increasing their individual awareness, granting agencies should coordinate to ensure that any resulting duplication in grant award funding is purposeful rather than unnecessary. According to DOJ officials, it is in the best interest of each agency to know where there is active overlap between existing inmate reentry projects, as this allows for coordination of service delivery and the leveraging of federal resources, if appropriate. As we discuss in the next section of this report, DOJ, Labor, and HHS have implemented a number of mechanisms, partly in recognition of the overlap that does exist, to coordinate their granting efforts.
Furthermore, officials acknowledge that even more can be done to increase awareness of the flow of federal funds and manage the risk, however low it may be, of unnecessary duplication. With acknowledgment of some overlap, DOJ, Labor, and HHS have taken a variety of steps to coordinate their reentry efforts as a means to prevent unnecessary duplication and share promising practices. The steps are consistent with best practices for interagency collaboration, and include intra- and interagency working groups, the collective Federal Interagency Reentry Council, and a national resource center to obtain information, such as promising practices. Intra-agency coordination. Recognizing some overlap across their grant programs, both DOJ and HHS developed intra-agency working groups to internally coordinate their reentry efforts. For example, DOJ launched Project Reentry in 2010 to “focus federal resources on increasing public safety and maximizing the efficient use of public safety dollars by reducing recidivism rates.” According to DOJ officials, DOJ has some of the same members on Project Reentry as it has on the Federal Interagency Reentry Council to ensure that communication and collaboration are in place between the two groups. According to DOJ officials, Project Reentry provides opportunities for DOJ components to communicate; coordinate; brainstorm; and implement projects, initiatives, and ideas focused on improving outcomes in prisoner reentry. Efforts of Project Reentry include organizing workshops on reentry issues and supporting reentry courts by developing a tool kit on reentry. According to HHS officials, in 2010, an HHS working group developed an agency-wide inventory of HHS efforts to assist incarcerated and reentering individuals and their families.
According to an HHS official from the office that coordinated the inventory efforts, the primary purpose of the inventory is to serve as a resource document so that HHS officials are aware of what projects are going on and who is working on them. Although the official stated that the working group no longer has regular meetings, members now informally coordinate and participate in the Federal Interagency Reentry Council. Interagency coordination. Agency officials from DOJ, Labor, and HHS report that they have developed strong partnerships with their counterpart grant makers as a result of prior collaborative initiatives, such as SVORI and PRI. Although officials from DOJ and HHS reported that some of this grant coordination is informal and ad hoc, DOJ, Labor, and HHS have developed more formal and ongoing coordination mechanisms, as well. For example, DOJ’s Bureau of Justice Assistance and HHS’s Substance Abuse and Mental Health Services Administration first developed a memorandum of understanding in 2009 to improve formal coordination and communication in various programmatic areas, including reentry. Specifically, the agreement states that these agencies will coordinate on the development of grant solicitations, grantee conferences, and the vetting of relevant publications, among other things. Reference to this agreement is also included in subsequent ORP grant solicitations, stating that these agencies “share a mutual interest in supporting and shaping offender reentry-treatment services, as both agencies fund ‘offender reentry’ programs . . . ORP grantees will be expected to seek out and coordinate with any local federally-funded offender reentry initiatives including ‘Second Chance Act’ offender reentry programs, as appropriate.” The memorandum assists these agencies in establishing a mutually reinforcing or joint strategy, consistent with best practices for interagency collaboration.
Agency officials reported that their interagency coordination has encouraged personal relationships among grant-administering staff, and as a result, they are in contact at various phases in the grant life cycle. For example, officials from all three agencies said they are sharing some draft grant solicitations with one another to obtain feedback before issuing them. DOJ and Labor officials stated that they share the solicitations when the subject matter is relevant and not on a routine basis with all federal agencies. DOJ officials also stated that they are sharing lists of funded grant recipients with Labor, and that they publicly announce grant award decisions. Federal Interagency Reentry Council. To enhance coordination across the federal agencies involved in reentry activities, the council’s working group has taken several actions since its inception in 2011. Consistent with best practices for interagency collaboration, the council has helped agencies to define and articulate a common outcome, establish mutually reinforcing or joint strategies, identify and address needs by leveraging resources, and agree on roles and responsibilities. Specifically, the Federal Interagency Reentry Council has inventoried all major federal reentry programs, including grant programs that supported reentry services in fiscal years 2009 and 2010, and Federal Interagency Reentry Council officials stated that they continue to update the inventory to include resources available in 2011 and future years. According to HHS officials, the council modeled this effort after HHS’s initiative to develop its intra-agency inventory. Further, HHS officials stated that understanding what resources are available is the first step to preventing unnecessary duplication. The council has also convened research staff from 12 of its member agencies to regularly share information about reentry research and identify opportunities for research collaboration.
Supporting the collaborative efforts of the council, officials from HHS, DOJ, and the Department of Commerce’s Census Bureau convened a research conference in January 2012 to discuss developing and improving federal household survey measures relating to incarceration. According to an HHS official, such measures would increase knowledge of the effects of incarceration and reentry on individuals and their families. Working with the Office of Management and Budget, the council also developed an interagency intranet site, which allows all federal agencies to share key documents and resources. The site includes PowerPoint briefings and reentry-related recommendations. In addition to its efforts to coordinate across federal reentry grant programs, according to member agency officials, the Federal Interagency Reentry Council has been focused on reducing the barriers that exist for the reentry population. For example, the council has taken several actions to address collateral consequences of criminal convictions—these are the laws and policies that restrict former inmates from things such as employment, welfare benefits, access to public housing, and eligibility for student loans for higher education. Such collateral penalties pose substantial barriers to an individual’s social and economic advancement and can challenge successful reentry. Appendix II provides a summary of the council’s efforts to reduce reentry barriers and to achieve its other goals. The National Reentry Resource Center. The National Reentry Resource Center, provided for by the Second Chance Act, was established in 2008. DOJ partially funds the center, and under a cooperative agreement, the Council of State Governments Justice Center manages it.
The center’s staff provide education, training, and technical assistance to states, tribes, territories, local governments, service providers, nonprofit organizations, and corrections institutions working on reentry issues. The National Reentry Resource Center’s mission is to advance the reentry field through knowledge transfer and dissemination and to promote evidence-based best practices. Some of the activities the National Reentry Resource Center staff, along with key stakeholders, have undertaken include the development of the Reentry Service Directories, the National Criminal Justice Initiatives Map, a library of reentry resources, and a website known as the What Works in Reentry Clearinghouse, among other things. Reentry Service Directories. In 2009, the National Reentry Resource Center catalogued state-led reentry efforts and launched a nationwide online directory of state reentry coordinators. Understanding the important role local governments play in reentry, in partnership with other stakeholders, the center has expanded the directories to include city- and county-led initiatives. National Criminal Justice Initiatives Map. Taking the inventory on federal reentry resources that the Federal Interagency Reentry Council assembled, the National Reentry Resource Center developed an online, interactive map that highlights major federal reentry initiatives and identifies reentry grantees in every state. The map seeks to provide a place-based catalog of national initiatives and programs designed to reduce the recidivism rates of people returning from prison, jail, and juvenile facilities. According to Federal Interagency Reentry Council and Council of State Governments Justice Center officials, this resource allows both federal staff and local stakeholders to identify reentry resources in their jurisdictions and coordinate more effectively at the local level. However, at present, the map does not include the flow of funds to subgrantees.
For example, one grantee we interviewed in New York stated that its program did not provide direct services in the New York area— although the grantee is listed on the map as being a provider in New York. Rather, the grantee stated that its program provided funds to four of its affiliates in other states. Council of State Governments Justice Center officials stated that the map is based on the Federal Interagency Reentry Council’s inventory and is for informational purposes. Further, Federal Interagency Reentry Council officials stated that they continually work to update the inventory, and associated map, and that these efforts mark the first step to visually depicting the general flow of federal dollars. Five of the nine grantees we interviewed reported utilizing the map and finding it very useful. For example, three grantees reported that it was useful in helping them identify other resources in their jurisdictions. Three other grantees that had not used the map stated that they think it would be useful for future use. Library of reentry resources. The web-based library includes documents of interest to state and local policymakers, community and faith-based organizations, and the reentry population. Resources are organized by topic, such as juveniles, sex offenders, substance abuse, and mental health and include publications authored by organizations, researchers, service providers, and practitioners working in the reentry field. What Works in Reentry Clearinghouse. This website—launched in 2012—offers access to research on the effectiveness of a wide variety of reentry programs and practices. According to the website, it provides a one-stop shop for practitioners and service providers seeking guidance on evidence-based reentry interventions, as well as a useful resource for researchers and others interested in reentry. 
The clearinghouse currently includes information on employment, housing, and mental health, and the National Reentry Resource Center has plans to add additional issue areas. Since the site was recently launched, it is too soon to assess how grantees are using this website to inform their program design and implementation. Other efforts to share promising practices across agencies. In addition to some of the efforts listed above, DOJ, Labor, and HHS have, for example, held conferences or meetings for their grantees so that they may meet with one another, learn from panelists and presenters, and share information. DOJ officials stated that, for the first time, the May 2012 SCA conference was open to other federal agency reentry grantees. The grantees we interviewed stated that this type of coordination with other grantees has been, or would be, very useful, and that they learn information about other grantees through mechanisms such as conference calls and through their technical advisers. In addition, all three agencies share information on their agency websites about promising practices to sustain successful reentry efforts. Specifically, DOJ maintains the Crime Solutions website—CrimeSolutions.gov—which includes information to assist users with practical decision making and program implementation on specific justice-related programs, including reentry, and presents the existing evaluation research against standard criteria. CrimeSolutions.gov and the What Works in Reentry Clearinghouse are linked to each other. Further, Labor maintains a website for its RExO grantees to share information, such as stories of grantees’ efforts, and HHS officials stated that they are in the process of fully implementing a similar website.
Finally, the Council of State Governments Justice Center, with support from DOJ, launched a reentry program database in 2010, which highlights community-based reentry programs that self-report promising practices and policies that facilitate successful reentry. In addition to the steps that DOJ, Labor, and HHS have taken—independently and through the Federal Interagency Reentry Council—to coordinate reentry efforts, they have also taken, or plan to take, further action to reduce the potential that grantees are using funds from different agencies or programs for the same purpose. As our prior work at DOJ has shown, if an applicant, either as a grantee or as a subgrantee, receives multiple grant awards from overlapping programs, the risk of unnecessary duplication increases, since the applicant may receive funding from more than one source for the same purpose without federal agencies being aware that this situation exists. Such duplication may be unnecessary if, for example, the total funding received exceeds the applicant’s need, or if neither granting agency was aware of the original funding decision. To help guard against this, HHS requires its reentry grant applicants to provide information on their current or potential funding. Officials stated that they have used this information for some grant programs to help ensure that funds will not be awarded for activities that are already supported by other agencies. Further, in response to our findings and recommendations from prior work, which specifically addressed issues of overlap and the importance of DOJ having awareness of the other sources of funds that applicants may have applied for or are receiving, DOJ has plans under way to assess all of its grant programs to determine the extent of any unnecessary duplication. To assess individual grantee performance, DOJ, Labor, and HHS require their SCA, RExO, and ORP grantees to collect information on a variety of metrics, including those specific to recidivism.
According to DOJ’s Bureau of Justice Statistics, there is no single definition of recidivism that is used universally. Instead, recidivism is composed of multiple measures, including rearrest, reconviction, or a return to jail or prison with or without a new sentence—all of which indicate an individual’s return to the criminal justice system. Therefore, agencies require grantees to collect information on measures such as the number of program participants who are arrested or reincarcerated. In some cases, federal agencies may include all these measures in their assessment of how well grantees are doing to help inmates successfully transition to nonprison life. In other cases, an agency may use fewer measures. For the SCA grant program, DOJ defines recidivism as “a return to prison and/or jail with either a new conviction or as a result of a violation of the terms of supervision within 12 months of initial release.” Although DOJ officials have established a goal that SCA programs should reduce recidivism, they have not set a specific numeric target. Instead, officials stated that they compare the results individual grantees report in reducing recidivism with the average across all SCA grantees. DOJ officials stated that they are waiting for the results of an ongoing SCA program evaluation, which we discuss later in this report, so that they will have more information to determine what, if any, numerical targets would be most appropriate and what effect the SCA program has had on recidivism. Although DOJ officials have been collecting recidivism data from SCA grantees quarterly, they stated that they cannot use these data to determine the program’s impact on recidivism because they have concerns with the validity and reliability of the data.
Specifically, according to DOJ officials, some SCA grantees experienced difficulties accessing recidivism data, and as a result, data may not accurately reflect the criminal justice outcomes of the participants after they receive reentry services. For example, a grantee that is a county jail facility may not have access to criminal justice data outside its jurisdiction, which makes it difficult to track if a participant commits another crime in a different jurisdiction. To help address data reliability challenges, DOJ officials stated that, as of October 2012, they will require SCA grantees to report on recidivism measures once at the end of their grant period rather than every quarter, as previously required. DOJ officials told us that they believe the reduced frequency in reporting will give grantees more time to access and review data they acquire from secondary sources and result in numbers that more accurately reflect recidivism outcomes. In addition, DOJ officials stated that this change will provide DOJ staff with more time to provide SCA grantees targeted technical assistance in data collection and reporting, which they believe will help mitigate the challenge of acquiring data from secondary sources. In another step to help ensure the reliability of data DOJ collects, the department requires SCA grantees to report on the source of their data, as well as any steps taken to ensure its validity. For the RExO grant program, Labor defines recidivism as those cases in which an individual is “re-arrested for a new crime or re-incarcerated for revocation of the parole or probation order within 1 year of their release from prison.” If a participant is rearrested and subsequently released without being convicted of a new crime during that time, Labor stipulates that RExO grantees may remove these participants from the recidivism rate. 
Using this definition, Labor has set a target goal for its grantees that no more than 22 percent of all the participants a grantee serves should recidivate, which is half the national rate of recidivism at 12 months. Labor reported to Congress as part of its fiscal year 2013 Congressional Budget Justification that RExO grantees have achieved this goal, with an average of 14 percent of RExO participants recidivating. However, Labor officials stated that recidivism can be a difficult outcome measure to track and that they have had some concerns about the accuracy of data reported by grantees. As a result, according to Labor officials, they require RExO grantees to maintain documentation supporting the recidivism outcomes they report. During RExO program operations site visits, Labor officials stated that they review case files to ensure grantees are maintaining this documentation. Further, on an annual basis, Labor officials stated that they review all the performance data RExO grantees submit to ensure program outcomes have been reported for all participants. Additionally, Labor officials stated that the ongoing RExO program evaluation, discussed later in this report, will independently verify the recidivism outcomes reported by grantees. Although HHS officials stated that the department does not collect data on recidivism from its ORP grantees because no single definition of recidivism is used universally, HHS does require ORP grantees to report on the “criminal justice status” of program participants, which includes information on their arrest or incarceration. Using its definition, HHS has set a target goal for its grantees that 95 percent of all participants will have reported having no involvement with the criminal justice system for the 30 days prior to the reporting period—or no more than 5 percent of all participants reporting involvement with the criminal justice system during this time.
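The two agency measures described above reduce to straightforward rate calculations. The sketch below is illustrative only, not agency code: the record fields (rearrested, new_conviction, reincarcerated, cj_involvement_30d) and the sample cohort are hypothetical. It applies Labor's RExO definition, including the stipulation that participants rearrested but released without a new conviction may be removed from the rate, and HHS's ORP measure of self-reported criminal justice involvement in the 30 days prior to the reporting period.

```python
# Illustrative sketch of the two rate calculations; field names are hypothetical.

def rexo_recidivism_rate(participants):
    """Labor-style RExO rate: participants rearrested for a new crime or
    reincarcerated for a parole/probation revocation within 1 year of release.
    Those rearrested but released without a new conviction (and not
    reincarcerated) are removed from the rate, per Labor's stipulation."""
    counted = [p for p in participants
               if not (p["rearrested"] and not p["new_conviction"]
                       and not p["reincarcerated"])]
    recidivated = [p for p in counted
                   if (p["rearrested"] and p["new_conviction"])
                   or p["reincarcerated"]]
    return len(recidivated) / len(counted) if counted else 0.0

def orp_involvement_rate(participants):
    """HHS-style ORP measure: share of participants self-reporting any criminal
    justice involvement in the 30 days prior to the reporting period."""
    if not participants:
        return 0.0
    involved = [p for p in participants if p["cj_involvement_30d"]]
    return len(involved) / len(participants)

# Hypothetical four-person cohort.
cohort = [
    {"rearrested": True, "new_conviction": False, "reincarcerated": False,
     "cj_involvement_30d": True},   # removed from the RExO rate per the rule
    {"rearrested": True, "new_conviction": True, "reincarcerated": False,
     "cj_involvement_30d": True},
    {"rearrested": False, "new_conviction": False, "reincarcerated": True,
     "cj_involvement_30d": False},
    {"rearrested": False, "new_conviction": False, "reincarcerated": False,
     "cj_involvement_30d": False},
]

print(f"RExO rate: {rexo_recidivism_rate(cohort):.0%} (Labor target: <= 22%)")
print(f"ORP involvement: {orp_involvement_rate(cohort):.0%} (HHS target: <= 5%)")
```

In this hypothetical cohort, the first participant is excluded from the RExO denominator under the no-new-conviction rule, so the RExO rate is 2 of 3 counted participants, while the ORP measure counts all four participants in its denominator. The contrast shows why the two agencies' reported percentages are not directly comparable.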
HHS reported to Congress in its fiscal year 2013 Congressional Budget Justification that its ORP grantees active in fiscal year 2011 achieved this goal, with 4.8 percent of participants reporting involvement with the criminal justice system during the 30 days prior to the reporting period. In contrast to Labor’s requirement that RExO grantees maintain documentation supporting the recidivism outcomes they report, HHS requires ORP grantees to have their participants self-report any interaction with the criminal justice system for the 30 days prior to each reporting period. According to HHS officials, they take steps to validate data and perform periodic audits to ensure their validity. Table 6 provides an overview of the measures each agency collects from its grantees to indicate recidivism. In addition to recidivism-specific metrics, DOJ, Labor, and HHS also require grantees to collect and report on performance information related to other grant purposes. For example, Labor’s RExO program is focused on reducing recidivism through employment assistance. Accordingly, Labor officials also require RExO grantees to monitor and report on the percentage of participants who enter employment, the employment retention rate, and the average earnings of program participants. Similarly, as HHS’s ORP program aims to expand or enhance substance abuse treatment and related recovery, HHS officials require ORP grantees to monitor and report on the rate of substance abuse relapse and the number of participants who receive inpatient or outpatient treatment. Further, DOJ developed a core set of performance measures that all SCA grantees are required to report on, such as the rate of successful program completion, but DOJ also requires metrics particular to the specific SCA solicitation.
For instance, since the SCA Family-Based Prisoner Substance Abuse Treatment solicitation requires grantees to involve families in treatment services, DOJ requires grantees to report on the number of family members who participate in services. DOJ, Labor, and HHS analyze recidivism data to improve grant program operations in a variety of ways, but agencies could enhance information sharing about the methods they use to collect and analyze data to determine and report on overall program effectiveness. Agencies require their reentry program grantees to submit performance reports, at varying intervals, using their respective web-based grant management systems. According to officials from all three agencies, they use data grantees provide to determine the effectiveness of individual grantees. If data indicate a problem, officials stated that they may visit a grantee’s operations in person or otherwise provide targeted technical assistance to improve program outcomes. Table 7 describes the systems each agency uses, the frequency with which grantees are required to report, and the frequency with which agencies analyze grantee data. The grant management systems DOJ, Labor, and HHS use to monitor grantee effectiveness have different functionalities that present different benefits to agencies and grantees in collecting and analyzing performance data to improve operations. Specifically, Labor and HHS require RExO and ORP grantees to use MIS and SAIS to submit performance reports. Although grantees are required to submit reports to Labor and HHS on a quarterly or semiannual basis, because the systems allow grantees to enter participant-level data directly, grantees may enter these data more frequently for case management purposes. In fact, both agencies expect their grantees to use the systems as case management tools. 
According to HHS officials, they require ORP grantees to regularly enter participant-level data and provide data analysis training so grantees can use data to inform program decisions. For instance, HHS officials stated that ORP grantees use SAIS to aggregate data to identify trends or gaps in services and then make adjustments as needed in their operations. Further, two RExO grantees we met with reported finding Labor’s MIS system useful, as they could use a single system for both case management and grant-reporting purposes. In contrast, one SCA grantee we interviewed stated that it had to develop its own case management systems to track participant-level data, since DOJ requires its grantees to enter aggregate, rather than participant-level, data into DOJ’s PMT. Because RExO and ORP grantees can use MIS and SAIS to enter participant-level data and may do so on a more frequent basis, Labor and HHS officials can monitor and take action in response to those data. For instance, Labor officials use MIS to generate a weekly report that provides them with a snapshot of performance across all RExO grantees. According to officials, they can review data from the weekly report to see how many participants entered employment or who was arrested or reincarcerated. If data reveal that a particular grantee is showing a lower than expected rate of entered employment or other result indicating a program challenge, Labor officials stated that they take action to work with the grantee to identify resources and technical assistance that could improve the performance outcome. One RExO grantee we met with stated that Labor technical assistants visited its operations site about three or four times each year for the duration of its grant and provided helpful assistance that the grantee believes resulted in increased program participation. Similarly, according to HHS officials, they use SAIS on an ongoing basis to monitor performance across ORP grantees. 
According to program officials, if SAIS data indicate an issue, they can initiate on-site clinical or administrative technical assistance on an as-needed basis to improve a program outcome.

In contrast, DOJ collects aggregate-level data through PMT, which DOJ officials stated that they review quarterly. In addition, for certain grant programs, DOJ employs a semiannual review process that it calls GrantStat. Officials stated that during a GrantStat review, they assess PMT performance data and other relevant information, such as grantees’ semiannual narrative reports and input from DOJ’s technical assistance providers. DOJ’s goal during GrantStat is to determine how effective an overall grant program is in meeting its goals and which grantees may need targeted technical assistance, and in which areas, to improve their operations and participant outcomes. While DOJ has applied the GrantStat review process to several programs that it funds—as resources have permitted—officials stated that they used GrantStat specifically to assess the performance of selected SCA grantees in April and May 2011. As a result, DOJ officials stated that they had a better understanding of the quality of data that SCA grantees submit using PMT. They also stated that the assessment helped inform future funding decisions, such as which SCA grants funded in fiscal year 2009 should be continued. According to DOJ officials, planning is under way to determine the programs that will be prioritized next for GrantStat review.

Although agency officials stated that they have had discussions about the capabilities of their systems, agencies have not formally met with one another, or through the Federal Interagency Reentry Council, to discuss the relative strengths and challenges of their systems, how frequently they collect and analyze grantee performance data, and how they determine overall program effectiveness.
For example, according to Labor officials, they provided an informational overview of MIS to HHS officials, and provided HHS with access to MIS so officials could test the functionality of the system. In addition, DOJ officials stated that they had informational discussions with other members of the Federal Interagency Reentry Council, particularly Labor, about their performance measurement systems. Part of the Federal Interagency Reentry Council’s mission is to enhance communication, coordination, and collaboration across federal agency reentry initiatives. Further, we have previously reported on the importance of interagency coordination and information sharing across federal entities. We have also reported on the importance of measuring performance. Through information-sharing and collaborative forums, such as the one the Federal Interagency Reentry Council affords, all three agencies would have an opportunity to share information on (1) what data they collect, (2) how often they review and analyze data, and (3) what decisions their analyses inform to improve program operations and report results, as well as to consider the feasibility of adopting any promising practices as appropriate. DOJ, Labor, and HHS generally agreed that information sharing of this kind would be useful. Discussions going forward would need to consider things such as the design of each system, the strengths and limitations of the respective grant management systems vis-à-vis each agency’s grant management policies and requirements, and the costs and benefits of adopting promising practices.

In addition to the program-monitoring activities that agencies have taken at the individual grantee level, DOJ and Labor have spent approximately $22 million to commission program evaluations to assess the effectiveness of selected reentry grant programs. Program evaluations are individual systematic studies conducted periodically or on an ad hoc basis to assess how well a program is working.
They are often conducted by experts external to the program, inside or outside the agency, as well as by program managers. As we have previously reported, for programs where outcomes, such as reducing recidivism, may not be achieved quickly, or where their relationship to the program is uncertain, program evaluations may be needed in addition to performance measurement to examine the extent to which a program is achieving its objectives. Accordingly, DOJ and Labor have commissioned program evaluations, examples of which are listed below.

The Second Chance Act authorizes DOJ’s National Institute of Justice to evaluate the effectiveness of the SCA projects funded using a methodology that generates evidence of which reentry approaches and strategies are most effective. Accordingly, the National Institute of Justice commissioned evaluations of grant programs funded under two SCA solicitations—SCA Reentry Courts and SCA Adult Demonstration. DOJ estimates that a report providing final results for the SCA Reentry Courts will be completed in summer 2015 and that a report providing interim results of the SCA Adult Demonstration program will be completed in spring 2013. DOJ officials also told us that a report with final results of the SCA Adult Demonstration program should be completed in summer 2015.

Labor commissioned a program evaluation of its RExO grant program, with officials expecting final results in June 2014. The evaluation began in fiscal year 2008 and examines impacts on participants’ post-program labor market outcomes and rates of recidivism by comparing outcomes of RExO participants with the outcomes of randomly assigned individuals who are eligible for but do not receive RExO services. See appendix III for summary information regarding ongoing DOJ and Labor program evaluations.
The findings of these evaluations will likely add to the information agencies have to demonstrate the overall effectiveness of these programs as currently implemented in reducing recidivism. But because these evaluations are ongoing, the evidence available to agencies to demonstrate their effectiveness in reducing recidivism is limited. Nevertheless, agencies already have the results of program evaluations that Labor and DOJ commissioned for PRI and SVORI—predecessor reentry programs to SCA and RExO that were intended to reduce recidivism.

In terms of recidivism, the final PRI program evaluation published in January 2009 concluded that recidivism rates across all grantees appeared low at 1 year postrelease. However, the report noted that findings on recidivism should be interpreted with caution because “while [Labor] required grantees to verify and document that participants were not re-arrested before entering data into MIS, site visits revealed that some grantee staff used a ‘no news is good news’ approach by recording that participants had not recidivated, even if they were not able to verify the outcome.” The report stated that recidivism outcome data were missing for about 12 percent of PRI program participants. Additionally, as noted in the evaluation report, the study did not include a control or comparison group and therefore was not intended to assess the effectiveness of PRI at improving program outcomes. DOJ’s National Institute of Justice’s evaluation of the SVORI program concluded that when compared with nonprogram participants, SVORI participants showed no discernible differences on outcomes with respect to recidivism.
A subsequent report funded by DOJ concluded in February 2012 that additional research was necessary into the sequencing and effects of specific combinations of reentry services and that a longer follow-up period with program participants may be necessary to observe the positive effects of the SVORI program on participants’ criminal behavior and interactions with the criminal justice system. According to DOJ officials, the design of the ongoing SCA Adult Demonstration evaluation includes assessing the types, intensity, and quality of the services being provided over 3 years. Further, a 2010 DOJ Inspector General report identified program deficiencies with both PRI and SVORI. For instance, the report found that SVORI and PRI grantees were not required to identify a baseline recidivism rate that would be needed to calculate any changes in recidivism rates as a result of the program. Additionally, SVORI solicitations issued between 2002 and 2004 did not specify a time frame after release in which to track a program participant’s recidivism. As noted in the Inspector General report, a time frame after release in which to track recidivism outcomes is needed so that progress can be demonstrated and outcomes compared at varying points during the monitoring period. In addition, the report recommended, among other things, that agencies require reentry grantees to establish baseline recidivism rates to facilitate comparison of recidivism rates between participants of reentry programs and nonparticipants. For both the SCA and RExO reentry grant programs, DOJ and Labor have taken steps to address some of the deficiencies. For example, DOJ requires its SCA grantees to provide a baseline recidivism rate they can use later to determine program impact, if any, on recidivism. Additionally, both DOJ and Labor have specified a 12-month time frame after release from prison or jail by which to measure recidivism. 
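The 12-month postrelease window that DOJ and Labor specify implies a straightforward calculation. The sketch below is ours, for illustration only; the function name and the sample cohort data are hypothetical and are not drawn from any agency system or report.

```python
from datetime import date

# Illustrative sketch (not an agency system): compute a 12-month
# post-release recidivism rate, i.e., the share of released
# participants rearrested within 365 days of their release date.
def recidivism_rate(participants, window_days=365):
    """participants: list of (release_date, rearrest_date_or_None)."""
    tracked = [p for p in participants if p[0] is not None]
    if not tracked:
        return 0.0
    recidivated = sum(
        1 for release, rearrest in tracked
        if rearrest is not None
        and (rearrest - release).days <= window_days
    )
    return recidivated / len(tracked)

# Hypothetical cohort: two of four participants rearrested within a year
# of release (one rearrest falls outside the 12-month window).
cohort = [
    (date(2011, 1, 10), date(2011, 6, 1)),    # rearrested within window
    (date(2011, 2, 5), None),                 # no recorded rearrest
    (date(2011, 3, 1), date(2012, 8, 15)),    # rearrested after window
    (date(2011, 4, 20), date(2011, 12, 30)),  # rearrested within window
]
print(recidivism_rate(cohort))  # 0.5
```

A baseline rate computed the same way for a comparable pre-program cohort is what lets a grantee report any change attributable to the program, which is the gap the Inspector General identified in the earlier SVORI and PRI solicitations.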
Further, according to Labor officials, as a result of the deficiencies identified with the PRI and SVORI programs, the department implemented several steps, including annually reviewing data, to ensure the reliability and validity of the recidivism data that RExO grantees report.

In contrast, HHS officials stated that although they do conduct program evaluations, they have not done this for ORP because of its size compared with other HHS grant programs. According to HHS’s Office of the Inspector General, HHS is the largest grant-making organization in the federal government, awarding $370 billion in grants in fiscal year 2010. However, HHS does permit ORP grantees to spend up to 20 percent of their grant funds on program evaluations and data collection. According to HHS officials, they collect and periodically review these evaluations and have used the findings, alongside other research, to change elements of program design. For example, officials stated that they changed the ORP solicitation to require that grantees work with correctional facilities to ensure a smoother transition and greater continuity of treatment services as an inmate transitions to community-based treatment. However, officials stated that the majority of performance data they use to analyze ORP’s overall program effectiveness is gathered through the information individual grantees report using SAIS.

Given the number of federal agencies involved in reentry, the high levels of recidivism, and current resource constraints facing the federal government, it is important that federal agencies be well aware of how their grant funds are spent and monitor grantee performance to ensure the highest return on federal investment. Accordingly, federal agencies have taken a variety of actions to enhance coordination to prevent unnecessary duplication and monitor grantees’ performance.
These actions include developing a memorandum of understanding to improve formal coordination and communication, sharing draft grant solicitations with one another to obtain feedback before issuing them, and inventorying all major federal reentry programs. Additionally, as multiple agencies are involved in federal efforts to reduce recidivism, they have an opportunity to learn from one another about promising approaches for collecting and analyzing data and making determinations about individual grantee and overall grant program effectiveness. Given that the effect of prior reentry efforts—SVORI and PRI—on recidivism was inconclusive, effective analysis of recidivism data gathered from current reentry programs is particularly important. However, DOJ, Labor, and HHS officials have not formally shared information on the relative strengths and limitations of the respective grant management systems and their unique approaches to monitoring outcomes. By taking action to share information on how well their grantees reduce recidivism, agencies could leverage existing collaborations, such as the Federal Interagency Reentry Council, and further strengthen their program management.

To better utilize the performance information they collect from grantees, enhance the capacity of their respective grant management systems, and improve overall management of reentry programs designed to reduce recidivism, we recommend that the Attorney General, the Secretary of Labor, and the Secretary of Health and Human Services maximize existing information-sharing forums, such as the Federal Interagency Reentry Council, to (1) share details on how agencies collect and analyze their data, as well as how they determine program effectiveness, and (2) consider the feasibility of adapting any promising practices in the future.

We provided a draft of this report to DOJ, Labor, and HHS for comment. We received written comments from each that are reproduced in appendixes IV through VI, respectively.
In addition, DOJ and HHS provided technical clarifications, which we incorporated where appropriate. DOJ concurred with the recommendation in this report. Labor and HHS did not specifically state whether they concurred with our recommendation. All three departments reported that they would establish a subcommittee of the Federal Interagency Reentry Council Staff Working Group in the first quarter of fiscal year 2013 to (1) share performance measures, (2) assess and monitor grant performance information collected from grantees with a goal of improving overall management of reentry programs designed to reduce recidivism, and (3) communicate best practices for improving the coordinated delivery of evidence-based services. These proposed steps, if implemented, would address the intent of our recommendation.

We are sending copies of this report to the Attorney General, the Secretary of Labor, the Secretary of Health and Human Services, selected congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII.

According to the Department of Justice’s (DOJ) Federal Bureau of Prisons (BOP), the process of reentry begins the day an inmate is incarcerated, and generally should continue after an inmate is released.
BOP considers reentry to be a high priority and includes it in its mission: “The mission of the BOP is to protect society by confining offenders in the controlled environments of prisons and community-based facilities that are safe, humane, cost-efficient, and appropriately secure, and to provide inmates with a range of work and other self-improvement programs that will help them adopt a crime-free lifestyle upon their return to the community.” BOP estimated that about $640 million of its $6.6 billion fiscal year 2012 operating budget is dedicated to reentry activities. According to BOP officials, the estimate is based on the costs of larger programs that specifically support reentry, such as education and vocational training initiatives and drug treatment programs. But officials stated that because reentry is a process and not a specific program, some initiatives that support reentry would not be captured in this estimate. For example, at a minimum, all BOP institutions offer the General Equivalency Diploma or English as a Second Language programs and therefore BOP included the costs of these programs as part of its reentry activities budget. However, the estimate does not include BOP-sponsored activities that are relevant to reentry that may be held on more of an ad hoc basis at individual BOP institutions. As we reported in September 2012, according to BOP officials, growth in the inmate population has led to increased waiting lists for programs. For instance, as of the end of fiscal year 2011, about 2,400 inmates in male medium security institutions participated in residential drug treatment, almost 3,000 more inmates were on the waiting list to participate, and the average wait for enrollment exceeded 3 months. Table 8 illustrates the variety of reentry-related programs BOP provides. Further, BOP developed a plan in 2011 to implement the Inmate Skills Development Initiative. 
Through this initiative, BOP intends to measure skills inmates acquired through effective programs with the goal of reducing rates of recidivism. Once fully implemented, the process will involve identifying inmate strengths and weaknesses using a standardized assessment tool, linking programs used to identify specific deficit areas, and tracking the inmates’ progress on their individualized plans throughout incarceration. According to BOP officials, correctional facilities are currently utilizing an assessment tool to measure inmates’ skills, and consider the initiative’s plan to be a living document that they will continue to update and improve. In 2010, we reported on BOP’s progress in implementing the Inmate Skills Development Initiative. In that report, we recommended that BOP develop a plan for implementing the initiative that includes key tasks, responsibilities, and timelines, as well as a comprehensive cost estimate. BOP has since taken actions to implement these recommendations.

The Attorney General convened the Federal Interagency Reentry Council for its first meeting on January 5, 2011. At that meeting, the council adopted a mission statement to (1) make communities safer by reducing recidivism and victimization, (2) assist those returning from prison and jail in becoming productive citizens, and (3) save taxpayer dollars by lowering the direct and collateral costs of incarceration.
In addition, the council developed the following goals: identify research and evidence-based practices, policies, and programs that advance the council’s mission around prisoner reentry and community safety; identify federal policy opportunities and barriers to improve outcomes for the reentry population; promote federal statutory, policy, and practice changes that focus on reducing crime and improving the well-being of formerly incarcerated individuals, their families, and communities; identify and support initiatives in the areas of education, employment, health, housing, faith, drug treatment, and family and community well-being that can contribute to successful outcomes for formerly incarcerated individuals; leverage resources across agencies that support this population in becoming productive citizens, and reducing recidivism and victimization; and coordinate messaging and communications about prisoner reentry and the administration’s response to it.

According to the council, reentry is not only a public safety issue, but it also involves a variety of other issues, as shown in figure 2. To address this wide range of issues, at its first meeting, the council developed a number of short-term issues on which to focus. These included providing visibility and transparency to federal reentry programs and policies, coordinating and leveraging federal resources for reentry, and removing federal barriers to reentry. Council working group members, who currently represent 20 federal agencies, reported in 2011 and 2012 accomplishing several activities to achieve these short-term goals, some of which are highlighted in table 9.

Table 10 provides summary information about the Department of Justice and Department of Labor Second Chance Act (SCA) and Re-integration of Ex-offenders (RExO) evaluations of grant programs that support adult reentry services.
In addition to the contacts named above, Joy Booth, Assistant Director; Tracey Cross; Justin Dunleavy; David Alexander; Billy Commons, III; Katherine Davis; and Eric Hauswirth made key contributions to this report.

About 700,000 inmates are released from federal and state custody each year, and another 9 million are booked into and released from local jails. Former inmates face challenges as they transition into, or reenter, society, such as finding housing and employment. According to the most recent data available, more than two-thirds of state prisoners are rearrested for a new offense within 3 years of release, and about half are reincarcerated. Federal reentry grants are available for state and local providers, as successful reentry reduces rearrest or reincarceration, known as recidivism. GAO was asked to review (1) the extent to which there is fragmentation, overlap, and duplication across federal reentry grant programs; (2) the coordination efforts federal grant-making agencies have taken to prevent unnecessary duplication and share promising practices; and (3) the extent to which federal grant-making agencies measure grantees’ effectiveness in reducing recidivism. GAO identified and analyzed the grant programs and agencies that supported reentry efforts in fiscal year 2011; analyzed agency documents, such as grant solicitations; and interviewed agency officials.

In fiscal year 2011, the Departments of Justice (DOJ), Labor (Labor), and Health and Human Services (HHS) separately administered nine fragmented but minimally overlapping reentry grant programs with low risk of duplication. Specifically, GAO found that these grant programs are fragmented since more than one federal agency is involved in administering the programs.
Further, GAO found that overlap across the nine programs was minimal because the programs varied in (1) their applicant eligibility criteria, (2) the extent to which their funds solely benefit the reentry population, and (3) their primary services funded. For example, Labor's reentry program limits eligibility to private, nonprofit organizations that will use the funds primarily to assist current or former inmates--residing in or released from any facility--with their employment needs. In contrast, one of DOJ's reentry programs limits eligibility to governmental entities that will use the funds primarily to assist current or former inmates--residing in or released from state, local, or tribal facilities--with their substance abuse treatment needs. Given the variance across eligible applicants, beneficiaries, and primary services, the overlap across the nine programs is minimal and the risk of duplication--when two or more agencies or programs are engaged in the same activities, provide the same services to the same beneficiaries, or provide funding for the same purpose--is low. DOJ, Labor, and HHS have acknowledged where some overlap exists and therefore have taken steps to coordinate their reentry efforts to further prevent unnecessary duplication and share promising practices. For example, in 2011, the U.S. Attorney General convened the Federal Interagency Reentry Council--a group of federal agencies whose mission is to make communities safer; assist those returning from prison and jail in becoming productive, taxpaying citizens; and save taxpayer dollars by lowering the direct and collateral costs of incarceration. Further, agency officials from all three agencies reported that they share grant solicitations with one another before issuing them, and in 2009, DOJ and HHS established a memorandum of agreement to formally coordinate funding activities related to reentry. 
In addition, all three agencies have taken action, or have actions under way, to require their grant applicants to report other federal funds they are receiving, or plan to receive, and consider this information before they will make new award decisions. DOJ, Labor, and HHS are measuring grantee performance and conducting program evaluations, but they could enhance information sharing about the methods they use to collect and analyze data to determine how effectively grantees reduce recidivism. To monitor grantee performance, DOJ, Labor, and HHS collect different performance information, such as rearrest, reincarceration, and employment rates, through several web-based grant management systems, each with varying strengths and limitations. However, the agencies have not formally discussed these systems with one another, or how they analyze the data they collect, despite engaging in collaborations during which such discussions would be practical and useful. Consistent with effective interagency coordination practices, sharing information like this could help the agencies better leverage existing practices and improve their approaches to determining and reporting on grantee effectiveness. GAO recommends that DOJ, Labor, and HHS enhance their information sharing on approaches for determining how effectively grantees reduce recidivism. In response, DOJ, Labor, and HHS reported that they would take actions to address our recommendation. |
Through its development and use of PART, OMB has more explicitly infused performance information into the budget formulation process; increased the attention paid to evaluation and to performance information; and ultimately, we hope, increased the value of this information to decision makers and other stakeholders. By linking performance information to the budget process, OMB has provided agencies with a powerful incentive for improving both the quality and availability of performance information. The level of effort and involvement by senior OMB officials and staff clearly signals the importance of this strategy in meeting the priorities outlined in the PMA. OMB should be credited with opening up for scrutiny—and potential criticism—its review of key areas of federal program performance and then making its assessments available to a potentially wider audience through its Web site. As OMB and others recognize, performance is not the only factor in funding decisions. Determining priorities—including funding priorities—is a function of competing values and interests. Accordingly, we found that while PART scores were generally positively related to proposed funding changes in discretionary programs, the scores did not automatically determine funding changes. That is, for some programs rated “effective” or “moderately effective” OMB recommended funding decreases, while for several programs judged to be “ineffective” OMB recommended additional funding in the President’s budget request with which to implement changes. In fact, the more important role of PART was not its use in making resource decisions, but in its support for recommendations to improve program design, assessment, and management. As shown in figure 1, we found that 82 percent of PART’s recommendations addressed program assessment, design, and management issues; only 18 percent of the recommendations had a direct link to funding matters. 
OMB’s ability to use PART to identify and address future program improvements and measure progress—a major purpose of PART—depends on its ability to oversee the implementation of PART recommendations. As OMB has recognized, following through on these recommendations is essential for improving program performance and ensuring accountability. Currently, OMB plans to assess an additional 20 percent of all federal programs annually. As the number of recommendations from previous years’ evaluations grows, a system for monitoring their implementation will become more critical. However, OMB does not have a centralized system to oversee the implementation of such recommendations or evaluate their effectiveness. The goal of PART is to evaluate programs systematically, consistently, and transparently. OMB went to great lengths to encourage consistent application of PART in the evaluation of government programs, including pilot testing the instrument, issuing detailed guidance, and conducting consistency reviews. Although there is undoubtedly room for continued improvement, any tool is inherently limited in providing a single performance answer or judgment on complex federal programs with multiple goals. Performance measurement challenges in evaluating complex federal programs make it difficult to meaningfully interpret a bottom-line rating. OMB published both a single, bottom-line rating for PART results and individual section scores. It is these latter scores that are potentially more useful for identifying information gaps and program weaknesses. For example, one program that was rated “adequate” overall got high scores for purpose (80 percent) and planning (100 percent), but poor scores in being able to show results (39 percent) and in program management (46 percent). In a case like this, the individual section ratings provided a better understanding of areas needing improvement than the overall rating alone. 
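The relationship between the section scores and the bottom-line rating in the example above can be expressed as a weighted average. The weights (20, 10, 20, and 50 percent) and rating bands used below reflect OMB's published PART scoring scheme as we understand it; the code itself is an illustrative sketch, not OMB's actual tool, and it omits the separate "results not demonstrated" rating that applies when performance data are insufficient.

```python
# Illustrative sketch of how PART's four section scores roll up into a
# single bottom-line rating. Weights and rating bands are assumptions
# based on OMB's published scheme, not code drawn from OMB.
WEIGHTS = {"purpose": 0.20, "planning": 0.10, "management": 0.20, "results": 0.50}

def overall_rating(scores):
    """scores: dict mapping section name to a 0-100 section score."""
    total = sum(scores[section] * weight for section, weight in WEIGHTS.items())
    if total >= 85:
        rating = "effective"
    elif total >= 70:
        rating = "moderately effective"
    elif total >= 50:
        rating = "adequate"
    else:
        rating = "ineffective"
    return total, rating

# The program described above: strong purpose (80) and planning (100)
# scores, weak management (46) and results (39) scores.
total, rating = overall_rating(
    {"purpose": 80, "planning": 100, "management": 46, "results": 39}
)
print(round(total, 1), rating)  # 54.7 adequate
```

The sketch makes the report's point concrete: two very different performance profiles can average out to the same "adequate" label, which is why the individual section scores are more informative than the bottom-line rating alone.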
In addition, bottom-line ratings may force raters to choose among several important, but disparate goals and encourage a determination of program effectiveness even when performance data are unavailable, the quality of those data is uneven, or they convey a mixed message on performance. Any tool that is sophisticated enough to take into account the complexity of the U.S. government will always require some interpretation and judgment. Therefore it is not surprising that OMB staff were not fully consistent in interpreting complex questions about agency goals and results. In addition, the limited availability of credible evidence on program results also constrained OMB’s ability to use PART to rate programs’ effectiveness. Many PART questions contain subjective terms that are open to interpretation. Examples include terminology such as “ambitious” in describing sought-after performance measures. Because the appropriateness of a performance measure depends on the program’s purpose, and because program purposes can vary immensely, an ambitious goal for one program might be unrealistic for a similar but more narrowly defined program. Without further guidance, it is unclear how OMB staff can be expected to be consistent. We found inconsistencies in how the definition of acceptable performance measures was applied. Our review surfaced several instances in which OMB staff inconsistently defined appropriate measures—outcome versus output—for programs. Agency officials also told us that OMB staff used different standards to define measures as outcome-oriented. Outputs are the products and services delivered by the program whereas outcomes refer to the results of outputs. 
For example, in the employment and training area, OMB accepted short-term outcomes, such as obtaining high school diplomas or employment, as a proxy for long-term goals for the Department of Health and Human Services’ Refugee Assistance program, which aims to help refugees attain economic self-sufficiency as soon as possible. However, OMB did not accept the same employment rate measure as a proxy for long-term goals for the Department of Education’s Vocational Rehabilitation program because it had not set long-term targets beyond a couple of years. In other words, although neither program contained long- term outcomes, such as participants gaining economic self-sufficiency, OMB accepted short-term outcomes in one instance but not the other. The yes/no format employed throughout most of the PART questionnaire resulted in oversimplified answers to some questions. Although OMB believes it helped standardization, the yes/no format was particularly troublesome for questions containing multiple criteria for a “yes” answer. Agency officials have commented that the yes/no format is a crude reflection of reality, in which progress in planning, management, or results is more likely to resemble a continuum than an on/off switch. We found several instances in which some OMB staff gave a “yes” answer for successfully achieving some but not all of the multiple criteria, while others gave a “no” answer when presented with a similar situation. For example, OMB judged the Department of the Interior’s (DOI) Water Reuse and Recycling program “no” on whether a program has a limited number of ambitious, long-term performance goals, noting that although DOI set a long-term goal of 500,000 acre-feet per year of reclaimed water, it failed to establish a time frame for when it would reach the target. 
However, OMB judged the Department of Agriculture’s and DOI’s Wildland Fire programs “yes” on this question even though the programs’ long-term goals of improved conditions in high-priority forest acres are not accompanied by specific time frames. The lack of program performance information also creates challenges in effectively measuring program performance. According to OMB, about half of the programs assessed for fiscal year 2004 lacked “specific, ambitious long-term performance goals that focus on outcomes” and nearly 40 percent lacked sufficient “independent, quality evaluations.” Nearly 50 percent of programs assessed for fiscal year 2004 received ratings of “results not demonstrated” because OMB decided that program performance information, performance goals, or both were insufficient or inadequate. While the validity of these assessments may be subject to interpretation and debate, our previous work has raised concerns about the capacity of federal agencies to produce evaluations of program effectiveness as well as credible data. PART was designed for and is used in the executive branch budget preparation and review process. As a result, the goals and measures used in PART must meet OMB’s needs. By comparison, GPRA—the current statutory framework for strategic planning and reporting—is a broader process involving the development of strategic and performance goals and objectives to be reported in strategic and annual plans and reports. OMB said that GPRA plans were organized at too high a level to be meaningful for program-level budget analysis and management review. OMB acknowledges that GPRA was the starting point for PART, but as I will explain, it appears that OMB’s emphasis is shifting such that over time the performance measures developed for PART and used in the budget process may also come to drive agencies’ strategic planning processes. 
The fiscal year 2004 PART process came to be a parallel competing structure to the GPRA framework as a result of OMB’s desire to collect performance data that better align with budget decision units. OMB’s most recent Circular A-11 guidance clearly requires both that each agency submit a performance budget for fiscal year 2005 and that this should replace the annual GPRA performance plan. These performance budgets are to include information from the PART assessments, where available, including all performance goals used in the assessment of program performance done under the PART process. Until all programs have been assessed using PART, the performance budget will also include performance goals for agency programs that have not yet been assessed. OMB’s movement from GPRA to PART is further evident in the fiscal year 2005 PART guidance stating that while existing GPRA performance goals may be a starting point during the development of PART performance goals, the GPRA goals in agency GPRA documents are to be revised, as needed, to reflect OMB’s instructions for developing the PART performance goals. Lastly, this same guidance states that GPRA plans should be revised to include any new performance measures used in PART and that unnecessary measures should be deleted from GPRA plans. Although there is potential for complementary approaches to GPRA and PART, the following examples clearly illustrate the importance of carefully considering the implications of selecting a unit of analysis, including its impact on the availability of performance data. They also reveal some of the unresolved tensions between the President’s budget and performance initiative—a detailed budget perspective—and GPRA—a more strategic planning view. Experience with the PART highlighted the fact that defining a “unit of analysis” useful for both program-level budget analysis and agency planning purposes can be difficult. 
For example, disaggregating programs for PART purposes could ignore the interdependence of programs recognized by GPRA by artificially isolating programs from the larger contexts in which they operate. Agency officials described one program assessed with the PART—Projects for Assistance in Transition from Homelessness—that was aimed at a specific aspect of homelessness, that is, referring persons with emergency needs to other agencies for housing and needed services. OMB staff wanted the agency to produce long-term outcome measures for this program to support the PART review process. Agency officials argued that chronically homeless people require many services, and that this federal program often supports only some of the services needed at the initial stages of intervention. GPRA—with its focus on assessing the relative contributions of related programs to broader goals—is better designed to consider crosscutting strategies to achieve common goals. Federal programs cannot be assessed in isolation; performance also needs to be examined from an integrated, strategic perspective. One way of improving the links between PART and GPRA would be to develop a more strategic approach to selecting and prioritizing areas for assessment under the PART process. Targeting PART assessments based on such factors as the relative priorities, costs, and risks associated with related clusters of programs and activities addressing common strategic and performance goals not only could help ration scarce analytic resources but also could focus decision makers’ attention on the most pressing policy and program issues. Moreover, such an approach could facilitate the use of PART assessments to review the relative contributions of similar programs to common or crosscutting goals and outcomes established through the GPRA process. We have previously reported that stakeholder involvement appears critical for getting consensus on goals and measures.
In fact, GPRA requires agencies to consult with Congress and solicit the views of other stakeholders as they develop their strategic plans. Stakeholder involvement can be particularly important for federal agencies because they operate in a complex political environment in which legislative mandates are often broadly stated and some stakeholders may strongly disagree about the agency’s mission and goals. The relationship between the PART process and the broader GPRA strategic planning process is still evolving. As part of the executive branch budget formulation process, PART must clearly serve the President’s interests. Some tension about the amount of stakeholder involvement in the internal deliberations surrounding the development of PART measures and the broader consultations more common to the GPRA strategic planning process is inevitable. Compared to the relatively open-ended GPRA process, any budget formulation process is likely to seem closed. Yet we must ask whether the broad range of congressional officials with a stake in how programs perform will use PART assessments unless they believe the reviews reflect a consensus about performance goals among a community of interests, target performance issues that are important to them as well as the administration, and are based on an evaluation process in which they have confidence. Similarly, the measures used to demonstrate progress toward a goal, no matter how worthwhile, cannot serve the interests of a single stakeholder or purpose without potentially discouraging use of this information by others. Accordingly, if PART is to be accepted as other than one element in the development of the President’s budget proposal, congressional understanding and acceptance of the tool and analysis will be important.
Congress has a number of opportunities to provide its perspective on performance issues and performance goals, such as when it establishes or reauthorizes a new program, during the annual appropriations process, and in its oversight of federal operations. In fact, these processes already reflect GPRA’s influence. Reviews of language in public laws and committee reports show an increasing number of references to GPRA-related provisions. What is missing is a mechanism to systematically coordinate a congressional perspective. In our report, we have suggested steps for both OMB and the Congress to take to strengthen the dialogue between executive officials and congressional stakeholders. We have recommended that OMB reach out to key congressional committees early in the PART selection process to gain insight about which program areas and performance issues congressional officials consider warrant PART review. Engaging Congress early in the process may help target reviews with an eye toward those areas most likely to be on the agenda of the Congress, thereby better ensuring the use of performance assessments in resource allocation processes throughout government. We have also suggested that Congress consider the need to develop a more systematic vehicle for communicating its top performance concerns and priorities; develop a more structured oversight agenda to prompt a more coordinated congressional perspective on crosscutting performance issues; and use this agenda to inform its authorization, appropriations, and oversight processes. The PART process is the latest initiative in a long-standing series of reforms undertaken to improve the link between performance information and budget decisions. Although each of the initiatives of the past appears to have met with an early demise, in fact, subsequent reforms were strengthened by building on the legacy left by their predecessors.
Prior reforms often failed because they were not relevant to resource allocation and other decision-making processes, thereby eroding the incentives for federal agencies to improve their planning, data, and evaluations. Unlike many of those past initiatives, GPRA has been sustained since its passage 10 years ago, and evidence exists that it has become more relevant than its predecessors. PART offers the potential to build on the infrastructure of performance plans and information ushered in by GPRA and the law’s intent to promote the use of these plans in resource allocation decision making. GPRA improved the supply of plans and information, while PART can prompt greater demand for this information by decision makers. Enhanced interest and use may, in turn, give agencies greater incentive to devote scarce resources to improving their information and evaluations of federal programs. Increasing the use and usefulness of performance data is important not only to sustain performance management reforms but also to improve the processes of decision making and governance. Many in the United States believe there is a need to establish a comprehensive portfolio of key national performance indicators. This will raise complex issues, ranging from reaching agreement on performance areas and indicators to obtaining and sharing reliable information for public planning, decision making, and accountability. In this regard, the entire agenda of management reform at the federal level has been focused on shifting decision making and agency management from process to results. Although the PART is based on changing the orientation of budgeting, other initiatives championed by Congress and embodied in the PMA are also devoted to improving the accountability for performance goals in agency human capital management, financial management, competitive sourcing, and other key management areas.
In particular, we have reported that human capital—or people—is at the center of any serious change management initiative. Thus, strategic human capital management is at the heart of government transformation. High-performing organizations strengthen the alignment of their GPRA strategic and performance goals with their daily operations. In that regard, performance management systems can be a vital—but currently largely unused—tool to align an organization’s operations with individual day-to-day activities. As we move forward to strengthen government performance and accountability, effective performance management systems can be a strategic tool to drive internal change and achieve desired results. The question now is how to enhance the credibility and use of the PART process as a tool to focus decisions on performance. In our report, we make seven recommendations to OMB and a suggestion to Congress to better support the kind of collaborative approach to performance budgeting that very well may be essential in a separation of powers system like ours. Our suggestions cover several key issues that need to be addressed to strengthen and help sustain the PART process. We recommend that the OMB Director take the following actions:

Centrally monitor agency implementation and progress on PART recommendations and report such progress in OMB’s budget submission to Congress. Governmentwide councils may be effective vehicles for assisting OMB in these efforts.

Continue to improve the PART guidance by (1) expanding the discussion of how the unit of analysis is to be determined to include trade-offs made when defining a unit of analysis, implications of how the unit of analysis is defined, or both; (2) clarifying when output versus outcome measures are acceptable; and (3) better defining an “independent, quality evaluation.”

Clarify OMB’s expectations to agencies regarding the allocation of scarce evaluation resources among programs, the timing of such evaluations, as well as the evaluation strategies it wants for the PART, and consider using internal agency evaluations as evidence on a case-by-case basis—whether conducted by agencies, contractors, or other parties.

Reconsider plans for 100 percent coverage of federal programs and, instead, target for review a significant percentage of major and meaningful government programs based on such factors as the relative priorities, costs, and risks associated with related clusters of programs and activities.

Maximize the opportunity to review similar programs or activities in the same year to facilitate comparisons and trade-offs.

Attempt to generate, early in the PART process, an ongoing, meaningful dialogue with congressional appropriations, authorization, and oversight committees about what they consider to be the most important performance issues and program areas that warrant review.

Seek to achieve the greatest benefit from both GPRA and PART by articulating and implementing an integrated, complementary relationship between the two.

In its comments on our report, OMB outlined actions it is taking to address several of these recommendations, including refining the process for monitoring agencies’ progress in implementing the PART recommendations, seeking opportunities for dialogue with Congress on agencies’ performance, and continuing to improve executive branch implementation of GPRA plans and reports.
Our recommendations to OMB are partly directed at fortifying and enhancing the credibility of the PART itself and the underlying data used to make the judgments. Decision makers across government are more likely to rely on PART data and assessments if the underlying information and the rating process are perceived as being credible, systematic, and consistent. Enhanced OMB guidance and improved strategies for obtaining and evaluating program performance data are vital elements. The PART process can be made more sustainable if the use of analytic resources at OMB and the agencies is rationalized by reconsidering the goal of 100 percent coverage of all federal programs. Instead, we suggest a more strategic approach that targets assessments on related clusters of programs and activities. A more targeted approach stands a better chance of capturing the interest of decision makers throughout the process by focusing their attention on the most pressing policy and program issues and on how related programs and tools affect broader crosscutting outcomes and goals. Unfortunately, the governmentwide performance plan required by GPRA has never been engaged to drive budgeting in this way. Improving the integration of inherently separate but interrelated strategic planning and performance budgeting processes can help support a more strategic focus for PART assessments. GPRA’s strategic planning goals could be used to anchor the selection and review of programs by providing a foundation to assess the relative contribution of related programs and tools to broader performance goals and outcomes. Finally, refining the PART questionnaire and review process and improving the quality of data are important, but the question of whose interests drive the process is perhaps paramount in our system. Ultimately, the impact of PART on decision making will be a function not only of the President’s decisions, but of congressional decisions as well.
Much is at stake in the development of a collaborative performance budgeting process. Not only might the PART reviews come to be disregarded absent congressional involvement, but more important, Congress will lose an opportunity to use the PART process to improve its own decision making and oversight processes. This is an opportune time for the executive branch and Congress to carefully consider how agencies and committees can best take advantage of and leverage the new information and perspectives coming from the reform agenda under way in the executive branch. Ultimately, the specific approach or process is not important. We face a long-term fiscal imbalance, which will require us to reexamine our existing policies and programs. It is all too easy to accept “the base” as given and to subject only new proposals to scrutiny and analysis. The norm should be to reconsider the relevance or “fit” of any federal program, policy, or activity in today’s world and for the future. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions you or the other members of the Committee may have at this time. For future contacts regarding this testimony, please call Paul L. Posner, Managing Director, Federal Budget Issues, at (202) 512-9573. Individuals making key contributions to this testimony included Denise M. Fantone and Jacqueline Nowicki. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The Office of Management and Budget’s (OMB) Program Assessment Rating Tool (PART) is meant to provide a consistent approach to evaluating federal programs during budget formulation.
The subcommittee asked GAO to discuss our recent report, Performance Budgeting: Observations on the Use of OMB's Program Assessment Rating Tool for the Fiscal 2004 Budget (GAO-04-174) and strategies for improving PART and furthering the goals envisioned by the Government Performance and Results Act of 1993 (GPRA). PART helped structure OMB's use of performance information for internal program and budget analysis and stimulated agency interest in budget and performance integration. Moreover, it illustrated the potential to build on GPRA's foundation to more actively promote the use of performance information in budget decisions. OMB deserves credit for inviting scrutiny of its federal program performance reviews and sharing them on its Web site. Much of PART's potential value lies in its program recommendations but follow through will require sustained commitment by agencies and OMB. OMB devoted considerable effort to developing PART, but diagnosing problems and rating programs are only the beginning of PART's ambitious agenda. Implementing change and providing oversight takes time; OMB needs to be mindful of this as it considers capacity and workload issues in the PART. As is to be expected in the first year of any reform, PART is a work in progress and we noted in our report where OMB might make improvements. Any tool that is sophisticated enough to take into account the complexity of the U.S. government will require exercising some judgment. Therefore it is not surprising that we found inconsistencies in OMB staff interpreting and applying PART. PART provides an opportunity to more efficiently use scarce analytic resources, to focus decision makers' attention on the most pressing policy issues, and to consider comparisons and trade-offs among related programs by more strategically targeting PART assessments based on such factors as the relative priorities, costs, and risks associated with related clusters of programs and activities. 
PART assessments underscored long-standing gaps in performance and evaluation information throughout the federal government. By reaching agreement on areas in which evaluations are most essential, decision makers can help ensure that limited resources are applied wisely. The relationship between PART and the broader GPRA strategic planning process is still evolving. Although PART can stimulate discussion on program-specific performance measurement issues, it is not a substitute for GPRA’s strategic, longer-term focus on thematic goals and department- and governmentwide crosscutting comparisons. Although PART and GPRA serve different needs, a strategy for integrating the two could help strengthen both. Federal programs are designed and implemented in dynamic environments where competing program priorities and stakeholders’ needs must be balanced continually and new needs addressed. PART clearly serves OMB’s needs, but questions remain about whether it serves the various needs of other key stakeholders. If PART results are to be considered in the congressional debate, it will be important for OMB to (1) involve congressional stakeholders early in providing input on the focus of the assessments; (2) clarify any significant limitations in the assessments and underlying performance information; and (3) initiate discussions with key congressional committees about how they can best leverage PART information in congressional authorization, appropriations, and oversight processes.
In the Veterans Entrepreneurship and Small Business Development Act of 1999, as amended, Congress established various programmatic requirements for The Veterans Corporation to address perceived shortfalls in federally provided services for veterans. For example, The Veterans Corporation is to (1) expand the provision of and improve access to technical assistance regarding entrepreneurship; (2) assist veterans with the formation and expansion of small businesses by working with and organizing public and private resources; (3) establish and maintain a network of information and assistance centers for use by veterans; (4) establish a Professional Certification Advisory Board (PCAB) to create uniform guidelines and standards for the professional certification of members of the armed services; and (5) assume the duties, responsibilities, and authority of the Advisory Committee on Veterans Business Affairs, a body created by the Act, by October 1, 2004. The Veterans Corporation is a nonprofit corporation chartered in the District of Columbia and, according to its enabling legislation, has authority to solicit, receive, and disburse funds from private, federal, state, and local organizations. To fund The Veterans Corporation, Congress authorized $12 million in federal appropriations over 4 fiscal years—$4 million in each of the first 2 years and $2 million in each of the following 2 years—with the expectation that The Veterans Corporation would become financially self-sufficient. The Veterans Corporation received its first appropriation in March 2001. The Act also contains a matching funds provision, applicable to fiscal years 2002 through 2004, that limits the availability of appropriated funds to amounts The Veterans Corporation certifies it will provide from sources other than the federal government.
For fiscal year 2002, the amount of appropriated funds made available to The Veterans Corporation was limited to not more than twice the amount the corporation certified it would provide for that fiscal year from sources other than the federal government. In effect, The Veterans Corporation received $2 in federal appropriations for every $1 it raised from nonfederal sources. For the remaining 2 fiscal years, the matching requirement was on a dollar-for-dollar basis. The Veterans Corporation has continued to add, refocus, and expand its programs and services to veterans to better respond to its entrepreneurial assistance and training mandates under the Act. The VET program has remained, however, at the center of the organization’s business assistance and training efforts. Additionally, The Veterans Corporation’s services generally complemented federal programs in that they were either targeted specifically to veterans or not offered elsewhere. In one instance, however, the efforts of The Veterans Corporation and VA to create separate veterans business directories appeared to be largely duplicative. According to an official at The Veterans Corporation, collaboration with federal agencies has improved, as evidenced by the mutually beneficial ventures it has undertaken with these agencies. However, The Veterans Corporation faces several challenges, described in our previous report, that still impede its progress in fulfilling other mandates under the Act. They include a lack of staff and funding for the PCAB and the question of whether the PCAB mission can be accommodated within The Veterans Corporation, identification of veteran-owned businesses, and how to address its legal status concerns. At the time of our last report, many of The Veterans Corporation’s entrepreneurial services, such as a microloan program, business insurance, and online buying and selling of veteran-owned goods and services, had started.
Since then, The Veterans Corporation has expanded or added several services, primarily in the areas of finance, accounting, and contracting-related opportunities:

Veterans Marketplace. The Veterans Corporation has pilot efforts under way to allow veterans to sell goods and services online to a VA hospital and a school district. Also, since our last report, The Veterans Corporation has expanded this program to include a separate trading directory in which veteran-owned businesses can be listed as suppliers within a private business directory. The directory is managed by The Veterans Corporation’s partnering organization, Perfect Commerce.

Veterans Small Business Finance Program. The Veterans Corporation has partnered with Newtek Small Business Finance, Inc., a provider of loans and other financing options to small businesses throughout the United States. Newtek provides loans (ranging from $50,000 to $2 million) at discounted rates from the lender’s normal loan rates and assists in making SBA loan guarantees available to qualified veteran businesses nationwide. Loan applications are completed electronically through The Veterans Corporation’s Web site. As of June 30, 2004, The Veterans Corporation helped create 473 applications and 9 SBA-approved loans.

Accounting and Tax Services. The Veterans Corporation is partnering with Newtek Business Services, Inc., to offer services such as bill payment, periodic financial statements and reports, and tax filing and planning services for veteran-owned small businesses at a discount.

Merchant Processing Services. In partnership with Newtek Merchant Solutions, The Veterans Corporation is offering credit card and debit card processing and check verification for veteran-owned small businesses.

Veterans Corporation Business Directory. In May 2003, The Veterans Corporation created this directory with assistance from SBA to help veteran-owned businesses and businesses owned by Army Reserve and National Guard service members advertise their services. The directory contains information such as company profiles and is available on the Internet to anyone interested in working with veteran-owned businesses. The directory listed approximately 2,668 businesses as of June 30, 2004.

Veterans Pipeline. In partnership with ePipeline, this subscription service targets veteran small business owners interested in federal or state contracting opportunities. Veteran subscribers may access research on more than 7,000 contracting opportunities, including information on who received prior awards and subcontracts.

Veterans Purchase Net. The Veterans Corporation put in place a bid-and-response system for buyers and sellers of products or services in partnership with Diversity Vendors, Inc. The sellers receive nightly e-mails related to contracting opportunities; the sellers can then submit bids to compete for these contracts. This is a subscription service for veteran-owned small businesses.

While The Veterans Corporation has expanded and added programs, the VET program has continued to be the focal point of its efforts. As mentioned in our earlier report, the program is a partnership with the Ewing Marion Kauffman Foundation’s FastTrac Program, a successful entrepreneurship-training program. The VET program incorporates classroom instruction, mentoring, networking, and technology training. Officials at The Veterans Corporation told us that the VET program was their most successful effort to date.
Since our last report, The Veterans Corporation has expanded the number of class sites and locations for its three VET courses: (1) The New Venture, which focuses on starting a business; (2) The Planning Program, which focuses on expanding a business; and (3) Listening to Your Business, a seminar that focuses on assessing the health and market share of an existing business. In fiscal year 2003, The Veterans Corporation hosted 33 courses in eight states; 506 veterans participated in the program. Of the 506 participants, 458 graduated in fiscal year 2003, including 77 veterans who graduated from VET courses held at local SBA-sponsored Small Business Development Centers. These centers provide one-stop management and technical assistance to individuals and small businesses at locations such as colleges and universities. Additionally, some programs and services were still evolving to better address Veterans Corporation mandates. For example, the organization has refocused the Veterans Business Success Seminars, mentioned in our previous report, to use community-based organizations (CBOs) to help veterans with training and services not currently available to veteran entrepreneurs. A Veterans Corporation official stated that this new strategy more consistently meets the mandate to establish and maintain a network of information and assistance centers that veterans and the public can use. However, the refocused program, now called the National Veterans Community-Based Organization Initiative, was still in its early stages. An official at The Veterans Corporation stated that the purpose of the program is to direct veterans to existing local service providers that offer assistance to small businesses and to identify and address gaps in service. For example, The Veterans Corporation initially funded two CBOs, the Veteran Advocacy Foundation in St.
Louis, Missouri, and the American Veterans Coalition in San Francisco, California, to survey the extent of entrepreneurial services to veterans in their communities. After completing its survey, the St. Louis CBO also received additional funding to implement its plan to enhance entrepreneurial opportunities for veterans. A third CBO, Robert Morris University in Pittsburgh, Pennsylvania, was also funded to survey the availability of local business services to veterans in its community. According to officials at The Veterans Corporation, they were also developing another program intended to address their mandate to organize public and private resources to assist veterans with the formation and expansion of small businesses. Officials first met to discuss the new program, the National Veterans Entrepreneurial Education Initiative, in April 2004. The intent of this effort is to leverage resources from the private and public sectors to coordinate and focus (at a national level) entrepreneurial education and educational assistance. The initial participants included, but were not limited to, VA, SBA, and Small Business Development Centers. The Veterans Corporation board chair explained that besides education, other elements of this initiative might include mentoring, counseling, and early-stage and advanced small business training. At the time of our review, The Veterans Corporation was developing a detailed concept paper on this initiative. Figure 1 shows the status of key initiatives that The Veterans Corporation has undertaken. Additionally, appendix II lists activities that address statutory requirements under the Act (Pub. L. No. 106-50). The services offered by The Veterans Corporation generally complemented services offered by federal agencies, including VA and SBA. As noted in our previous report, Veterans Corporation officials said that they have been careful not to duplicate existing services. 
Most of The Veterans Corporation's programs were intended to fill gaps in federal services by offering services that were either targeted specifically to veterans or unavailable elsewhere. For example, the VET program provides small business training to veterans. Although such training is widely available in both the public and private sectors, The Veterans Corporation's program is unique because it limits enrollment to veterans and their spouses, subsidizes course fees for veterans, and tailors curricula to the needs and experiences of veterans. The Veterans Corporation's small business finance program is another program that complements existing federal programs and efforts. While SBA provides loan guarantees for many small business owners, including veterans, veterans generally receive no discounts from their lenders when qualifying for this financing. In contrast, The Veterans Corporation collaborates with a private lender to obtain reduced loan rates for veterans on SBA-guaranteed loans. We were unable to identify any other loan program specifically targeting veterans in this manner. We also identified several instances of collaboration between The Veterans Corporation and federal agencies that serve veterans. Although, as indicated in our previous report, such collaboration was limited, an official at The Veterans Corporation explained that collaboration has improved since then to include mutually beneficial ventures with federal agencies that also provide veteran entrepreneurial services. For example, SBA's Office of Veterans Business Development (OVBD) provided The Veterans Corporation with support for the VET and CBO programs, including assistance with program design and a $45,000 grant to help initiate the VET program in three pilot locations. These activities support the missions of both The Veterans Corporation and OVBD to assist veterans.
Furthermore, The Veterans Corporation entered into an agreement with SBA's OVBD and a Florida Small Business Development Center to provide resources in exchange for the opportunity to promote its services to veterans who complete the course. The Veterans Corporation was also collaborating with CVE to link their online veteran-owned business directories. Thus, veterans who register for one directory can access the registration page of the other directory by clicking a single button. While the work of The Veterans Corporation generally has complemented the work of federal agencies, based on our analysis it appeared that there was substantial duplication between The Veterans Corporation's online Business Directory and CVE's online business directory, the Vendor Information Pages. The directories both (1) aid federal and private-sector purchasing agents by identifying the goods and services offered by veteran-owned and service-disabled veteran-owned businesses; (2) are of similar size, were developed from similar information sources, and employ similar methods to identify and register veteran-owned businesses on their sites; and (3) actively seek the agreement of purchasing agents or prime contractors to use the directory before using any other source of information about veteran-owned businesses. While officials from The Veterans Corporation and VA acknowledged that their directories were similar, they did not believe they were in competition. According to these officials, some differences exist between the two directories, including different registration requirements and different emphases on public versus private purchasing. Differences we identified are listed in figure 2. Additionally, each agency had a different motivation for creating its directory. CVE built its database to fulfill a mandate to provide federal agencies with information about service-disabled veteran-owned small businesses.
The Veterans Corporation’s primary motive was to provide a service that would attract veteran entrepreneurs to whom The Veterans Corporation could later market its products and services. Neither agency believes that its databases could be merged because The Veterans Corporation markets its services to its database members, and CVE is prohibited from releasing information for this purpose. As mentioned previously, The Veterans Corporation and CVE have links on both their Web sites to encourage veterans to sign up for both databases. In our earlier report, we noted several challenges that hindered The Veterans Corporation from fulfilling its mandates under the Act. They included achieving the complex goals derived from the PCAB’s mission, identifying the veteran-owned business population, and resolving the unclear legal status of The Veterans Corporation. According to Veterans Corporation officials, the organization still faces these challenges. In our earlier report, Veterans Corporation officials expressed their opinion that the PCAB would be more appropriately led by another entity and that The Veterans Corporation had not been provided the funding or authority to achieve the PCAB mandates. During our review, Veterans Corporation officials reiterated these same concerns. The PCAB chair pointed out that the mandate to address certification and licensing guidelines and barriers did not directly relate to The Veterans Corporation’s core mission to assist veterans with entrepreneurship activities. The PCAB chair also questioned the board’s ability to carry out mandated activities without a paid professional staff. PCAB members serve on a volunteer basis with no operating budget. In response to these concerns, the PCAB drafted an issue paper that provides an overview and assessment of licensure and certification resources, organizations, and tools available to assist active-duty and transitioning military personnel seeking employment.
Moreover, the paper provides recommendations to The Veterans Corporation’s board of directors, requesting that Congress eliminate its requirement to create uniform guidelines and standards for the professional certification of armed services members and expand an existing Army certification and licensing effort to the entire Department of Defense (DOD). According to the PCAB chair, the PCAB mission would better fit with the missions of DOL or DOD because these two departments have been involved in licensing and certification issues. Moreover, the two departments signed a memorandum of understanding in July 2003 that included a provision to promote cooperative efforts relating to licensing and certification issues. We noted in our earlier report that The Veterans Corporation experienced difficulties in obtaining information from government sources on military personnel transitioning to civilian life and existing veteran-owned businesses because of privacy law restrictions. As a result, as of June 30, 2004, The Veterans Corporation had identified approximately 12,000 veteran-owned and potential veteran-owned businesses, primarily through its Web site or Business Directory registrations. According to Veterans Corporation officials, the corporation would need to identify about 250,000 to 300,000 veteran business owners to effectively carry out its mission of providing services and achieving self-sufficiency. One of the officials explained that the industry guidance related to its business model (marketing to an affinity group) suggested that at least 250,000 names would be needed to generate desired revenue from commissions based on sales volume. Specifically, based on this model, The Veterans Corporation could expect to generate between $10 and $20 for each affinity member—money that would help it achieve its self-sufficiency goals. (We discuss efforts to achieve self-sufficiency in more detail later in this report.)
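The affinity-revenue assumption described above reduces to simple arithmetic. The sketch below is a hypothetical illustration, using only the figures Veterans Corporation officials cited (250,000 to 300,000 members at $10 to $20 in commissions per member), and shows the implied annual revenue range:

```python
# Hypothetical sketch of the affinity-marketing revenue model described above.
# The member counts (250,000-300,000) and per-member commissions ($10-$20)
# are the figures cited by Veterans Corporation officials; nothing else is assumed.
def affinity_revenue_range(members, low=10, high=20):
    """Return the (low, high) projected annual commission revenue in dollars."""
    return members * low, members * high

# 250,000 members implies $2.5-$5.0 million per year in commissions.
print(affinity_revenue_range(250_000))  # (2500000, 5000000)
# 300,000 members implies $3.0-$6.0 million per year.
print(affinity_revenue_range(300_000))  # (3000000, 6000000)
```

This arithmetic makes plain why the officials treated a list of at least 250,000 names as the threshold for generating the desired commission revenue.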
The officials with whom we spoke explained that they have not been able to mount a comprehensive, targeted effort to identify this population. Other efforts to identify veteran-owned businesses have all had limited success. According to these officials, they have tried to identify veterans through (1) advertising in journals aimed at veterans, (2) issuing press releases in newspapers, (3) making public service announcements on television and radio, (4) undertaking speaking engagements at trade associations, (5) participating at government-sponsored small business fairs, and (6) communicating with veterans through The Veterans Corporation’s Web site and its VET program. Additionally, The Veterans Corporation has acquired some information through two other databases, SBA’s Procurement Marketing and Access Network (PRO-Net) and DOD’s Central Contractor Registration (CCR) database. In June 2004, The Veterans Corporation began a marketing effort to identify the veteran-owned business population. It is using a direct marketing firm to reach this population through e-mail, telemarketing, and direct mail. A Veterans Corporation official stated that its initial marketing effort indicated that direct mail was the most effective way of obtaining new members. The Veterans Corporation plans to meet with the direct marketing firm in September 2004 to discuss additional work. During our work on our last report, officials at The Veterans Corporation indicated that questions about the legal status of The Veterans Corporation as either a public agency or private corporation had, at times, complicated organizational and program development efforts. An official at The Veterans Corporation expressed concerns that if The Veterans Corporation were a federal agency, its ability to raise private funds and become self-sustaining, as contemplated in the Act, would be compromised.
On the other hand, according to a Veterans Corporation official, federal agencies such as DOD and DOL had not been willing to share nonpublic information and resources (that they could share with other federal agencies) because of concerns that The Veterans Corporation was a private entity. As we discussed in our last report, The Veterans Corporation received conflicting opinions from the Office of Personnel Management (OPM) and private law firms: OPM concluded that the corporation is a government-controlled corporation subject to most provisions of Title 5 of the United States Code, while the law firms concluded that the corporation is not a government-controlled corporation or executive agency for purposes of certain laws applicable to federal agencies, including provisions of Title 5 and the Federal Acquisition Regulations. In March 2004, the Department of Justice’s Office of Legal Counsel (OLC) issued a memorandum in response to an Office of Management and Budget (OMB) request to further clarify The Veterans Corporation’s legal status. The OLC opinion concluded that The Veterans Corporation is a “government corporation” under 5 U.S.C. § 103 (2000) and an “agency” under 31 U.S.C. § 9102 (2000). According to officials of The Veterans Corporation, based on the OLC opinion, OMB has advised The Veterans Corporation that it must comply with laws, regulations, and guidance applicable to executive branch agencies. Veterans Corporation officials observed that this would mean, among other things, compliance with OPM personnel reporting and other personnel-related requirements, as well as budget and accounting requirements. Although the external audit did not identify material weaknesses in The Veterans Corporation’s internal controls, The Veterans Corporation lacked some key operational controls.
Specifically, based on its fiscal year 2003 external audit, The Veterans Corporation did have internal control deficiencies over financial reporting; however, its external auditor determined that these were not material weaknesses. According to Veterans Corporation officials, the corporation implemented corrective actions for the issues the external auditor identified and also implemented controls to prevent duplicate payments such as those identified by GAO. Additionally, The Veterans Corporation has implemented various controls over its obligation and expenditure payment processes, including limits on the ability of management officials to make check disbursements without board of director approval. However, The Veterans Corporation lacked some important operational controls for its planning and reporting processes. More specifically, although The Veterans Corporation adopted some strategic planning best practices, it did not use others. For instance, the strategic plan generally did not contain outcome-oriented or measurable goals and objectives, which prevented The Veterans Corporation from assessing the effectiveness of its programs and services. Additionally, without meaningful performance measures, The Veterans Corporation has been unable to provide Congress, through its annual report, with an assessment of its progress or the outcomes of its efforts. According to its external auditor, The Veterans Corporation had internal control issues that could have adversely affected its ability to administer a major federal program in accordance with applicable laws, regulations, contracts, and grants. Specifically, the external auditor found in its fiscal year 2003 audit that The Veterans Corporation did not consistently enforce its expense reimbursement policy. The external auditor classified the internal control matter as a reportable condition and did not identify any instances of material weaknesses. This reportable condition was detailed in a letter to management.
According to Veterans Corporation officials, they have addressed the reported deficiencies. In our review of The Veterans Corporation’s fiscal year 2003 expenditures, we found that The Veterans Corporation made two duplicate payments to the same vendor—one of $3,142 and the other of $8,976. The amounts were not material to the financial statements. According to Veterans Corporation officials, The Veterans Corporation has taken steps to prevent future duplicate payments and pursued reimbursement from the vendor. According to Veterans Corporation officials, in May 2003, the board of directors significantly restructured its operations, responsibilities, and procedures and those of its committees. The board has primary responsibility for the governance of The Veterans Corporation and exercises that governance, both directly and indirectly, through delegation of authority to the chief executive officer (CEO). To accomplish the restructuring, the board passed a series of resolutions, which transferred many of the previous authorities vested in the executive committee to the full board. Among other things, the board amended the CEO’s expense authority based upon a recommendation from the executive committee. Specifically, it authorized the CEO to sign all contracts or expenditures that relate to strategies and business initiatives previously discussed with and approved by the board or executive committee up to and including $100,000. However, the board retained authority to approve new expenditures in excess of $25,000. As we previously reported, the board first established disbursement authorities for executive-level staff in March 2001, but from March 2001 to April 2003, the executive committee was responsible for making time-sensitive decisions on behalf of the board between quarterly board meetings.
Additionally, as stated in our previous report, the board resolved that checks written in amounts of $5,000 or less require one authorized signature; those in excess of $5,000 require two authorized signatures. Moreover, both the CEO and the senior vice president were authorized to sign checks. Since the restructuring, the board chair has taken a more active role in the operations of The Veterans Corporation. While day-to-day management is the responsibility of the CEO and management, the board chair meets weekly with the CEO to discuss the corporation’s activities. Additionally, the board reviews monthly activity reports and quarterly financial data such as income statements and balance sheets. We identified weaknesses in The Veterans Corporation’s planning and reporting processes that primarily resulted from a lack of measurable, outcome-oriented performance measures. The Veterans Corporation relied on its strategic plan, an important operational control, for both planning and reporting on its programs. It prepared a 5-year strategic plan, which outlined the corporate mission, goals, and priorities, and an annual business plan, which contained more detailed objectives and action plans for each division within the organization. Additionally, as required by the Act, The Veterans Corporation submitted an annual report to Congress, which was based on its strategic plan. Staff members also prepared periodic reports for the board of directors and a public annual report. We evaluated The Veterans Corporation’s planning and reporting processes according to best practices identified by government and nonprofit strategic planning literature and experts, including the following:

- The mission statement should identify what makes the organization unique and define the outcomes of its activities.
- Goals should be aligned with the mission statement, as well as with strategies for achieving goals.
- Goals and objectives should be measurable and outcome-oriented, rather than process-oriented. Thus, rather than measuring the number of activities, they should measure the end results of activities on the target population.
- The plan should identify and discuss internal and external factors that could affect the performance of the organization.
- The plan should be developed in consultation with key stakeholders, including, in this case, Congress.

The Veterans Corporation has adopted several of these practices. For instance, its mission statement defined the outcomes of its activities—the formation and expansion of small business concerns by veterans, including service-disabled veterans. The strategic plan also had long-term corporate goals that were aligned to the mission statement. According to Veterans Corporation officials, these long-term corporate goals established broad board directives for its staff. Furthermore, the strategic plan also contained annual objectives that supported the corporate goals, and the business plan contained action plans to meet these annual objectives. For example, for fiscal year 2004, The Veterans Corporation wanted to increase the number of VET graduates to 750 and have VET programs with certified facilitators and administrators in at least 15 states. The action plan indicated that staff would analyze historical results to date and project the numbers by fiscal year quarters and locations to achieve this objective. Moreover, The Veterans Corporation’s business plan included an analysis of internal and external factors that might affect the performance of the corporation, and officials told us that they regularly consulted with veterans groups and other stakeholders. However, many of the goals and objectives in The Veterans Corporation’s strategic plan were not measurable. Thus, they generally did not define specific methods for measuring success over the short and long term.
While a few of its annual objectives had some performance measures in place, such as the target number of participants in the VET program and fund-raising amounts, The Veterans Corporation had only two measurable long-term goals, including achieving financial self-sufficiency. The other goals and many of the objectives lacked performance measures to indicate the expected progress over the 5 years covered by the strategic plan. For example, one fiscal year 2004 objective was to conduct the affairs of The Veterans Corporation in an effective, efficient, and responsible manner. However, the objective did not define how effectiveness or efficiency would be measured. Without measurable goals and objectives, The Veterans Corporation will have difficulty ensuring and demonstrating the success of its programs. Additionally, most of the goals and objectives in The Veterans Corporation’s strategic planning documents were process-oriented, rather than outcome-oriented. Thus, they tended to focus on program outputs and activities, such as the number of veterans receiving training, rather than on their impact on veterans, such as the number of new businesses opened by program participants or the amount of revenue generated by veteran-owned businesses. For example, one goal was to develop and implement programs that provide veterans access to knowledge, tools, and resources necessary to succeed in their entrepreneurial efforts. However, there were no performance measures to gauge how well the programs are providing necessary tools and resources or whether those resources are helping veterans succeed in their businesses. Without outcome-oriented goals, The Veterans Corporation will have difficulty demonstrating that achieving all its goals and objectives would lead to the fulfillment of its mission to assist veteran entrepreneurs.
We also reviewed examples of measurable, outcome-oriented performance measures from federal agencies such as the Department of Transportation (DOT) and the Social Security Administration (SSA). For instance, one of DOT’s goals was to reduce highway fatalities to not more than 1.0 per 100 million vehicle-miles traveled by 2008. Similarly, by 2008 SSA intends to increase the number of disability beneficiaries who achieve employment by 50 percent from 2001 levels. Both of these goals focus on outcomes for citizens rather than on program activities, and both provide a measurable point of success. The chair of The Veterans Corporation’s board of directors acknowledged that The Veterans Corporation would need to do more work to develop outcome-oriented performance measures. Other officials from The Veterans Corporation noted that many of their programs were too new for their effects on veteran entrepreneurs to be apparent. However, the strategic plan, which spans fiscal years 2004 to 2008, also has not defined which outcomes will be measured when data become available. At the time of our review, The Veterans Corporation was beginning to collect outcome data for the first three groups of participants in the VET program, including surveys of former participants to determine if they began a new business or expanded their existing business after taking the training. These data should prove helpful in determining the outcomes of this program, but the strategic plan contained no performance measures against which to measure these statistics. Officials from The Veterans Corporation told us that they were monitoring the performance of the affinity programs by counting the number of veterans who signed up for services. They said that service usage was an indication that the services were of value to veterans. The Veterans Corporation’s fiscal year 2003 annual report to Congress consisted of descriptions of programs, along with some data on the growth of programs.
It lacked any estimate of the benefits these programs provided to veteran entrepreneurs. Thus, it was output-, rather than outcome-oriented. Also, in reviewing The Veterans Corporation’s annual report to Congress for fiscal year 2003, we found that The Veterans Corporation had not incorporated its plans for, and progress toward, self-sufficiency in this report. (We discuss the self-sufficiency plan in greater detail later in this report.) According to the Government Performance and Results Act, an annual report to Congress should include a clear demonstration of how the corporate goals are aligned to the mission of the organization, the year’s performance targets, whether the targets were met, and explanations and plans for corrective action when targets were not met. Additionally, management provided monthly activity reports to the board of directors to help them fulfill their responsibilities for oversight, guidance, and direction. According to the board chair, the board also used the information contained in the activity reports to inform congressional committees of corporate activities. The chair added that the activity report format evolved over the last year to also include a snapshot of the cumulative results of several efforts. For example, the July 2004 monthly activity report indicated that since its inception the Veterans Small Business Finance program had generated 473 applications and had 9 loans approved. Federal appropriations have been The Veterans Corporation’s primary source of funding. The Veterans Corporation used approximately $3.3 million in federal appropriations in fiscal year 2003 to cover expenditures related to paying for salaries and professional services and establishing and operating programs. More specifically, payments for salaries and professional services accounted for 58 percent of its expenditures in that year, and program-related activities accounted for the rest.
Additionally, as revenue levels from other sources declined, The Veterans Corporation used proportionately more federal money for fiscal year 2003 expenditures. Appendix III provides more detail on The Veterans Corporation’s revenue and expenses for fiscal years 2002 and 2003. During fiscal year 2003, The Veterans Corporation’s primary source of funding was federal appropriations. As of September 30, 2002, The Veterans Corporation had about $3.3 million of unexpended appropriations available for future spending. Because The Veterans Corporation’s federal appropriations are provided on a “no year” basis, this amount was carried forward to apply to expenses in future fiscal years. In addition, during fiscal year 2003, The Veterans Corporation received $2 million in appropriations. Of the nearly $5.3 million available, it used approximately $3.3 million of its federal funds. It used approximately $1.9 million (58 percent) of the $3.3 million to pay for salaries and professional services to establish and run programs. Executive salaries at The Veterans Corporation in fiscal year 2003 appeared consistent with the information sources it consulted regarding salaries at other organizations, although these sources did not provide information on comparable organizations. For example, The Veterans Corporation used federal pay schedules and Web sites such as Salary.com, which rarely distinguished between nonprofit and for-profit positions. Additionally, the Internal Revenue Service applies three conditions when evaluating whether nonprofit salaries are reasonable: (1) approval by a board of directors that does not have a conflict of interest with respect to the compensation arrangement, (2) reliance on comparable data such as salary surveys, and (3) adequate documentation of the basis for the determination.
Although The Veterans Corporation fulfilled these requirements to some extent, it relied on data that were not entirely comparable and also did not fully document the basis for its decisions. The salary that The Veterans Corporation paid in fiscal year 2003 to the previous CEO was somewhat higher than the range of salaries suggested by its information sources. The salaries of other positions we evaluated, including Director of Information Systems and Program Director, were within the ranges suggested by these sources. Caution is advisable when evaluating the appropriateness of salaries paid by The Veterans Corporation because of the wide variation among nonprofits and a lack of data on relevant variables. In conducting this work, we spoke with representatives of the Center on Nonprofits and Philanthropy of the Urban Institute, who referred us to a November 2001 study on executive compensation in the nonprofit sector. The study concluded that salary determinations are defined largely by the characteristics and circumstances of individual nonprofits. The study also stated that the variability in nonprofit wages and benefits suggests that generalizations concerning compensation patterns are difficult. To accurately assess the reasonableness of The Veterans Corporation’s executive compensation, it would be necessary to obtain information from organizations of the same type (for example, religion-based versus nonreligion-based), size, activities, and sources of revenue (for example, fees versus donations). As noted previously, the data sources used by The Veterans Corporation were limited in terms of information on comparable organizations. We were also unable to locate reliable sources of data on the compensation paid by similar organizations for positions with comparable duties. The Veterans Corporation has made several recent changes in staffing that should reduce its expenses for executive compensation.
In our previous report, we stated that the total compensation, including salary and bonus, paid to The Veterans Corporation’s executive management in fiscal year 2002 was $694,500. In fiscal year 2003, this amount was reduced to $446,488. In fiscal year 2004, The Veterans Corporation eliminated one executive position by consolidating the CEO and chief financial officer positions. As a result, The Veterans Corporation projects that executive compensation will total $310,153 for fiscal year 2004, a reduction of $136,335 from the previous year’s expenditures. In fiscal year 2003, The Veterans Corporation spent about $652,000 for professional services, of which $270,000 went to fund-raising consultants. The primary fund-raising organization for The Veterans Corporation, Changing Our World, received about $208,000. In fiscal year 2003, Changing Our World, along with other fund-raising consultants, raised approximately $258,000 in contributed cash and pledges, which fell significantly short of the $1.3 million goal for fund-raising. At the time of our review, The Veterans Corporation had not renewed its agreement with Changing Our World for fiscal year 2004. Veterans Corporation officials told us they decided to change their fund-raising strategy from focusing on corporations and foundations to focusing on wealthy individuals, ideally veterans themselves. In May 2004, The Veterans Corporation also hired a new staff member to conduct fund-raising. Expenses for program activities related primarily to the VET and Veterans Marketplace programs. Of the $1.4 million spent on program-related activities, approximately $551,000 represented costs for the 506 participants in the VET program. Of the 506 participants, 300 graduated from the program during fiscal year 2003 at a cost of about $1,382 per graduate. Nonrecurring start-up expenses such as consulting, advertising, and staff training accounted for about $114,000 of VET program costs.
The Veterans Corporation also provided each VET program graduate with a voucher, good for the purchase of a computer or business tools, which accounted for approximately $217,000 of the program costs. The officials stated that The Veterans Corporation offered the vouchers to veterans in order to be competitive in the market. They added that its per-participant cost was lower than that of other small business courses being offered, yet none of its competitors offered vouchers as a benefit. One official further stated that because participant costs were low, The Veterans Corporation could probably compete without offering the computer vouchers. At the time of our review, The Veterans Corporation was performing a market analysis to determine whether to raise the participants’ share of the cost. According to one official, The Veterans Corporation contacted two of its VET program facilitators to discuss increasing the participants’ share of the cost or reducing the amount of the vouchers. As a result of this analysis, management has proposed reducing the amount of the voucher in its fiscal year 2005 budget. A corporation official stated that the board is not expected to vote on the proposed fiscal year 2005 budget until November 2004. The $250,000 expense charged to the Veterans Marketplace program represented a negotiated annual licensing fee paid to Perfect Commerce, its partner organization. A Veterans Corporation official pointed out that this fee would be reduced to $100,000 in fiscal year 2004 and $50,000 in fiscal year 2005, at which time The Veterans Corporation could renew its contract with Perfect Commerce. While the Veterans Marketplace was established as a way of generating revenue, the revenue it generated in fiscal year 2003 was negligible. Veterans Corporation officials stated that they were still in the process of building a directory of veteran-owned businesses that could provide goods and services through this effort.
(Later in this report, we discuss the Veterans Marketplace as a component of the self-sufficiency plan). As shown in table 1, The Veterans Corporation’s expenses decreased in fiscal year 2003, primarily due to a reduction in the Veterans Marketplace fee. Figure 3 shows The Veterans Corporation’s expenses for both fiscal years 2002 and 2003 by function (program, fund-raising, and administrative). Financial reporting under U.S. generally accepted accounting principles requires reporting expenses by type and function. Most of The Veterans Corporation’s federally funded functional expenses pertained to program activities—64 and 72 percent for fiscal years 2002 and 2003, respectively. Fund-raising costs represented 13 percent of total expenses in both fiscal years. Administrative costs were 23 percent of the total for fiscal year 2002, primarily for salaries and board expense, and 15 percent of total expenses for fiscal year 2003, primarily for salaries and rent expenses. As mentioned in our earlier report, the amount of program activity expenses relative to total expenses increased, and the ratio of administrative expenses to total expenses decreased. Beginning in fiscal year 2002, The Veterans Corporation recognized revenue (income) from sources other than federal appropriations and interest income; however, such revenues declined significantly in fiscal year 2003. While contract and other revenue increased, total revenue declined due to a reduction in revenue from donated pledges and contributed services. Specifically, The Veterans Corporation generated approximately $45,000 from SBA, $184,000 from the VET program, and $4,000 in other funds. However, while cash contributions were higher in fiscal year 2003 than in fiscal year 2002, pledges and contributed services were significantly lower than in fiscal year 2002. 
The Veterans Corporation recognized approximately $258,000 in cash contributions and pledges and approximately $440,000 in contributed services and in-kind contributions as revenue. Of the $258,000, $157,000 was cash, and $101,000 was pledges for future payments of cash. As a result of the decline in revenue from other sources, the $3.3 million of federal appropriations used in fiscal year 2003 made up approximately 78 percent of The Veterans Corporation’s $4.3 million in total revenues. Figure 4 shows The Veterans Corporation’s revenue for fiscal year 2003, exclusive of federally appropriated funds and interest earned on those funds. The Veterans Corporation continues to face several challenges in achieving financial self-sufficiency. To address some of these difficulties, The Veterans Corporation has revised its plan to become financially self-sufficient. In its current plan, The Veterans Corporation has pushed back its estimated date for becoming self-sufficient from fiscal year 2004 to 2009 and based its revenue assumptions on three major sources—an electronic marketplace for veteran-owned goods and services; affinity programs including a credit card, loans, and insurance offered to veteran-owned businesses; and fund-raising. However, the self-sufficiency plan was not comprehensive in that it did not contain meaningful information on the key assumptions (such as the basis for each revenue component) underlying its revenue projections. Moreover, The Veterans Corporation faces a number of obstacles in meeting this goal, including (1) identifying a sufficient number of veteran-owned businesses, (2) successfully marketing its services to this group, and (3) meeting overall fund-raising goals. For example, although The Veterans Corporation raised about $1 million to meet its mandated matching requirement in fiscal year 2003, it did so by combining the $1 million with excess matching funds generated in the prior fiscal year. 
Additionally, The Veterans Corporation officials indicated that the recent Department of Justice opinion on the organization’s legal status would likely affect its self-sufficiency goals. The Act requires that The Veterans Corporation implement a plan to generate private funds and become a self-sustaining corporation. Since our last report, The Veterans Corporation has revised its self-sufficiency plan. First, The Veterans Corporation pushed back the estimated date for achieving financial self-sufficiency from fiscal year 2004 to fiscal year 2009, based on lower-than-anticipated program revenues. Second, the revised plan also assumed an additional $2 million in federal appropriations in fiscal year 2005. Veterans Corporation officials explained that legislation that would provide these additional funds was under congressional review. According to the plan, this additional revenue would allow The Veterans Corporation to build a database of veteran-owned businesses to which to market its services; the database is one of the key revenue generators presented in the plan. Third, to help assure future sustainability, The Veterans Corporation officials stated that they also were planning to substantially reduce overall expenses by about $500,000 beginning in fiscal year 2005. The officials added that the reduction would be accomplished through eliminating or scaling back certain positions and reducing travel-related expenses. Finally, The Veterans Corporation has changed the self-sufficiency plan to focus on three major sources of revenue, from which it expected to generate about $2.3 million in fiscal year 2009. Figure 5 shows The Veterans Corporation’s projected revenue by sources for fiscal year 2009. The self-sufficiency plan indicated that The Veterans Corporation is expected to have a positive cash flow of about $81,000 in fiscal year 2009. Veterans Marketplace. 
In fiscal year 2009, The Veterans Corporation expects that approximately 14 percent of its total revenue, or approximately $319,000, will come from the Veterans Marketplace. As described earlier in this report, the Marketplace has expanded into two components in which veteran-owned businesses would supply goods and services through an electronic format to government entities and private businesses. The Veterans Corporation plans to earn income from this effort through a revenue-sharing agreement with Perfect Commerce that is partly based on volume of transactions and online purchases. Affinity programs. These are programs that provide business services to veteran-owned businesses and from which The Veterans Corporation receives a commission based on sales volume. The affinity programs include: The Veterans Corporation Platinum BusinessCard. About 9 percent of fiscal year 2009 revenue, or about $200,000, would come from the credit card program. More specifically, the revenue would be generated from each newly activated account, as well as a share (0.2 percent) of eligible purchases made with the card. The Veterans Small Business Finance program. The single largest source of revenue—approximately 33 percent, or about $750,000, for fiscal year 2009—is expected to come from loans made to small businesses through a partner organization, Newtek Small Business Finance, Inc. These SBA-guaranteed loans range from $50,000 to $2 million and are made to qualified businesses nationwide. The Veterans Corporation receives 37.5 basis points on all disbursed loans. The Veterans Insurance program. Approximately 7 percent of revenue, or $150,000 for fiscal year 2009, would come from sales of business insurance and other products to veteran-owned businesses. The Veterans Corporation would receive commissions or fees, which are structured differently for each insurance product, as outlined in its agreement with Aon Financial Institution Alliance. Other efforts. 
Other services are expected to account for an additional 15 percent of total revenues, approximately $350,000, in fiscal year 2009. These services include tax, accounting, and merchant services for small businesses and subscription sales for a bid-and-response system for veteran-owned business contracts. Fund-raising. In fiscal year 2009, fund-raising is expected to account for 23 percent of revenue, which totals about $532,000 and includes interest income. However, the self-sufficiency plan did not incorporate all of the funds The Veterans Corporation will raise. The self-sufficiency plan included only the portion retained for overhead costs, 15 percent of funds raised. The Veterans Corporation’s revenue-generating strategy relies to a great extent on first identifying the veteran-owned business population and then successfully marketing its services to this population. The Veterans Corporation officials explained that its self-sufficiency strategy is modeled on AARP, an organization that markets goods and services to a specific population (affinity group) and receives commissions based on sales volume. The Veterans Corporation’s goal to achieve financial self-sufficiency in fiscal year 2009 is based on identifying about 250,000-300,000 veteran-owned businesses. An official at The Veterans Corporation told us that this number was based on industry guidance for an e-commerce business and represents the minimum number needed to successfully market its products. As of June 2004, The Veterans Corporation had identified approximately 12,000 veteran-owned and potential veteran businesses, which represented about 5 percent of its goal. As previously reported, The Veterans Corporation has been unsuccessful in obtaining names of veteran-owned businesses through government sources. 
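The projected fiscal year 2009 revenue mix described above can be tallied to confirm the reported percentages. This is a sketch using the report’s rounded dollar figures; the implied loan volume at the end is our inference from the 37.5-basis-point commission, not a figure from the plan.

```python
# Projected FY2009 revenue by source (rounded figures as reported).
projected = {
    "Veterans Marketplace": 319_000,            # reported as ~14%
    "Platinum BusinessCard": 200_000,           # reported as ~9%
    "Small Business Finance (loans)": 750_000,  # reported as ~33%
    "Insurance program": 150_000,               # reported as ~7%
    "Other efforts": 350_000,                   # reported as ~15%
    "Fund-raising (incl. interest)": 532_000,   # reported as ~23%
}

total = sum(projected.values())
print(f"total projected revenue: ${total:,}")   # $2,301,000, i.e., ~$2.3 million
for source, amount in projected.items():
    print(f"  {source}: {amount / total:.0%}")

# The 37.5-basis-point loan commission implies the disbursed-loan volume
# needed to reach the finance-program target (an inference, not a plan figure).
implied_loan_volume = projected["Small Business Finance (loans)"] / 0.00375
print(f"implied disbursed loans: ${implied_loan_volume:,.0f}")  # $200,000,000
```

The rounded figures sum to about $2.3 million, and each source’s share matches the percentage reported in the plan.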
Earlier in this report, we discussed The Veterans Corporation’s difficulty in identifying its affinity group, the population of veteran-owned businesses—an ongoing problem because of privacy concerns among federal agencies. According to a Veterans Corporation official, the success of its marketing programs (and thus the key to its financial self-sufficiency) was dependent on its ability to identify and reach transitioning service members and veteran-owned businesses to market its products. As a result, The Veterans Corporation has developed another strategy in an effort to identify this population. To assist in identifying veteran business owners, in June 2004 The Veterans Corporation began testing a new marketing strategy with the help of Mal Dunn Associates, a direct marketing firm. Using Mal Dunn Associates’ database, The Veterans Corporation is soliciting about 15,000 veterans through e-mail, telemarketing, and direct mail. According to an official at The Veterans Corporation, the testing of the marketing strategy indicated that the direct mail approach was the most effective way of gaining new members, and they plan to meet with Mal Dunn Associates in September 2004 to discuss the possibility of performing some additional market analysis. An official at The Veterans Corporation further explained that the goal of at least 250,000 names was based on developing an affinity relationship through The Veterans Corporation’s Web site membership. This would require identifying a larger population, since not all veteran-owned businesses identified would choose to sign up for membership through the Web site. The Veterans Corporation’s self-sufficiency plan estimated that developing the veteran-owned business database would take a minimum of 24 months; however, the plan did not show the relationships between the growth of its membership and its revenue projections for any given year. 
Further, because this marketing effort was still in its testing stages, it would be difficult to predict the rate at which The Veterans Corporation could increase membership or the overall success of the effort. Veterans Corporation officials explained that the development speed also depended on the level of financial commitment made to the marketing efforts that would identify this population. The Veterans Corporation recognized that in addition to building a database of veteran-owned businesses, the organization must also successfully market its services to this group. An official at The Veterans Corporation told us that before developing services, the corporation would first consult with both public and private veteran service organizations to identify needs. However, the extent to which veteran-owned businesses would utilize The Veterans Corporation’s products and services is not fully known because of its limited marketing efforts to this population to date. As mentioned previously, The Veterans Corporation has identified and marketed its products to about 12,000 veteran-owned and potential veteran businesses, which represents about 5 percent of its targeted goal. For instance, the Veterans Marketplace, an electronic exchange of veteran-owned goods and services, has yet to produce any meaningful revenue. Although two pilots are currently under way, The Veterans Corporation officials explained that they were reluctant to aggressively market this effort until they had a sufficient variety of veteran-owned suppliers to meet the production needs of potential buyers. The officials also pointed out that the Veterans Insurance program had not realized much revenue to date and attributed this to an inability to offer a wider range of products, such as a pool for businesses to self-insure their workers’ compensation or health insurance plans. 
The Veterans Corporation’s enabling legislation requires it to match on a dollar-for-dollar basis the $2 million it received in federal appropriations for fiscal years 2003 and 2004. In fiscal year 2003, The Veterans Corporation generated about $1 million in matching funds, of which about $698,000 was in the form of cash contributions and in-kind contributions. The Act, however, does not specify how or when the funds are to be generated. Thus, to meet the matching requirement for 2003, The Veterans Corporation applied funds generated in the prior fiscal year, about $1 million, which represented the excess of its fiscal year 2002 match. Additionally, The Veterans Corporation officials told us that they have had difficulty in meeting their overall fund-raising goals, which are intended to cover the expenses of the VET program, CBO activities, and overhead costs identified in the self-sufficiency plan. Initially, Veterans Corporation officials attributed this difficulty to economic downturns during the first years of the organization’s existence, which resulted in an overall reduction in financial donations to charitable organizations. The officials also cited difficulties convincing private corporations to donate to veteran causes because of a widely held belief that the federal government was taking care of veterans financially. However, since our last report, The Veterans Corporation has formed a fund-raising advisory board of 23 individuals. The Veterans Corporation’s current strategy is to focus more on wealthy veteran entrepreneurs who can identify with other veterans, and less on corporations and foundations. In May 2004, The Veterans Corporation refocused its fund-raising effort, releasing its outside consultant and hiring an in-house fund-raising staff. Veterans Corporation officials acknowledged that the recent change to an in-house fund-raiser has had an impact on its overall fund-raising goals for fiscal year 2004. 
The Veterans Corporation has a fund-raising goal of $2.5 million in cash and pledges, with at least $1.4 million in cash for fiscal year 2004. As of June 30, 2004, it did not appear that The Veterans Corporation was going to be able to raise funds sufficient to match the $2 million made available under the matching fund certification provision. As of that date, The Veterans Corporation had generated nearly $296,000 in nonfederal dollars, which included $172,000 in cash and in-kind donations. The corporation’s inability to raise the certification amount is something that Congress could take into consideration in any future appropriation. The Veterans Corporation’s corporate management indicated that they did not know the full impact of the Department of Justice’s recent legal opinion (that is, that the corporation is a “government corporation” and an “agency”)—specifically the cost and burden of complying with federal administrative laws applicable to the corporation because of its status as an agency. However, the officials believed that the corporation’s self-sufficiency efforts likely would be significantly slowed or even stopped if the corporation were subject to such requirements. As mentioned previously, OMB has notified The Veterans Corporation that it was required to comply with laws, regulations, and guidance applicable to all executive branch agencies (unless specifically exempt), including OPM requirements on reporting government employees, laws pertaining to federal employees, and budget and accounting requirements. This, according to The Veterans Corporation, likely would result in a significant increase in workload and expenses. Further, the officials stated that the corporation’s status as an agency could raise uncertainties about its ability to raise private funds as a source of revenues because of restrictions on federal agency collection and use of nonappropriated funds. 
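The matching-fund arithmetic above can be laid out explicitly. This sketch simply restates the report’s approximate tallies for fiscal years 2003 and 2004; no figure here goes beyond what is reported.

```python
# Dollar-for-dollar matching requirement on the $2 million federal appropriation.
requirement = 2_000_000

# FY2003: the match was met only by combining new funds with the excess
# carried over from the FY2002 match (approximate figures as reported).
fy2003_raised = 1_000_000   # includes ~$698,000 in cash and in-kind contributions
fy2002_excess = 1_000_000   # surplus from the fiscal year 2002 match
print(fy2003_raised + fy2002_excess >= requirement)   # True

# FY2004 (through June 30, 2004): far short of the certification amount.
fy2004_raised = 296_000     # includes $172,000 in cash and in-kind donations
print(f"FY2004 gap to the $2 million match: ${requirement - fy2004_raised:,}")  # $1,704,000
```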
At the time of our review, The Veterans Corporation was seeking legislative relief to address this issue. We spoke to representatives of the Urban Institute’s Center on Nonprofits and Philanthropy to gain insight on how The Veterans Corporation compares with similar nonprofits. Based on information we provided on The Veterans Corporation’s mission, business model, and self-sufficiency projections, the representatives provided some of their perspectives. First, they pointed out that a private business model such as that of The Veterans Corporation is riskier than organizational models based on receiving a government subsidy. Second, in terms of The Veterans Corporation’s self-sufficiency projections, including revisions, they questioned its ability to generate $2 million annually without federal appropriations and stated that it was a daunting, if not unlikely, task, given its mission to create and implement a new business model. However, they suggested that programs that provide veteran entrepreneurial services could be considered a public good and that, even if they did not become self-supporting, their purpose and public benefits might justify both public and private financial contributions. We believe these observations are pertinent and important considerations that might provide Congress with some additional insight as The Veterans Corporation strives to serve veteran entrepreneurs and achieve financial self-sufficiency. While we discussed The Veterans Corporation’s strategic planning and reporting efforts earlier in this report, self-sufficiency was also one of the corporate goals identified in its strategic plan. However, in reviewing The Veterans Corporation’s annual report to Congress for fiscal year 2003, we found that The Veterans Corporation did not incorporate its plans for and progress toward self-sufficiency in this report. 
According to The Veterans Corporation’s board, the March 2004 revision of the self-sufficiency plan was the first version that contained written information about programs and other activities and plans for generating revenue to achieve self-sufficiency. Prior to this, the plan only provided revenue projections. The Veterans Corporation’s current self-sufficiency plan has been revised but still did not contain meaningful information on key assumptions addressing program outcomes, participation, and revenues. For example, the current self-sufficiency plan did not provide information on the basis for expected revenue growth for its affinity programs—credit cards, loans, insurance, and other efforts involving private-sector partners—which is expected to go from $33,000 in fiscal year 2004 to $1,450,000 in fiscal year 2009. The current self-sufficiency plan also did not discuss how The Veterans Corporation would build a database of sufficient size to market its services. As we previously discussed, The Veterans Corporation has faced continued challenges in building its database and is testing an approach that may or may not be successful. However, the self-sufficiency plan did not contain alternative scenarios that would allow a more comprehensive understanding of the reasonableness of its projections. As a result, an annual report to Congress derived from the current plan would still lack information that could assist Congress in its oversight of The Veterans Corporation and help it to obtain a clearer picture of the organization’s progress toward achieving its self-sufficiency mandate. In creating The Veterans Corporation, Congress created broad mandates for the organization to address. The Veterans Corporation is working to fulfill its business training and assistance mandates by starting new programs and expanding and refocusing others to serve veteran entrepreneurs. 
However, The Veterans Corporation continues to face challenges—such as making the PCAB operational, identifying the veteran population through government sources, and addressing concerns related to its legal status—that have hindered its initial progress in marketing services to veteran businesses and working with public and private entities. Moreover, The Veterans Corporation was not effectively utilizing operational controls—that is, those policies and procedures that would allow it to obtain the reliable and timely information necessary to achieve intended results and goals. For instance, measurable, outcome-oriented goals and objectives in the strategic plan and annual report could help staff to track the performance of their programs, make improvements from year to year, and ensure that their programs succeed and remain aligned with their corporate mission. Furthermore, outcome-oriented goals would improve The Veterans Corporation’s reports to Congress and the public by providing clear evidence that the mission of the organization was being accomplished. Because The Veterans Corporation is a nonprofit organization with limited sources of income, finding opportunities to reduce expenses also would benefit the organization as a whole. We recognize that The Veterans Corporation has not yet realized significant revenue from its programs; however, we note that it could reduce expenses in some programs. A key program, the VET program, is an expense-related activity rather than a revenue-generating activity. In fiscal year 2003, The Veterans Corporation spent approximately $1,382 for each veteran who graduated from the program; more than half of this amount went for vouchers, provided to each course graduate, to purchase a computer or business tools. Providing each course graduate with such a voucher increases the cost to the organization of operating the program and could deny The Veterans Corporation added funds to enhance course offerings or marketing. 
While The Veterans Corporation’s revised financial self-sufficiency plan indicated it should reach its goal of self-sufficiency by fiscal year 2009, it is only a predictor of what could occur based on several key assumptions. Those assumptions were difficult to assess because the plan was not comprehensive in that it did not contain meaningful information on the key assumptions underlying revenue projections. Moreover, the plan is fundamentally dependent on the ability to build a database of veteran-owned businesses and then successfully market The Veterans Corporation’s products and services to that population—goals that will not be easy to achieve, given the challenges we have described in this and our previous report. Finally, including information about self-sufficiency in its annual reports to Congress would help Congress better assess the progress of the organization and make informed decisions about the future of The Veterans Corporation. We recommend that the Chairman of the Board of Directors for The Veterans Corporation and its staff take the following three actions: To help guide programs and measure their effectiveness, develop measurable, outcome-oriented goals and objectives that take into account the increasing availability of outcome data over time. To potentially reduce overall expenses and aid in efforts to achieve self-sufficiency, analyze the extent to which The Veterans Corporation could reduce or eliminate the amount of the voucher given to graduates of its VET program without undermining demand for the program. To improve congressional oversight, include in its annual report to Congress comprehensive information and data relating to progress in achieving financial self-sufficiency, and key assumptions underlying self-sufficiency revenue projections. We requested and obtained comments on a draft of this report from the President and Chief Executive Officer of The Veterans Corporation that are reprinted in appendix IV. 
We also provided a draft of this report to DOD, SBA, and VA. We received technical comments from The Veterans Corporation and SBA that we incorporated where appropriate. While The Veterans Corporation had no objections to our recommendations, it offered information that it believed would explain, clarify, or correct points made in the draft report. First, on the extent of duplication between The Veterans Corporation’s Business Directory and VA’s VetBiz Vendor Information Pages, The Veterans Corporation stated that, although we accurately portrayed many of the differences between the two sources, we understated the importance of its directory to its operations and self-sufficiency efforts. The Veterans Corporation also stated that it believed that, as the two databases grew, they would continue to differ in their composition, customers, and beneficiaries. In our report, we acknowledged that both The Veterans Corporation and VA had different motivations for creating their directories. We also stated that the directories were of similar size, were developed from similar information sources, and employed similar methods to identify and register veteran-owned businesses on their sites. The second and third points were directed toward strategic planning. In the second point on developing outcome-oriented metrics that could be used in reporting to Congress, The Veterans Corporation indicated that it had not been in existence long enough to determine whether its programs and services were helping businesses owned by veterans and service-disabled veterans. It also indicated it had begun the process of developing metrics in some areas that, over time, would help it build outcome-oriented performance measures into its reporting. Although many of its programs are still getting under way, we believe it is useful to identify and articulate specific metrics as an operational control and as a means to evaluate the benefits being provided by The Veterans Corporation to veterans. 
This information would also help The Veterans Corporation ensure that the correct data will be collected in the future. In the third point, The Veterans Corporation indicated that it believes that the strategic goals set by the board should not be outcome-specific, as they were meant to provide a general framework for the corporation. The staff, however, are required to develop specific objectives, initiatives, and performance metrics in support of the strategic goals. We do not intend to imply that these goals are necessarily the responsibility of the board, but we do believe they need to exist somewhere in the strategic plan and that they should form the basis for the annual report, both to provide Congress with better accountability and The Veterans Corporation with a better mechanism for demonstrating organizational effectiveness and outcomes for veterans. Finally, in its fourth point, The Veterans Corporation indicated that, although its long-term survival was not guaranteed, it believed that its strategy was sound and that sound execution of its plan would result in achieving its self-sufficiency goal. We focused our analysis on the current state of federal funding, The Veterans Corporation’s self-sufficiency projections, and the likelihood that the funding would no longer be needed, based on our belief that Congress would want to consider different perspectives on The Veterans Corporation’s ability to become self-sufficient, particularly in the event that The Veterans Corporation’s self-sufficiency projections were revised again. With this focus, we have concluded that there is a reasonable amount of uncertainty regarding The Veterans Corporation’s attainment of self-sufficiency. 
The uncertainty about self-sufficiency is reflected in The Veterans Corporation’s revision in its target date for achieving financial self-sufficiency from fiscal year 2004, as we reported in April 2003, to fiscal year 2009—with the addition of $2 million of federal appropriations in fiscal year 2005. As noted in this report, The Veterans Corporation faces several challenges in its efforts to become a self-sustaining organization. Notably, its self-sufficiency plan, which did not contain meaningful information on the key assumptions underlying revenue projections, is dependent on building an extensive database of veteran-owned businesses and marketing its services effectively to this population. Based on our analysis of these challenges, it is not certain whether The Veterans Corporation’s current estimate for achieving self-sufficiency will be met as planned. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Ranking Minority Members of the Senate Committee on Small Business and Entrepreneurship, the House Committee on Small Business, the Senate and House Committees on Veterans’ Affairs, and other appropriate congressional committees. We also will send copies to the President and CEO of The Veterans Corporation; the Administrator of SBA; and the Secretaries of the Departments of Veterans Affairs, Defense, and Labor. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. To evaluate The Veterans Corporation’s efforts in providing small business assistance to veterans, we collected and analyzed program information such as planning documents, contracts, legal opinions, program literature, and activity reports. 
Additionally, we interviewed staff and board officials from The Veterans Corporation, as well as partnering organizations including officials from Perfect Commerce and Aon Financial Institution Alliance. We also interviewed officials from federal agencies including the Small Business Administration, Department of Defense, Department of Veterans Affairs, and Department of Labor, and officials from two veteran service organizations—the Vietnam Veterans of America and the American Legion. We also reviewed program information and Web sites of these organizations. To evaluate The Veterans Corporation’s internal controls including strategic planning and its use of federal funds, we obtained and analyzed The Veterans Corporation’s fiscal year 2003 financial statements, audit reports, and management letter for 2003; we did not evaluate the quality of the external auditor’s work on the financial statement or conduct our own tests of the financial statement balances; analyzed 10 functional expenses to determine the nature of the expense and a description of how the expense benefited The Veterans Corporation; obtained and reviewed The Veterans Corporation’s check registers; reviewed The Veterans Corporation’s contract with the external auditor responsible for the 2003 financial statement audit to understand the nature of the audit services to be provided and what work the auditor proposed to assess internal controls; communicated with The Veterans Corporation’s external auditor to determine the audit procedures performed to assess internal controls during its audit of The Veterans Corporation; obtained and reviewed minutes of meetings of the board of directors and the board’s executive committee to determine the board’s policies as they related to the disbursement and use of federal funds; interviewed The Veterans Corporation’s Chief Executive Officer/Chief Financial Officer, Senior Vice President, and staff to obtain an understanding of internal controls related to cash 
disbursements; tested relevant internal controls over cash disbursements to determine if the controls were operating effectively; interviewed members of the board of directors to determine the board’s oversight roles and responsibilities; reviewed The Veterans Corporation’s planning and reporting consulted with government and nonprofit strategic planning experts; reviewed strategic planning literature; gathered and analyzed salary surveys and literature about nonprofit interviewed representatives from the Urban Institute’s Center on Nonprofits and Philanthropy to discuss executive compensation in the nonprofit sector. To evaluate The Veterans Corporation efforts to become financially self- sufficient, we reviewed its self-sufficiency plan and discussed it with the Veterans Corporation’s Chief Executive Officer/Chief Financial Officer, Senior Vice President and members of its board of directors. We also spoke to representatives of the Urban Institute’s Center on Nonprofits and Philanthropy to gain insight on how The Veterans Corporation compares with similar nonprofits. We did not independently assess the financial assumptions presented in the self-sufficiency plan. We conducted our work between December 2003 and July 2004 in accordance with generally accepted government auditing standards in Washington, D.C.; San Francisco, California; and Alexandria, Virginia. Assist veterans, including service-disabled veterans, with the formation and expansion of small businesses. Organize public and private resources, including those of federal agencies. Establish and maintain a network of information and assistance centers for use by veterans and the public. Establish Professional Certification Advisory Board. Assume duties, responsibility, and authority of the Advisory Committee on Veterans Affairs on October 1, 2004. Institute and implement a fund-raising and self-sufficiency plan. Raise matching funds to fulfill conditions for receipt of federal funds. 
- transmit an annual report to the President and to Congress; and
- have its board of directors conduct oversight of the corporation’s obligations and expenses.

As noted in table 2, The Veterans Corporation received federal appropriations of $4 million in fiscal year 2002 and $2 million in fiscal year 2003; it also had unexpended appropriations available from prior years. The Veterans Corporation used approximately $3.7 million and $3.3 million in fiscal years 2002 and 2003, respectively, leaving balances of approximately $3.3 million and $1.9 million in unexpended appropriations at the end of those fiscal years. As shown in table 3, federal appropriations were the major source of revenue for The Veterans Corporation in fiscal years 2002 and 2003. In fiscal year 2003, The Veterans Corporation did not realize much revenue from cash contributions and pledges or from contributed services and in-kind contributions. The Veterans Corporation reported approximately $258,000 in cash contributions and pledges as revenue in 2003. More than half of this revenue, $157,000, was cash contributions. The remaining $101,000, representing pledges that The Veterans Corporation expected to receive in future years, was recorded as contributions receivable at present value, in accordance with U.S. generally accepted accounting principles for not-for-profit organizations. See table 4 for a schedule of The Veterans Corporation’s contributions receivable as of September 30, 2003. Table 5 presents The Veterans Corporation’s federally funded expenses by functional area for fiscal years 2002 and 2003. Expenses related to program activities represent the majority of The Veterans Corporation’s expenses; the percentage of total expenses accounted for by program activities increased, fund-raising expenses remained the same, and administrative expenses decreased. In addition to the persons named above, Elizabeth H. Curda, Janet Fong, Yola Lewis, Brittni Milam, Marc W. Molino, Julie T. Phillips, Barbara M.
Roesmann, and Paul G. Thompson made key contributions to this report.

The Government Accountability Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability.

The National Veterans Business Development Corporation (The Veterans Corporation) was created under Pub. L. No. 106-50 to provide veterans with small business and entrepreneurship assistance. The Act authorized, and Congress has appropriated to the corporation, $12 million in funding over 4 years, ending September 30, 2004. The Act also required that The Veterans Corporation implement a plan to raise private funds and become a self-sustaining corporation. GAO evaluated the corporation’s (1) efforts in providing small business assistance to veterans; (2) internal controls, including strategic planning; and (3) progress in becoming financially self-sufficient. Since GAO’s April 2003 report (GAO-03-434), The Veterans Corporation has continued to expand programs and refocus services in its efforts to provide small business assistance to veterans while achieving financial self-sufficiency.
The centerpiece of The Veterans Corporation’s efforts remains its Veterans Entrepreneurial Training program, which offers classroom instruction to veterans on how to successfully start and expand their own businesses. It also has expanded or added several services, primarily in the areas of finance, accounting, and contracting. However, The Veterans Corporation reported that it continues to face ongoing challenges to fulfilling its mission. These challenges stem from its responsibility for the Professional Certification Advisory Board, difficulties in identifying the veteran-owned business population, and conflicting views about its legal status as a private versus public entity. Additionally, The Veterans Corporation lacked important internal and operational controls. Specifically, its strategic plan and annual report to Congress lacked measurable goals and outcome-oriented measures. Without outcome-oriented measures, such as the number of new veteran-owned businesses or the amount of revenue generated for veteran-owned businesses, it is difficult to determine what impact the programs have had on veterans. In the same vein, without meaningful performance measures, The Veterans Corporation has been unable to provide Congress with significant data on its progress or the outcomes of its efforts. Finally, The Veterans Corporation faces a number of challenges in achieving self-sufficiency. Dramatically lower-than-expected revenues have led the corporation to revise its estimated date for achieving self-sufficiency from fiscal year 2004 to fiscal year 2009. Its self-sufficiency strategy is heavily dependent on its ability to develop a database of veteran-owned businesses and to market its services successfully to these businesses. However, the plan did not discuss how The Veterans Corporation will identify this population, nor did it contain meaningful information on the key assumptions underlying its revenue projections.
As such, it would be difficult for Congress and other stakeholders to judge the reasonableness of the corporation’s estimates and projections.
Soon after the creation of EPA, the library network was formed to provide staff and the public with access to environmental information in support of EPA’s mission to protect human health and the environment. Established in 1971, the network is composed of libraries and repositories located in the agency’s headquarters, regional offices, research centers, and laboratories throughout the country. The combined network collection contains a wide range of general information on environmental protection and management; basic and applied sciences, such as biology, chemistry, engineering, and toxicology; and extensive coverage of topics featured in legislative mandates, such as hazardous waste, drinking water, pollution prevention, and toxic substances. Several of the libraries maintain collections that are focused on special topics to support specific regional or program office projects. As such, the libraries differ in function, scope of collections, extent of services, and public access.

Before the 2007 reorganization, EPA’s library network operations were guided by EPA’s Information Resources Management Policy Manual. Chapter 12 of the policy manual stipulated that the library network provide EPA staff with access to information to carry out the agency’s mission, and that the libraries provide state agencies and the public with access to the library collection. Chapter 12 also established the role of a national program manager, within the Office of Environmental Information (OEI), responsible for coordinating the major activities of the EPA library network. This manager works with the network and its library managers to provide several essential services, such as assessing the needs of program staff and providing services to meet those needs.
Unlike other national program managers at EPA, the national program manager for the library network does not have budget authority for the libraries. Before the 2007 reorganization, the library network comprised 26 libraries, funded and managed by different EPA offices: 1 library was managed by OEI, 10 were managed by regional offices, 8 were located at EPA laboratories within the Office of Research and Development (ORD), and 2 were located within the Office of Administration and Resources Management (OARM). In addition, each of the following program offices had 1 library: the Office of the Administrator, the Office of General Counsel, the Office of Prevention, Pesticides and Toxic Substances (OPPTS), the Office of Enforcement and Compliance Assurance, and the Office of Air and Radiation.

In addition to its physical locations and holdings, the EPA network provides access to its collections through a Web-based database of library holdings—the Online Library System (OLS)—known as EPA’s online “card catalog.” OLS enables EPA staff and the public to search for materials in any of the EPA libraries across the country that are part of the network. According to EPA estimates, the combined EPA collection in 2003 included 504,000 books and reports; 3,500 journals; 25,000 maps; and 3,600,000 information objects on microfilm. If an item is not available on-site to EPA staff or the public, it is made available through interlibrary loan from another library within the network or another public library. Up to 26,000 of these EPA documents are available electronically to EPA staff and the public through a separate online database—the National Environmental Publications Internet Site (NEPIS). In addition, EPA staff have access to over 120,000 information sources—such as online journals, the Federal Register, news, databases of bibliographic information, and article citations—from their desktop computers.
Librarians are available to assist EPA staff and the public, and, as of March 2007, professional librarian staff accounted for just over 36 full-time-equivalent employees. In addition to these 6 federal librarians and 30 contract librarians, several other staff, such as technical specialists and library technicians, also work at the libraries. Library staff provide a number of services to both EPA staff and the public, including (1) support for EPA scientists and technical staff, such as responding to quick and extended reference questions, conducting literature and database searches, and providing training to EPA staff on how to conduct their own searches; (2) support for EPA enforcement staff, such as conducting legal or business research and providing scientific and technical information to support enforcement cases; (3) collection cataloging and maintenance; and (4) support for the general public, such as answering quick and extended reference questions and providing training on how to search EPA databases. In fiscal year 2005, the services provided to EPA staff by librarians at OEI headquarters and regional office libraries included 41,029 quick and extended reference requests, 8,286 interlibrary loans, and 85,226 database and literature searches. These librarians also provided EPA staff with 52,975 resources, such as books and journal articles. Beginning in 2003, EPA conducted a business case assessment of its library network and a study of options for future regional library operations. These two studies, which primarily focused on the OEI headquarters library and the regional office libraries, were intended to determine the value of library services and inform management in the regions of their options to support library services beyond fiscal year 2006.
In August 2005, regional management formed a Library Network Workgroup, composed of regional and headquarters library managers as well as library managers from OARM and the National Environmental Investigations Center libraries, to review the two reports and develop recommendations on ways to maintain an effective library network if the library support budget were reduced. The workgroup issued its internal report, EPA Library Network: Challenges for FY 2007 and Beyond, in November 2005. After the Library Network Workgroup’s report was issued, EPA established a Library Steering Committee, composed of senior managers from EPA’s program offices and regions, to develop a new model for providing library services to EPA staff. As such, the steering committee reviewed the recommendations made by the workgroup and, in August 2006, issued the EPA FY 2007 Library Plan: National Framework for the Headquarters and Regional Libraries. See figure 1 for a timeline of the assessments and planning efforts that EPA conducted and library network reorganization activities. The August 2006 library plan provided the framework for the network to begin reorganizing in the summer of 2006 in preparation for the proposed fiscal year 2007 budget reduction beginning in October 2006. (In September 2004, a Region 2 laboratory library in Edison, New Jersey, closed, and a Region 3 laboratory library in Fort Meade, Maryland, closed access to the public in February 2006.) The plan describes a “phased approach” to disperse and dispose of library materials in the libraries that will close. The plan also provided guidelines for EPA staff to determine how the collections are to be managed. According to the plan, OEI libraries in Regions 5, 6, and 7 would close and the headquarters library would close physical access to its collection but would function as one of three repository libraries. 
OARM libraries located in Cincinnati, Ohio, and Research Triangle Park, North Carolina, would serve as the other two repositories. In addition, according to the plan, EPA is developing Library Centers of Excellence, where a library with more expertise in a specific area of reference research would provide that service to staff in other regions.

Members of Congress and congressional committees, library professional associations, public interest groups, and individuals have expressed several concerns about the reorganization of the library network. Specifically:

- During the reorganization, several Members of Congress submitted letters to EPA and to the President asking to restore funding or asking for specific information regarding the reorganization.
- In a February 2006 letter, representatives of 4 library associations asked the House Committee on Appropriations to restore the budget cuts to the library network and to require EPA to develop an information management strategy.
- In a June 2006 letter, the presidents of 16 local unions, representing over 10,000 EPA scientists, engineers, and environmental protection specialists, protested the budget cut to the library network to the Senate Committee on Appropriations.
- In August 2006, the American Federation of Government Employees National Council of EPA Locals filed a formal grievance, requesting that negotiations be held with the union regarding the library network reorganization.

As a part of EPA’s 2006 reorganization effort, some EPA libraries have closed, reduced their hours of operation, or changed the way that they provide library services. Furthermore, some of these libraries have digitized, dispersed, or disposed of their materials.
As noted in EPA’s August 2006 library plan, the OEI headquarters library and 3 regional office libraries closed; during the same period, 6 other libraries in the network independently decided to change their operations—1 closed, 4 reduced their hours of operation, and 1 changed the way that it provides library services. Sixteen EPA libraries have not changed. During the reorganization effort, each of the libraries in the network made its own decision on how it would manage its collection—some digitized, or have plans to digitize, some of their materials; some dispersed their materials to EPA and non-EPA libraries; and some disposed of their materials. After making these changes, EPA has begun to develop a common set of agencywide policies and procedures for the library network. EPA is waiting to complete these policies and procedures before lifting a moratorium on further change. The future of EPA’s library network—its configuration and its operations—is contingent on the final policies and procedures, on EPA’s response to directions accompanying its fiscal year 2008 appropriation, and on EPA’s 2008 library plan. Due to the decentralized nature of the EPA library network, each library decided on its own whether to close, reduce hours of operation, change the way that it provided library services, or make no changes in order to prepare for a proposed budget reduction. As table 1 shows, 4 libraries—as noted in EPA’s library plan—closed physical access to their libraries. Furthermore, 1 additional library in the network closed, 4 reduced their hours of operation, and 1 changed the way that it provides library services. However, these changes were not noted in EPA’s library reorganization plan. Sixteen libraries in the network did not institute any changes.
EPA’s August 2006 library plan notes that three regional libraries— Regions 5 (Chicago), 6 (Dallas), and 7 (Kansas City)—and the headquarters library in Washington, D.C., would close physical access to their libraries. In addition, OPPTS officials decided to close the Chemical Library; however, this closure was not noted in the plan. According to EPA officials, the plan focused on the OEI headquarters and regional office libraries, and they did not think it was necessary to reflect changes that were planned for other libraries. The focus of the plan, according to EPA officials, was to set the framework on how library services would be provided electronically and not on what physical changes in the network were to occur. Although no longer accessible to walk-in traffic from EPA staff and the public, the closed regional and headquarters libraries continue to provide library services, such as interlibrary loans and research/reference requests, to EPA staff through service agreements that the closed libraries established with libraries managed by OARM—located in Cincinnati, Ohio, or Research Triangle Park, North Carolina—or with the Region 3 library located in Philadelphia, Pennsylvania. Service agreements have been established between (1) the Cincinnati library and Region 5, (2) the Research Triangle Park library and headquarters as well as Regions 6 and 7, and (3) Region 3 and Region 7. According to OPPTS officials, library services are provided to OPPTS staff through a service agreement that the headquarters library has established with the Research Triangle Park library, although OPPTS is not a signatory to the service agreement. The library plan noted that the public would access materials previously held by the closed regional and headquarters libraries, either electronically using NEPIS, a database of electronic EPA publications, or physically using interlibrary loan. For the regional libraries that had closed, their library spaces remain unused. 
The Region 5 library space is empty, with all of its shelving and furniture sold through a General Services Administration (GSA) auction for $327. According to Region 5 officials, the space is occasionally used for meetings, but no plans have been made on how the space will be used. Many of the library materials remain on shelves in Regions 6 and 7 because of the moratorium. According to a Region 7 official, because the library space is not being maintained, some of its shelving has been removed and used for other purposes. EPA officials noted that they plan to use the headquarters and Chemical Library spaces for the headquarters repository, which would house repository materials and the Chemical Library collection (see fig. 2 for a photograph of boxed-up books from the Region 5 library, now located at the headquarters repository library). However, the library space in the Chemical Library is currently being used as office space, although nearly half of the space is devoted to shelving that cannot be removed because it is considered historical. Of the four regional libraries that decided to reduce their hours of operation, Regions 9 and 10 reduced their hours by about 30 percent, and Regions 1 and 2 reduced their hours by more than 50 percent. The library plan did not note that these libraries would be reducing their hours. As we have previously noted, the focus of the plan, according to EPA officials, was to set the framework on how library services would be provided electronically and not on what physical changes in the network were to occur. As such, EPA officials stated that they did not think it was necessary to list in the plan which libraries were planning on reducing hours. Also, as noted in table 1, the Region 4 library changed the way that it provided library services to its regional staff. 
While the library is accessible to EPA staff and the public, and materials remain in place, the library reduced the number of on-site contract librarians and established a service agreement with the OARM library in Cincinnati, Ohio, to provide Region 4 EPA staff with some core library services. These core services include interlibrary loans, cataloging, online literature searches, and reference and research requests. There is currently one full-time professional federal librarian located at the Region 4 library. The library plan did not note that Region 4 would change the way that it provides library services to its staff and the public. As part of the library reorganization, each library in the network that was planning to close access to walk-in services independently decided which materials would be retained at their library or be selected for digitization, dispersal to EPA or non-EPA libraries, or disposal. To assist libraries in the regions and headquarters in determining which actions to take, OEI, in the library plan, issued general guidance and criteria as well as digitization and dispersal procedures that outlined the types of materials that could be (1) digitized and included in NEPIS or dispersed to other EPA network libraries, (2) dispersed to non-EPA libraries, and (3) disposed of or recycled. Furthermore, the guidance instructed libraries downsizing or eliminating their collections to, among other things, follow all applicable government property rules and regulations, obtain the advice of the Office of General Counsel or Regional Counsel regarding the materials needed for rulemaking or litigation purposes, consult EPA staff experts in different disciplines for their views on what to retain, review journal titles to determine if they are available online or elsewhere in the library network, and update cataloging records. Furthermore, the guidance discouraged the establishment of minilibraries. Table 2 shows the actions taken by the closed libraries. 
In terms of digitization, the criteria in the August 2006 library plan noted that unique EPA materials—which, according to EPA officials, are materials created by or for EPA—that are not already electronically available in NEPIS would be digitized and made available in NEPIS. The plan indicated that these materials from libraries closing physical access would receive first priority for digitization and, according to EPA officials, set a deadline of January 31, 2007, for digitizing them. With the exception of the OPPTS Chemical Library, all of the libraries that closed digitized unique EPA materials from their collections. At the time of our review, 15,260 titles had been digitized, and EPA anticipates that about 51,000 unique EPA library materials from closed and open libraries will be digitized. OARM, in Cincinnati, was responsible for digitizing materials and dispersing the hard copies of these materials to an EPA repository or, if applicable, to an originating library. Some officials we talked with at libraries that have not yet digitized materials indicated that they would like to do so in the future. In terms of dispersal, EPA’s library plan noted that a library choosing to disperse its materials can do so to one of the EPA-designated repositories and other libraries in the library network, or it can transfer EPA records to EPA regional record management centers. The plan also provided guidance on what types of materials can be dispersed to the repository libraries—EPA materials that EPA staff do not use frequently and that are not available electronically, out-of-print publications, and materials that have historical significance.
In addition, materials that repository libraries do not need or that other network libraries will not accept can be dispersed to, in order of preference, other federal agency libraries, state libraries and state environmental agency libraries, college and university libraries, public libraries, or e-mail networks used specifically to exchange library materials. The plan also noted that some materials can be dispersed to the Library of Congress and program office staff. Materials from the closed libraries were dispersed to other libraries within the network as well as to non-EPA libraries, including other federal agencies, state governments, universities, and private companies. No open libraries dispersed their materials as part of the reorganization effort. Table 3 shows the general locations to which a majority of the dispersed materials from the closed libraries were sent. Finally, in terms of disposal, the OEI headquarters library and the OPPTS Chemical Library disposed of some of their materials as a part of the reorganization. EPA’s library plan noted that materials not claimed during the dispersal process could be destroyed if they were (1) materials that are published commercially and that are outdated; (2) materials in poor physical condition, unless their content is rare or the item is the last copy in the network and is not available elsewhere electronically; and (3) microfilm of journals that are available through online archives. OPPTS officials told us that they had followed OEI’s criteria and related procedures. In total, the OEI headquarters library has disposed of over 800 journals and books, and the Chemical Library has disposed of over 3,000 journals and books.
Recognizing that libraries could function more cohesively as a network, EPA established a new interim library policy in 2007, which superseded Chapter 12 of the Information Resources Management Policy Manual and established uniform governance and management for the network. This interim policy held the Assistant Administrator for Environmental Information responsible for the management of the EPA library network, including setting policy and supporting procedures, standards, and guidance to ensure effective oversight. The policy also (1) made assistant and regional administrators of network libraries responsible for complying with agencywide library policies, procedures, standards, and guidance and (2) reestablished the National Library Program Manager position, which was left vacant from 2005 through 2007, when many changes related to the reorganization occurred. This interim policy resulted in 12 draft agencywide library procedures, including procedures on digitizing and dispersing library materials, developing use statistics, providing public access, providing reference and research assistance, and developing a communication strategy. EPA officials told us that they do not have a time frame for completing these procedures but will complete them before the Chief Information Officer and Assistant Administrator of OEI lifts the moratorium on changes to the network, which was imposed in January 2007 in response to congressional and other concerns, and extended indefinitely in February 2007. The moratorium directed EPA staff to make no changes to library services, including closing libraries; reducing hours of operations, services, or resources; and dispersing and disposing of library materials. The future of the library network, its configuration, and its operations are contingent on the completion of the final policies and procedures, on EPA’s response to directions accompanying its fiscal year 2008 appropriation, and on EPA’s 2008 library plan. 
In an explanatory statement accompanying the fiscal year 2008 Consolidated Appropriations Act, which provides funding for most federal agencies, including EPA, $1 million was allocated to restore the network of EPA libraries that were recently closed or consolidated. In addition, the explanatory statement directed EPA to submit a plan to the Committees on Appropriations within 90 days of enactment regarding actions it will take to restore the network. The act was signed by the President on December 26, 2007; at the time of our review, EPA had not yet submitted a plan. Separately, EPA officials told us that they are working on developing a Library Strategic Plan for 2008 and Beyond, which details EPA’s library services for staff and the public and a vision for the future of the library network. EPA’s primary rationale for reorganizing its library network was to generate cost savings by creating a more coordinated library network and increasing the electronic delivery of library services. However, EPA did not fully complete several analyses, including many that its 2004 study recommended. In addition, EPA’s decision to reorganize its library network was not based on a thorough analysis of the costs and benefits associated with such a reorganization. Therefore, we believe that EPA’s decision to reorganize the network was not fully justified. EPA’s 2004 Business Case report was initiated because of ongoing budget uncertainties, changes in technology, and changes in how users obtain information and how commercial information resources are made available. The report concluded that EPA’s libraries provide “substantial value” to the agency and the public, providing benefits of between $2.00 and $5.70 for every $1.00 spent on the network. These benefits are based on time saved in finding information with the assistance of a librarian. The calculated benefit-cost ratio varied, depending on the dollar value ascribed to time savings and the type of service provided.
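The structure of such a time-savings benefit-cost calculation can be illustrated with a brief sketch. All of the input values below—hours saved per search, annual search volume, the hourly value of staff time, and the network's annual cost—are hypothetical assumptions for illustration only, not figures from the Business Case report.

```python
# Hypothetical illustration of a time-savings benefit-cost ratio of the
# kind described in EPA's 2004 Business Case report. Every input value
# here is an assumption, not data from the report.

def benefit_cost_ratio(hours_saved_per_search, searches_per_year,
                       hourly_value, annual_network_cost):
    """Benefits are staff hours saved, valued at an assumed hourly rate."""
    annual_benefit = hours_saved_per_search * searches_per_year * hourly_value
    return annual_benefit / annual_network_cost

# Varying the dollar value ascribed to time savings changes the ratio,
# which is one reason a study of this kind reports a range rather than
# a single number.
low = benefit_cost_ratio(2.0, 85_000, 30.0, 2_500_000)   # 2.04
high = benefit_cost_ratio(2.0, 85_000, 85.0, 2_500_000)  # 5.78
print(f"benefit-cost ratio: {low:.2f} to {high:.2f}")
```

As the two calls show, the ratio is linear in the hourly value assigned to saved time, so the choice of that parameter largely drives where in the reported range a given service falls.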
The report also noted other unquantifiable benefits, such as the higher quality of information typically found with a librarian’s assistance. Nevertheless, in response to changing conditions, the Business Case raised concerns about the agency’s ability to continue its services in their present form. As such, the report recommended that EPA take the following actions to help facilitate an agencywide dialog regarding the future of the library network: survey EPA staff who use the libraries at each location to characterize user needs; inventory information resources, including books, journal subscriptions and licenses, databases, and other licensed information, as well as library service contracts; characterize and assess organizational, business, and technological factors that either enable or constrain services and resources; develop models of library services that address the individual needs of participating locations, while leveraging available resources; and review the existing policy framework for information resources and develop revisions to address the roles and responsibilities of regional offices, centers, laboratories, and program offices in providing information services to staff. In addition, federal guidance states that a benefit-cost analysis should be conducted to support decisions to initiate, renew, or expand programs or projects, and that in conducting such an analysis, tangible and intangible benefits and costs should be identified, assessed, and reported. One element of a benefit-cost analysis is an evaluation of alternatives that would consider different methods of providing services in achieving program objectives. After issuing the Business Case report, EPA conducted several assessments of its library network. For example, in its Optional Approaches report, EPA provided information to EPA regional management about their options for supporting library services beyond fiscal year 2006.
The information and options provided were based on several assessments of the network, such as consultation visits and staff surveys. In addition, some libraries conducted their own assessment of services. For example, after the fiscal year 2007 budget cut was proposed, Region 1 assessed the core library services it provided, library use, and the possible effects of the fiscal year 2007 budget reduction on providing core services and presented a range of options to regional management for consideration. EPA did not fully complete its assessments, however, before it closed libraries and began to reorganize the network. The assessments were incomplete for the following reasons: EPA did not adequately survey library users to determine their needs. EPA administered a survey to compare and contrast the relative value of library services across program and regional offices and ascertain the willingness of library users to accept electronic resources and services; however, only 14 percent of EPA staff responded to the survey. With such a low response rate, EPA could not adequately determine user needs. The survey also did not ask questions that would allow the agency to adequately characterize the needs of library users in reorganizing the library network. In addition, EPA did not attempt to gather views from, or determine the needs of, the public, which is a significant user of EPA libraries. Furthermore, statistics on library use across the network, which EPA relied on, in part, to decide whether and how to reorganize the network, were incomplete and inconsistent. EPA is now developing procedures for keeping complete and accurate use statistics. 
Such statistics would allow EPA to make more informed decisions regarding the use of its libraries and to determine variation in use on the basis of factors such as where the library is located organizationally, whether it is managed under a separate contract or in combination with related information service functions, or where it is located physically in relation to other publicly accessed areas. EPA did not conduct a complete inventory of libraries’ information resources before beginning to close them. For example, journal subscriptions are a significant cost to the agency, and these subscriptions are duplicated throughout the network. However, EPA did not completely assess duplication and the potential for reducing duplication before beginning to reorganize the network. EPA did not fully characterize and assess organizational, business, and technological factors that would either enable or constrain an optimal level of library services. For example, EPA did not review, in advance of the library closures, leading practices in digitizing library materials to ensure that such materials are digitized and cataloged correctly. EPA’s current digitization standards and procedures are now undergoing a third-party review, which will inform and serve as a benchmark for the development of EPA’s future digitization procedures for library materials. In addition, EPA is relying more on NEPIS to distribute EPA reports electronically, but it only began integrating NEPIS with OLS in late summer 2007 to ensure that hard copy reports digitized in NEPIS are also available through OLS. According to EPA officials, electronic links were established in OLS to all 26,000 reports in NEPIS by the end of December 2007. Many of the electronic reports in NEPIS are born digital and not available in hard copy. EPA did not develop and fully evaluate alternative models of library services that described the benefits, costs, opportunities, and challenges of each approach.
In its Optional Approaches report, EPA describes five different service options: (1) current status—where a library chooses to make no changes to the library operation; (2) network node approach—where a library continues to provide its core services on-site, but purchases or sells some services from or to the library network; (3) liaison approach—where a library greatly reduces or eliminates its physical collection and the labor needed to maintain it, with many services purchased from the network; (4) virtual services approach—where a library maintains no library presence on-site, but has a mechanism through which staff can purchase services and resources directly from the network; and (5) deferral of responsibility—where a library ceases all affiliation with the network, forcing staff to procure information services on their own. The report explored the estimated costs associated with each option and recommended a mix of at least three network nodes, three liaison locations, two virtual services locations, and participation of at least one environmental center. However, the alternatives were based on the report’s assessment of the regional libraries, rather than on all of the libraries in the network, and the report did not explore the benefits, along with the costs, of the various options, including the recommended “mixed” option. Thus, each library had to decide whether it would close without information on what mix of closed and open libraries would present the most beneficial option or on where Centers of Excellence or repository libraries would best be located geographically. EPA did not, in advance of the reorganization, review the existing policy framework for library resources and develop revisions to this framework to address the roles and responsibilities of regional offices, centers, laboratories, and program offices in providing information services to staff.
Until April 2007, EPA relied on a library policy established in July 1987 that, by 2007, was based on an outdated organizational scheme—the library network under the coordination of an office that did not exist. As we have previously discussed, EPA developed an interim library policy in April 2007, after beginning the reorganization, and is currently developing new library procedures stemming from the policy. According to EPA officials, EPA decided to reorganize its libraries without fully completing the recommended analyses because it wanted to reduce its fiscal year 2007 funding for the OEI headquarters and regional office libraries by $2 million. However, these claimed savings were not substantiated by any formal EPA cost assessment. According to EPA officials, the $2 million funding reduction was informally estimated in 2005 with the expectation that EPA would have been further along in its library reorganization effort prior to fiscal year 2007. Furthermore, EPA did not comprehensively assess library network spending before estimating the $2 million budget cut. According to OPPTS officials, in December 2005, they decided to close the Chemical Library to expand accessibility to library materials through digitization and to achieve related cost savings. Although they planned on closing the Chemical Library at a later date, they moved to close it before the start of fiscal year 2007 because the space was to be reconfigured. By not completing a full assessment of its library resources and not conducting a benefit-cost analysis of various approaches to reorganizing the network, EPA did not justify the reorganization actions in a way that fully considered and ensured adequate support for the mission of the library network, the continuity of services provided to EPA staff and the public, the availability of EPA materials to a wider audience, and the potential cost savings.
In effect, EPA attempted to achieve cost savings without (1) first determining whether potential savings were available and (2) performing the steps that its own study specified as necessary to ensure that the reorganization would be cost-effective. Communicating with and soliciting views from staff and other stakeholders are key components of successful mergers and transformations. We have found that an organization’s transformation or merger is strengthened when it makes public implementation goals and a timeline to build momentum and show progress. By demonstrating progress toward these goals, the organization builds staff support for the changes. An organization’s transformation or merger is also strengthened when the organization establishes an agencywide communication strategy and involves staff to obtain their ideas, which, among other things, involves communicating early and often to build trust, ensuring consistency of message, and incorporating staff feedback into new policies and procedures. Generally, such a strategy helps gain staff ownership for the changes and alleviates uncertainties. Finally, transformations and mergers are strengthened when organizations learn from and use leading practices to build a world-class organization, such as those for library services. However, we found that (1) EPA’s August 2006 library plan did not inform stakeholders of the final configuration for the library network or its implementation goals and timeline; (2) EPA lacked an agencywide communication strategy for EPA staff and outside stakeholders, and the extent to which it involved EPA staff and stakeholders to obtain their views was limited; and (3) EPA did not solicit views from industry experts regarding the digitization of library materials and other issues. Nevertheless, EPA is currently reaching out to both EPA staff and external stakeholders.
EPA’s communication procedures were limited or inconsistent because EPA acted quickly to make changes in response to a proposed fiscal year 2007 funding reduction, and because of the decentralized nature of the library network. Through its August 2006 library plan, EPA generally informed internal and external stakeholders of its vision for the reorganized library network, noting that EPA would be moving toward a new model of providing library services to EPA staff and the public, and that this new model would result in a more coordinated library network where more services would be available online. The plan discussed the creation of Library Centers of Excellence and also noted that as a part of the transition to the new library services model, the headquarters, Region 5, Region 6, and Region 7 libraries would close. We found, however, that EPA did not provide sufficient information to stakeholders on how the final library network would be configured or on the implementation goals and timeline needed to achieve this final configuration. More specifically, the plan did not inform readers that OPPTS would close its Chemical Library, and that other libraries would reduce their hours of operation or make other changes to their library services; provide any detail on which additional libraries would, in an effort to align to the new library service model, change their operations or library collections in the future; or inform stakeholders of the intended outcome of the reorganization effort, including what the final configuration of the reorganized library network would look like, and the implementation goals and timeline needed to achieve this final configuration. OEI officials told us that the purpose of the plan was to provide a framework for how new services would be provided and not for the physical configuration of the network.
OEI officials also told us that they were unsure of what the ultimate library model will look like and whether additional libraries would close in the future, since the decision to close is a local decision. Without a clear picture of what EPA intends to achieve with the library network reorganization and the implementation goals and timeline to achieve this intended outcome, EPA staff may not know if progress is being made, which could limit support for the network reorganization. Because EPA’s library structure was decentralized, EPA did not have an agencywide communication strategy to inform EPA staff of, and solicit their views on, the changes occurring in the library network, leaving that responsibility to each EPA library. As a result, EPA libraries varied considerably in the information they provided to staff on library changes. For example, EPA officials from the headquarters and three regional office libraries that closed explicitly informed EPA staff of when the libraries would be closed to physical access. However, EPA officials from the OPPTS Chemical Library did not inform staff and users of the Chemical Library closure. Rather, these officials told them that they would be reducing library services and then closed the library without notice or explanation to EPA staff. These officials acknowledged that they could have made a more thorough effort to inform library users about the timing of the library closure. We also found that some of the closed regional libraries informed their staff of the changes occurring at their libraries earlier than the closed headquarters library or other closed regional libraries, and that some libraries communicated changes to their staff more frequently than others.
Officials from Regions 5 and 6, for example, began to inform their respective staff of their library closures about 5 months before their libraries closed, whereas officials from Region 7 and headquarters informed their staff of the changes occurring at their libraries only a few weeks prior to their closures. However, we also found that Region 7 officials communicated changes occurring at their library to their staff more frequently after it closed as compared with the other closed regional and headquarters libraries. The extent to which EPA libraries solicited views from EPA staff also varied by library. Recognizing the decentralized nature of the library network, EPA’s Optional Approaches report suggested that regional management speak with the unions representing their staff to determine what their staff’s library needs are, assure them that changes in the provision of library services would support their needs, and prepare the staff for potential future changes in accessing information resources. However, management in only a few of the regions solicited views from their regional staff through discussions with their unions. Most of the union representatives we talked with from the libraries that closed, reduced their hours of operation, or changed the way they provided library services told us that management did not ask them to provide their views on the changes occurring at their libraries. At the national level, OEI officials stated that they briefed union representatives on several occasions prior to the reorganization, and that they also provided the union with a draft library plan for review and comment. At the time of our review, EPA had entered into arbitration with the union to resolve the union grievance regarding the reorganization.
Management from only a few of the regional libraries solicited views from their regional science council—an employee group located in each region composed of EPA scientists and technical specialists. For example, officials in Region 1 explained that in an effort to inform management on how best to optimize library services, given the reduction in the budget, management asked its regional science council to poll its scientists, engineers, and technical staff on the library services they most value in the region. In contrast, management in Region 5 did not ask the regional science council to provide input on the Region 5 library closure. However, the regional science council in this region submitted a memorandum to management expressing concerns regarding the library closing and the potential impacts it would have on the duties performed by EPA scientists and engineers. In addition, EPA generally did not communicate with and solicit views from external stakeholders, such as the public, before and during the reorganization because the agency was moving quickly to make changes in response to proposed funding cuts. Of the libraries that closed, only the headquarters library informed the public of the changes occurring at its library by posting a notification in the Federal Register. The notification informed members of the public on how they could access EPA documents held in the headquarters repository library or in electronic format. However, the notification was published in the Federal Register just 10 days before the library was slated to close and become a repository library. Furthermore, the notification did not provide public users of the library with an opportunity to provide comments on the changes.
Rather than publishing a Federal Register notice to inform the public of changes or to obtain public views, some of the closed libraries announced the closures to the public through their individual library Web sites after the closures had already occurred. In early 2007, however, we found that EPA’s Web site did not include links to the closed regional libraries’ Web sites. As a result, members of the public had no way of knowing that the library had closed or of knowing how to access materials that were housed in these libraries. EPA also did not fully communicate with and solicit views from professional library associations while planning and implementing its library reorganization. EPA did meet with the American Library Association, a professional library association, on a few occasions, but did so later in the reorganization planning process. Furthermore, other professional library associations, such as the Association of Research Libraries, were not consulted at all by EPA officials before or during the library reorganization. Without an agencywide communication strategy—which involves communicating early and often, ensuring consistency of message, and obtaining views from both EPA staff and external stakeholders—staff ownership for the changes may be limited, and they may be confused about the changes. Furthermore, EPA cannot be sure that the changes are meeting the needs of EPA staff and external stakeholders. When developing digitization procedures for library materials, which were noted in the library plan, EPA did not obtain the views of federal experts, such as the Government Printing Office and the Library of Congress, as well as industry experts. These experts could have provided leading practice information and guidance on digitization processes and standards for library materials. As such, EPA cannot be sure that it is using leading practices for library services. 
Recognizing the need to communicate with and solicit the views of staff, external stakeholders, and industry experts, EPA has recently increased its outreach efforts. In October 2007, for example, OEI asked local unions throughout the agency to comment on a draft of the 2008 library plan, which includes an overview of EPA’s library services for staff and the public and a vision for the future of the EPA library network. Furthermore, since April 2007, OEI has (1) attended and presented information at a stakeholder forum hosted by the American Library Association at which a number of professional library associations—including the American Association of Law Libraries, Special Libraries Association, and Medical Library Association—were present and (2) attended and presented information at a number of professional library association conferences. OEI has also started working with the Federal Library Information Center Committee, a committee managed by the Library of Congress, to develop a board of advisers. This board of advisers—comprising senior library staff at various agencies across the federal sector—is to respond to EPA administrators’ and librarians’ questions about the future direction of EPA libraries. Furthermore, the board of advisers is to serve as one of several groups of experts that EPA can use as sounding boards and informal advisers to help guide the next stages of the library reorganization. Separately, EPA has begun to solicit advice from library experts on the digitization procedures it is developing. According to OEI officials, they will ask American Library Association officials and other industry experts to review the procedures before they are made final. EPA does not have a strategy to ensure the continuity of library services and does not know the full effect of the reorganization on library services. However, several changes it implemented may have impaired access to library materials and services.
EPA does not have a strategy that ensures the continuation of services to its staff or the public. Based on our review of key practices and implementation steps to assist mergers and organizational transformations, organizations that are undergoing change should seek and monitor staff attitudes and take the appropriate follow-up actions. While EPA’s library plan describes the reorganization effort as a “phased approach,” it does not provide the specific goals, timelines, or feedback mechanisms needed to allow the agency to measure performance and monitor user needs to ensure a successful reorganization while maintaining quality services. The plan recognizes the need to provide training to instruct affected staff on the new services provided, but it does not recognize the need to obtain feedback from library users affected by the changes to identify any concerns they may have in using the new services. EPA has begun to provide training to some staff affected by the reorganization. The agency has also collected staff feedback from some of the libraries; however, such efforts have been ad hoc and have included neither all of the affected library users nor a statistically valid sample of such users. For example, the Research Triangle Park library solicits feedback from EPA staff on the services provided through the service agreements—and according to EPA officials, the responses so far have been mostly positive; however, the Region 3 library, which also provides services through a service agreement, does not collect such feedback. Without a systematic approach for obtaining feedback from those affected by the reorganization, EPA cannot know whether, or to what extent, the library reorganization has impaired the ability of library users to access environmental information, and if it has impaired their ability, what corrective actions it would need to take to improve services.
To balance the continued delivery of services with merger and transformation activities, it is essential to ensure that top leadership drives the transformation. However, during the reorganization, EPA did not have a national program manager for the library network to oversee and guide the reorganization effort. After the position became vacant in late 2005, it was not filled until May 2007. Without a national program manager for the library network, EPA did not have an official providing an essential level of oversight and guidance that could have ensured that libraries dispersed and disposed of materials properly and in a consistent manner. For example, we found that a universal list of materials available for dispersal from the libraries that were closing was not produced; rather, libraries announced available materials on several different occasions, and the Regions 5 and 6 libraries began dispersing materials before the library plan was finalized. In addition, libraries that were closing were not required to develop a list of materials that were to be dispersed or disposed of. Without a program manager in place to consolidate lists of materials to be dispersed and disposed of, some libraries may not have been aware of available materials that could have been used for their collections. Because EPA’s library plan was unclear and lacked specific procedures and because EPA provided very little oversight, guidance, or control over the reorganization process, it cannot ensure that libraries properly and consistently dispersed or disposed of their collections, or that library services will continue to be provided to its staff and the public. Several changes that EPA made to its library network may have impaired the continued delivery of library materials and services to its staff and the public. First, according to EPA’s library plan, the agency is moving to deliver more materials and services online.
According to EPA estimates, the combined EPA collection in 2003 included 504,000 books and reports; 3,500 journals; 25,000 maps; and 3,600,000 information objects on microfilm. Since the reorganization began, the number of documents in NEPIS increased from 10,700 documents to 26,000, after the unique EPA documents from some of the libraries were digitized and entered into the system. EPA expects to have about 51,000 documents in NEPIS after all hard copy reports are digitized. However, according to EPA officials, because of copyright issues, only unique reports produced by or for EPA will be digitized. Therefore, only about 10 percent of EPA’s holdings of books and reports will be available electronically in NEPIS. If the material is not available electronically, EPA staff in locations where libraries have closed will receive the material through an interlibrary loan—delaying access to the materials by anywhere from 1 day to 20 days. According to EPA officials, most interlibrary loan requests are completed in less than 5 days. Second, with more library materials and services becoming available online, EPA will be relying more on its electronic databases, such as NEPIS and OLS, to identify and distribute library materials. However, EPA has only recently begun to integrate these systems to allow for easier identification and retrieval of materials that were digitized or that have always been available electronically, and it has not updated these systems to reflect the current location of materials that have been dispersed or disposed of to ensure that staff and the public can identify and receive library materials through them. Although dispersal procedures in EPA’s library plan state that the libraries that are closing are responsible for updating OLS, we found that they have not done so. According to EPA officials at locations where libraries had closed, the staff in the receiving libraries were responsible for updating OLS.
As a result of such confusion and lack of coordination, for example, all Chemical Library materials still appear in OLS as being physically located at the library, although the library has been closed for over 1 year. Third, EPA cannot ensure that the service agreements between libraries that had closed and other EPA libraries will be effective. Specifically: Only two of the seven service agreements that EPA established were tested in advance to ensure that the services being provided were timely and effective. Even in these cases, EPA did not consider the full range of requests that may be received from the locations planning to close or reduce services. For example, the service agreement between the Cincinnati library and EPA Region 5 was tested for only 4 weeks in 2006, just before the library was to officially close on August 28, 2006. During these weeks, the number of requests made was only 3 percent of the total research and interlibrary loan requests made in Region 5 during fiscal year 2006. This does not provide a realistic assessment of the Cincinnati library’s ability to fulfill research requests and interlibrary loans in a timely and effective fashion. Even for this 3 percent, EPA surveyed only a sample of staff to determine their satisfaction with the library services. Library materials and services provided under the service agreements are based on a fee-for-service arrangement, which could constrain access to information. For example, prior to the reorganization, OPPTS, facing reduced budgets, required management approval of research requests and other service requests. If the agency finds that costs are more than anticipated under the new fee-for-service model, it may require such approvals to try to limit costs. Such actions could limit the research that EPA staff conduct and also delay research efforts.
EPA officials have stated that they believe the service agreements provide adequate services and, thus far, believe that they are cost-effective based on preliminary results. The Centers of Excellence libraries that provide services to the locations that closed their libraries are all based in the Eastern time zone, which may constrain when services can be provided, especially for EPA staff located in the West. Although EPA is attempting to continue to meet the needs of its staff, it does not have a plan in place to ensure the continuation of library services for the public, such as state and local government environmental agencies, environmental groups, and other nongovernmental organizations. EPA’s library plan stated that the locations where libraries have closed would have a plan to manage public inquiries, and that such locations would refer public requests for information to the public affairs office or program staff. However, we found that many of the locations where libraries closed have not developed such a plan. In addition, the service agreements with the locations where libraries closed only refer to how services would be provided to EPA staff and not the public. Finally, EPA may have inadvertently limited access to information because it did not determine whether federal property management regulations applied to the dispersal and disposal of library materials and hence may have disposed of materials that should have been retained. To ensure that federal property is reused to the extent possible, regulations generally require that agencies report surplus property to GSA, which will attempt to find another agency that needs it. If no federal agency needs the property, it may be sold to the public or donated to state or local governments or nonprofit entities. Although agencies may discard property that is subject to the regulations, they must first make a written determination that the property has no value. 
While EPA’s Fiscal Year 2007 Library Plan included dispersal and disposal criteria and procedures for libraries to follow when deciding how to handle their collections, these criteria and procedures were vague and did not incorporate the federal property regulations. According to a Region 3 EPA official who developed the dispersal and disposal criteria, a clear answer from GSA and from EPA property management officials was not obtained regarding the applicability of federal property management regulations to library materials in the time available before the plan was issued. Furthermore, many of the individual libraries that dispersed or disposed of library materials did not contact GSA, EPA property management officials, or EPA legal counsel to determine whether federal property management regulations applied, and did not consider the applicability of these regulations before dispersing or disposing of their library materials. As a result, EPA libraries dispersed and disposed of library materials in a manner inconsistent with federal property management regulations. For example, the Regions 5 and 6 libraries gave materials to private companies, and the OEI headquarters library and the Chemical Library discarded materials without first determining that they had no monetary value. Furthermore, several journal titles from the Chemical Library were disposed of, despite the fact that EPA’s Office of Enforcement and Compliance Assurance offered to take the materials and archive them. EPA officials stated that there was a lack of clarity regarding whether library materials, such as books and journals, were subject to federal property management regulations. EPA officials stated that they will look into this matter further and will engage federal property management officials at GSA regarding what steps should be taken.
Several different program offices are responsible for the EPA libraries in the network, and each generally decides how much of its available funding to allocate to its libraries out of larger accounts that support multiple activities. There is no line item for EPA libraries in either the President’s budget or EPA’s more detailed budget justification to Congress. Until fiscal year 2007, library spending had remained relatively stable, ranging from about $7.14 million to $7.85 million between fiscal years 2002 and 2006. OEI, which is the primary source of funding for the regional libraries, typically provides funding for them through each region’s support budget and generally gives regional management discretion over how to allocate this funding among the library and other support services, such as information technology. The regions also obtain a much smaller portion of their library funding from other program offices, such as Superfund, to store and maintain information on the National Priorities List. The extent to which other program offices provide funding to the regional libraries varies. For the OEI headquarters library and the regional office libraries, however, the approach to library support changed in fiscal year 2007. OEI management decided to reduce library funding by $2 million from the $2.6 million enacted in fiscal year 2006 for the OEI headquarters and regional office libraries—a 77 percent reduction for these libraries and a 28 percent reduction in total library funding. After $500,000 of the $2 million reduction was applied to the headquarters library, the regional administrators together decided that the remaining $1.5 million reduction should be spread equally across all regions, rather than in proportion to each region’s staffing or previous years’ spending. However, because EPA was one of the agencies included in the full-year continuing appropriations resolution for fiscal year 2007, the agency operated near fiscal year 2006 funding levels.
According to EPA, OEI restored $500,000 to the library budget in fiscal year 2007 to support reorganization activities. According to OPPTS officials, while OPPTS did not face a budget cut for fiscal year 2007, it nevertheless decided to close its Chemical Library to improve the library’s online services and achieve cost savings. For EPA staff who had used the libraries that are now closed, EPA has established service agreements with Centers of Excellence. These libraries provide materials and services on a fee-for-service basis charged to the program office whose staff made the request. Funding is provided either as a lump sum to these libraries at the beginning of the fiscal year, which is drawn down as needed, or on a monthly basis. The libraries provide monthly reports to the locations being served and coordinate with a liaison at these locations. EPA estimates that services provided under these agreements will cost approximately $170,000 for fiscal years 2007 and 2008. When planning for the reorganization of the library network, EPA recognized that the responsible dispersal, disposal, and digitization of an EPA library collection is a major project requiring planning, time, and resources. For example, when the relatively small library in Edison, New Jersey, closed in 2004, EPA estimated that it cost $150,000 to disperse 1,000 boxes of materials. Nevertheless, EPA did not allocate funds specifically to help the closing libraries manage their collections. According to EPA, the funding for library closures was taken into account during the budget process. As a result, the program or regional office responsible for each library used its usual library funding to pay for closing costs. The program offices that closed their libraries did not track closing costs, such as boxing and shipping materials. However, EPA estimated that it cost approximately $80,000 to digitize 15,260 titles between December 2006 and January 2007.
This cost was paid for by OARM under an existing contract. EPA recognized that it needed to ensure that, during and following the reorganization, its library network would continue to provide environmental information to EPA staff and external stakeholders. Accordingly, the agency’s reorganization planning identified procedures to follow that would enable the libraries to maintain the availability, quality, and timeliness of library materials and services. However, because of a proposed reduction in funding for the OEI headquarters and regional office libraries in fiscal year 2007, EPA did not fully implement these procedures; instead, it acted quickly to make changes. In addition, EPA did not conduct rigorous outreach efforts with EPA staff, external stakeholders, and outside experts, which we have recognized as steps necessary for a successful merger or transformation. As a result, support for the library reorganization may be limited, and staff may be confused about the changes. Furthermore, EPA cannot be sure that the changes are meeting the needs of EPA staff and external stakeholders or that it is incorporating leading practices for library services and the digitization of materials. Finally, even though EPA made changes to its library network that may have negatively affected how materials and services are provided to its staff and the public, it did not implement best practices that would allow it to measure or monitor the effects of the reorganization or provide oversight of the process. For example, EPA did not disperse and dispose of library materials in accordance with federal property management regulations or its own procedures and, therefore, may have disposed of materials that are of value and needed for use by staff and the public.
Without sufficient monitoring or oversight of the process, EPA cannot be sure of the extent to which the library reorganization has degraded library services, if at all, and therefore cannot take corrective actions if necessary. To ensure that critical library services are provided to EPA staff and other users, we recommend that the Administrator of EPA continue the agency’s moratorium on changes to the library network until the agency incorporates and makes public a plan that includes the following four actions:

- Develop a strategy to justify its reorganization plans by (1) evaluating and determining user needs for library services; (2) taking an inventory of EPA information resources and determining the extent to which these resources are used; (3) evaluating technological factors, such as digitization procedures and integration of online databases, to ensure an optimal level of services; (4) evaluating and conducting a benefit-cost assessment for each alternative approach for the network, including the approach that existed before the reorganization; and (5) reviewing and revising, as appropriate, the existing policy and procedures that guide the library network.

- Improve its outreach efforts by developing a process that (1) informs stakeholders of the final configuration of the library network, and the implementation goals and timeline to achieve this configuration; (2) communicates information to stakeholders early, often, and consistently across all libraries, and solicits the views of EPA staff and external stakeholders; and (3) obtains the views of industry experts to determine leading practices for library services.

- Include a process that (1) ensures sufficient oversight and control over the reorganization process, (2) continuously and consistently monitors the impact of the reorganization on EPA staff and the public, and (3) takes corrective actions as necessary to provide the continued delivery of services.

- Implement procedures that ensure that library materials are dispersed and disposed of consistently and in accordance with federal property management regulations.

We provided EPA with a draft of this report for its review and comment. In its written response, EPA agreed with our recommendations, stating that it will prioritize the recommendations when moving forward on modernizing the library network. EPA also provided comments to improve the draft report’s technical accuracy, which we have incorporated as appropriate. EPA’s letter is reprinted in appendix III. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Administrator of EPA, and other interested parties. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.
To review the Environmental Protection Agency’s (EPA) library network reorganization, we (1) determined the status of, and plans for, the library network reorganization; (2) evaluated EPA’s rationale for its decision to reorganize the library network; (3) assessed the extent to which EPA communicated with and solicited views from EPA staff and external stakeholders in planning and implementing the reorganization; (4) evaluated the steps EPA has taken to maintain the quality of library services following the reorganization, both currently and in the future; and (5) determined how EPA is funding the library network and its reorganization. We limited our review to the 26 libraries that were part of the EPA library network. According to EPA officials, a library is considered part of the network if its collections are listed in the agency’s Online Library System (OLS). Generally, we also conducted the following activities:

- Reviewed relevant EPA documents, plans, policies, guidance, and procedures as well as related laws and requirements pertinent to the library network and the reorganization effort.

- Visited the Office of Environmental Information’s (OEI) headquarters library and the Office of Prevention, Pesticides, and Toxic Substances’ Chemical Library—both located in Washington, D.C.; the Region 10 library in Seattle, Washington; and the Office of Administration and Resources Management (OARM) library in Cincinnati, Ohio. We visited these libraries because the headquarters library closed physical access to its library space and transitioned into a repository library; the Chemical Library closed physical access to its library space and dispersed and disposed of its library materials; the Region 10 library reduced its hours of operation; and the OARM library in Cincinnati was identified by EPA as a repository library and a Center of Excellence and was responsible for digitizing library materials from the closed EPA libraries.

- Interviewed representatives from Lockheed Martin and Integrated Solutions and Services—because these two companies digitized and electronically indexed library materials through already existing contracts with OARM in Cincinnati—and visited a Lockheed Martin facility to observe the digitization process.

- Interviewed EPA librarians, library managers, and program office and regional office managers for the 26 libraries in EPA’s library network. When possible, we corroborated information provided to us by EPA officials during the interviews with relevant documentation.

For each of our objectives, some analysis was based on documentation and information provided to us by EPA officials. To the extent possible, we tried to corroborate this information. However, we did not independently verify this information or assess whether it was complete or accurate. In addition, we conducted work that was specific to each of the report’s objectives. To determine the status of and plans for the library network reorganization, we analyzed information that EPA libraries provided to us on the operating status of the libraries as well as materials that have been digitized, dispersed to other EPA and non-EPA libraries, or disposed of as a part of the reorganization effort. We also reviewed drafts and final versions of EPA procedures and criteria for digitizing, dispersing, and disposing of EPA library materials. To evaluate EPA’s rationale for reorganizing the library network, we conducted the following activities:

- Reviewed documents that EPA developed before the reorganization in fiscal year 2007. One of these documents was EPA’s 2004 study on the costs and value of EPA’s libraries. We did not assess the robustness and adequacy of the methodology and data that EPA used for this study. However, we used this study’s recommendations for information on how to further assess and determine the future of the library network to guide our assessment of EPA’s subsequent evaluation efforts of the library network. We spoke with a contract official from Stratus Consulting, which helped develop the 2004 study on the costs and value of EPA’s libraries, as well as with a researcher from Simmons College who helped conduct an independent review of the study. In addition, we reviewed federal guidelines from the Office of Management and Budget on benefit-cost analyses.

- Assessed EPA’s survey of library users, examining the adequacy of the survey’s response rate and questions. We found the 14 percent response rate inadequate for EPA’s purpose because the rate was low and because EPA did not conduct any nonresponse analyses to show that the 14 percent who responded were representative of the target population. To determine whether EPA’s survey contained questions to adequately characterize the needs of library users in reorganizing the library network, we looked for survey questions that assessed how and how often users used the library space, the library holdings, and the librarian in performing their jobs; the utility of, and satisfaction with, each resource; and to what extent the library materials were available electronically versus in hard copy.

- Asked each of the 26 libraries to provide us with data on the number of walk-ins to the library and other use data between fiscal years 2000 and 2006. We reviewed these data to determine their reliability and sufficiency for EPA to use as a basis for deciding to reorganize the library network. To determine the reliability and sufficiency of EPA’s library use data, we checked whether all libraries kept such statistics and whether enough years of data were available to detect a trend in the level of use. We found that not all libraries tracked such library use data and that some libraries kept data for only a limited number of years.

- Assessed the National Environmental Publications Internet Site (NEPIS) and OLS to determine the extent of integration between the two systems and to determine how the locations of library materials that have been dispersed or disposed of are being updated in OLS.

- Assessed the comprehensiveness of EPA’s efforts to evaluate alternative models of library services.

To assess EPA’s efforts to communicate with and solicit the views of EPA staff and external stakeholders in planning and implementing the reorganization, we reviewed our past work on key practices and implementation steps to assist mergers and organizational transformations and compared these key practices and implementation steps with EPA’s reorganization effort (app. II provides more details on these key practices and implementation steps). More specifically, to determine EPA’s efforts to communicate with and solicit input from stakeholders, we reviewed e-mails, notices, and memorandums from EPA library management and program office and regional office management to EPA staff. We also interviewed local union representatives from headquarters and all of EPA’s regional offices. Furthermore, we interviewed regional science council representatives from most of the regional offices. The science councils are located in each regional office and consist of EPA scientists and technical specialists. To determine the extent to which EPA communicated with and solicited views from outside stakeholders, we interviewed representatives from several professional library associations and other external stakeholder groups, such as the American Library Association, the Association of Research Libraries, the American Association of Law Libraries, the Special Libraries Association, the Library of Congress’ Federal Library and Information Center Committee, and the Union of Concerned Scientists.
We also reviewed information that EPA provided to the public via the EPA Web site or, when applicable, Federal Register notices. In evaluating the steps that EPA has taken to maintain the quality of library services following the reorganization both currently and in the future, we reviewed our past work on key practices and implementation steps to assist mergers and organizational transformations and compared these key practices and implementation steps with EPA’s reorganization effort (app. II provides more details on these key practices and implementation steps). Furthermore, we reviewed federal property management regulations regarding the dispersal and disposal of federal property, and assessed whether EPA followed these regulations. We also reviewed drafts and final versions of EPA procedures and criteria for dispersing and disposing of EPA library materials. Separately, we determined the possible effects of changes to the library network by (1) determining and evaluating the total number of library materials that would be digitized and made available in NEPIS, and the length of time it would take a user to receive materials via interlibrary loan; (2) evaluating the accuracy of information in NEPIS and OLS; and (3) reviewing and evaluating service agreements between libraries. Finally, we reviewed the roles and responsibilities of the EPA library network management. To determine funding for the library network and its reorganization, we obtained information on library funding from each of the 26 libraries in the network between fiscal years 2002 and 2007. Because EPA does not specifically track funding for the libraries, the information provided contained a mix of outlays for some fiscal years and budget authority for other fiscal years. In addition, the information provided from each of the libraries only reflected spending by the library and not the source of the funds. 
For example, a large portion of the funding for the regional office libraries comes from OEI, but funding is also received from other EPA program offices, such as Superfund. Furthermore, the funding data received from the libraries contained a mix of funding for contract support; library staff salaries; and acquisition costs for books, journals, and other materials. We interviewed EPA officials to assess data reliability, but we did not independently verify the accuracy and completeness of the data that they provided. After discussions with EPA officials, we decided not to include a table showing funding for each library for fiscal years 2002 through 2007 because of concerns with its reliability. Separately, we interviewed library management from each of the 26 libraries to obtain information on the costs of the reorganization. We conducted this performance audit from December 2006 through February 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The key practices and implementation steps for mergers and organizational transformations discussed in appendix II include the following:

- Define and articulate a succinct and compelling reason for change.
- Balance continued delivery of services with merger and transformation activities.
- Establish a coherent mission and integrated strategic goals to guide the transformation.
- Adopt leading practices for results-oriented strategic planning and reporting.
- Focus on a key set of principles and priorities at the outset of the transformation.
- Embed core values in every aspect of the organization to reinforce the new culture.
- Set implementation goals and a timeline to build momentum and show progress from day one.
- Make public implementation goals and timeline.
- Seek and monitor employee attitudes and take the appropriate follow-up actions.
- Identify cultural features of merging organizations to increase understanding of former work environments.
- Attract and retain key talent.
- Establish an organizationwide knowledge and skills inventory to exchange knowledge among merging organizations.
- Dedicate an implementation team to manage the transformation process.
- Establish networks to support the implementation team.
- Select high-performing team members.
- Use the performance management system to define responsibility and ensure accountability for change.
- Adopt leading practices to implement effective performance management systems with adequate safeguards.
- Establish a communication strategy to create shared expectations and report related progress.
- Communicate early and often to build trust.
- Ensure consistency of message.
- Encourage two-way communication.
- Provide information to meet specific needs of employees.
- Involve employees to obtain their ideas and gain their ownership for the transformation.
- Use employee teams.
- Involve employees in the planning and sharing of performance information.
- Incorporate employee feedback into new policies and procedures.
- Delegate authority to the appropriate organizational levels.
- Adopt leading practices to build a world-class organization.

In addition to the contact named above, Ed Kratzer, Assistant Director; Nathan A. Morris; Roshni Davé; and Carol Herrnstadt Shulman made key contributions to this report. Also contributing to this report were Mike Dolak, Carol Henn, Kunal Malhotra, Bonnie Mueller, Lynn Musser, Omari Norman, Kim Raheb, Sarah Veale, and Greg Wilmoth.

Established in 1971, the Environmental Protection Agency's (EPA) library network provides staff and the public with access to environmental information. Its 26 libraries contain a wide range of information and resources and are located at headquarters, regional offices, research centers, and laboratories nationwide.
In 2006, EPA issued a plan to reorganize the network beginning in fiscal year 2007. The plan proposed closing libraries and dispersing, disposing of, and digitizing library materials. GAO was asked to assess (1) the status of, and plans for, the network reorganization; (2) EPA's rationale for reorganizing the network; (3) the extent to which EPA has communicated with and solicited the views of EPA staff and external stakeholders in conducting the reorganization; (4) EPA's steps to maintain the quality of library services after the reorganization; and (5) how EPA is funding the network and its reorganization. For this study, GAO reviewed pertinent EPA documents and interviewed EPA officials and staff from each of the libraries. Since 2006, EPA has implemented its reorganization plan to close physical access to 4 libraries. In the same period, 6 other libraries in the network decided to change their operations, while 16 have not changed. Some of these libraries have also digitized, dispersed, or disposed of their materials. Since the reorganization, EPA has begun drafting a common set of agencywide library procedures and has hired a program manager for the network. While these procedures are under development, however, EPA has imposed a moratorium on further changes to the network in response to congressional and other expressions of concern. EPA's primary rationale for the library network reorganization was to generate cost savings by creating a more coordinated library network and increasing the electronic delivery of services. However, EPA did not fully follow procedures recommended in a 2004 EPA study of steps that should be taken to prepare for a reorganization. In particular, EPA did not fully evaluate alternative models, and associated costs and benefits, of library services. EPA officials stated that they needed to act quickly to reorganize the library network in response to a proposed fiscal year 2007 funding reduction. 
EPA did not develop procedures to inform staff and the public on the final configuration of the library network, and EPA libraries varied considerably and were limited in the extent to which they communicated with and solicited views from stakeholders before and during the reorganization effort. In particular, EPA's plan did not include information that the Chemical Library was to close, and EPA did not inform staff or the public until after the fact. EPA's communication procedures were limited or inconsistent because EPA acted quickly to make changes in response to a proposed fiscal year 2007 funding reduction and because of the decentralized nature of the library network. EPA is currently increasing its communication efforts. EPA does not have a post-reorganization strategy to ensure the continuity of library services and has not yet determined the full effect of the reorganization on library services. Moreover, EPA has recently made several changes that could have impaired user access to library materials and services. For example, EPA did not determine whether federal property management regulations applied to the dispersal and disposal of library materials before it closed the libraries. Furthermore, EPA lacked oversight of the reorganization process and does not have procedures that would allow the agency to measure performance and monitor user needs. Several different EPA offices are responsible for the libraries in the network. Each office generally decides how much funding to allocate to the libraries for which it is responsible and how to fund their reorganization. However, when faced with a proposed budget reduction of $2 million in fiscal year 2007, EPA specifically directed that these offices reduce funding for their libraries but did not specify how to achieve the reduction. Additional funds were not allocated to assist offices in closing their libraries.
In 1936, following the enactment of the Social Security Act of 1935, the newly formed Social Security Board (which later became SSA) created the 9-digit SSN to uniquely identify U.S. workers and determine their Social Security benefit entitlement levels. Originally, the SSN was not intended to serve as a personal identifier, but, due to its universality and uniqueness, government agencies and private sector entities now use it as a convenient means of identifying people. The number uniquely links identities across a very broad array of public and private sector information systems. As of September 2016, SSA had issued approximately 496 million unique SSNs to eligible individuals. In 2006, the President issued an Executive Order establishing the Identity Theft Task Force to strengthen efforts to protect against identity theft. Because the unauthorized use of SSNs was recognized as a key element of identity theft, the task force assessed the actions the government could take to reduce the exposure of SSNs to potential compromise. In April 2007, the task force issued a strategic plan, which advocated a unified federal approach, or standard, for using and displaying SSNs. The plan proposed that OPM, OMB, and SSA play key roles in restricting the unnecessary use of the numbers, offering guidance on substitutes that are less valuable to identity thieves, and promoting consistency when the use of SSNs was found to be necessary or unavoidable. In response to the recommendations of the Identity Theft Task Force, OPM, OMB, and SSA undertook several actions aimed at reducing or eliminating the unnecessary collection, use, and display of SSNs. However, in our draft report, we determined that these actions have had limited success. OPM took several actions in response to the task force recommendations.
Using an inventory of its forms, procedures, and systems displaying SSNs that it had developed in 2006, the agency took action to change, eliminate, or mask the use of SSNs on OPM-approved/authorized forms, which are used by agencies across the government for personnel records. In addition, in 2007, OPM issued guidance to other federal agencies on actions they should take to protect federal employee SSNs and combat identity theft. The guidance reminded agencies of existing federal regulations that restricted the collection and use of SSNs and also specified additional measures. In addition to issuing this guidance, in January 2008, OPM proposed a new regulation regarding the collection, use, and display of SSNs that would have codified the practices outlined in its 2007 guidance and that would also have required the use of an alternate identifier. However, in January 2010, after reviewing comments it had received, OPM withdrew the notice of proposed rulemaking because the agency determined that it would be impractical to issue the rule without an alternate governmentwide employee identifier in place. In 2015, OPM briefly began exploring the concept of developing and using multiple alternate identifiers for different programs and agencies. As envisioned, an SSN would be collected only once, at the start of an employee’s service, after which unique identifiers specific to relevant programs, such as healthcare benefits or training, would be assigned as needed. However, officials from OPM’s Office of the Chief Information Officer stated that work on the initiative was suspended in 2016 due to a lack of funding. In May 2007, OMB issued a memorandum requiring agencies to review their use of SSNs in agency systems and programs to identify instances in which the collection or use of the number was superfluous.
Agencies were also required to establish a plan, within 120 days from the date of the memorandum, to eliminate the unnecessary collection and use of SSNs within 18 months. Lastly, the memorandum required agencies to participate in governmentwide efforts, such as surveys and data calls, to explore alternatives to SSN use as a personal identifier both for federal employees and in federal programs. Since issuing its May 2007 memorandum requiring the development of SSN reduction plans, OMB has instructed agencies to submit updates to their plans and documentation of their progress in eliminating unnecessary uses of SSNs as part of their annual reports, originally required by the Federal Information Security Management Act of 2002 and now required by the Federal Information Security Modernization Act of 2014 (FISMA). The Identity Theft Task Force recommended that, based on the results of OMB’s review of agency practices on the use of SSNs, SSA should establish a clearinghouse of agency practices and initiatives that had minimized the use and display of SSNs. The purpose of the clearinghouse was to facilitate the sharing of “best” practices—including the development of any alternative strategies for identity management—to avoid duplication of effort and to promote interagency collaboration in the development of more effective measures for minimizing the use and display of SSNs. In 2007, SSA established a clearinghouse on an electronic bulletin board website to showcase best practices and provided agency contacts for specific programs and initiatives. However, according to officials in SSA’s Office of the Deputy Commissioner, the clearinghouse is no longer active. The officials added that SSA did not maintain any record of the extent to which the clearinghouse was accessed or used by other agencies when it was available online. Further, the officials said SSA has no records of when or why the site was discontinued.
In their responses to our questionnaire on SSN reduction efforts, all of the 24 CFO Act agencies reported taking a variety of steps to reduce the collection, display, and use of SSNs. However, officials involved in the reduction efforts at these agencies stated that SSNs cannot be completely eliminated from federal IT systems and records. In some cases, no other identifier offers the same degree of universal awareness or applicability. Even when reductions are possible, challenges in implementing them can be significant. In our draft report, these officials frequently cited three key challenges:

Statutes and regulations require collection and use of SSNs. In their questionnaire responses and follow-up correspondence with us, officials from 15 agencies who were involved in their agencies’ SSN reduction efforts noted that they are limited in their ability to reduce the collection of SSNs because many laws authorize or require such collection. These laws often explicitly require agencies to use SSNs to identify individuals who are engaged in transactions with the government or who are receiving benefits disbursed by federal agencies.

Interactions with other federal and external entities require use of the SSN. In their questionnaire responses and follow-up correspondence with us, officials from 16 agencies noted that the necessity to communicate with other agencies and external entities limited their reduction efforts. To exchange information about individuals with other entities, both within and outside the federal government, federal agencies must be able to cite a unique, common identifier to ensure that they are matching their information to the correct records in the other entities’ systems. The SSN is typically the only identifier that government agencies and external partners have in common that they can use to match their records.

Technological hurdles can slow replacement of SSNs in information systems.
In their questionnaire responses and follow-up correspondence with us, officials from 14 agencies who were involved in their agencies’ SSN reduction efforts cited the complexity of making required technological changes to their information systems as a challenge to reducing the use, collection, and display of SSNs.

Our preliminary results indicate that SSN reduction efforts in the federal government also have been limited by more readily addressable shortcomings. In the absence of direction from OMB, many agencies’ reduction plans did not include key elements, such as time frames and performance indicators, calling into question the plans’ utility. In addition, OMB has not required agencies to maintain up-to-date inventories of SSN collections and has not established criteria for determining when SSN use or display is “unnecessary,” leading to inconsistent definitions across the agencies. Finally, OMB has not ensured that all agencies have submitted up-to-date status reports on their SSN reduction efforts and has not established performance measures to monitor progress on those efforts.

Agency SSN Reduction Plans Lacked Key Elements, Limiting Their Usefulness

As previously mentioned, in May 2007, OMB issued a memorandum requiring agencies to develop plans to eliminate the unnecessary collection and use of SSNs, an objective that was to be accomplished within 18 months. OMB did not set requirements for agencies on creating effective plans to achieve this objective. However, other federal laws and guidance have established key elements that performance plans generally should contain, including the following:

Performance goals and indicators: Plans should include tangible and measurable goals against which actual achievement can be compared. Performance indicators should be defined to measure outcomes achieved versus goals.
Measurable activities: Plans should define discrete events, major deliverables, or phases of work that are to be completed toward the plan’s goals.

Timelines for completion: Plans should include a timeline for each goal to be completed that can be used to gauge program performance.

Roles and responsibilities: Plans should include a description of the roles and responsibilities of agency officials responsible for the achievement of each performance goal.

Our preliminary results show that the majority of plans that the 24 CFO Act agencies originally submitted to OMB in response to its guidance lacked key elements of effective performance plans. For example, only two agencies (the Departments of Commerce and Education) developed plans that addressed all four key elements. Four agencies’ plans did not fully address any of the key elements, 9 plans addressed one or two of the elements, and the remaining 9 plans addressed three of the elements. Agency officials stated that, because OMB did not set a specific requirement that SSN reduction plans contain clearly defined performance goals and indicators, measurable activities, timelines for completion, or roles and responsibilities, they were not aware that they should address these elements. Yet, without complete performance plans containing these elements, it is difficult to determine what overall progress agencies have achieved in reducing the unnecessary collection and use of SSNs and the concomitant risk of exposure to identity theft. Continued progress toward reducing that risk is likely to remain difficult to measure until agencies develop and implement effective plans.
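The element counts above can be thought of as a simple tally of how many of the four key elements each plan addresses. The sketch below is illustrative only; the element names and the sample plan are hypothetical and do not represent GAO's actual evaluation instrument.

```python
# Hypothetical element names for the four key plan elements described above.
ELEMENTS = (
    "performance_goals_and_indicators",
    "measurable_activities",
    "timelines_for_completion",
    "roles_and_responsibilities",
)

def elements_addressed(plan: dict) -> int:
    """Count how many of the four key elements a plan addresses."""
    return sum(1 for e in ELEMENTS if plan.get(e))

# A hypothetical plan that defines timelines and roles but lacks
# goals/indicators and measurable activities:
sample_plan = {
    "timelines_for_completion": True,
    "roles_and_responsibilities": True,
}
print(elements_addressed(sample_plan))  # 2
```

Scored this way, a complete plan would address all four elements, as only the Departments of Commerce and Education did.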
Not all agencies maintain an up-to-date inventory of their SSN collections

Developing a baseline inventory of systems that collect, use, and display SSNs and ensuring that the inventory is periodically updated can assist managers in maintaining an awareness of the extent to which they collect and use SSNs and their progress in eliminating unnecessary collection and use. Standards for Internal Control in the Federal Government state that an accurate inventory provides a detailed description of an agency’s current state and helps to clarify what additional work remains to be done to reach the agency’s goal. Of the 24 CFO Act agencies we reviewed, 22 reported that, at the time that they developed their original SSN reduction plans in fiscal years 2007 and 2008, they compiled an inventory of systems and programs that collected SSNs. However, as of August 2016, 6 of the 24 agencies did not have up-to-date inventories: 2 agencies that had no inventories initially and 4 agencies that originally developed inventories but subsequently reported that those inventories were no longer up-to-date. These agencies did not have up-to-date inventories, in part, because OMB M-07-16 did not require agencies to develop an inventory or to update the inventory periodically to measure the reduction of SSN collection and use. However, OMB has issued separate guidance that requires agencies to maintain an inventory of systems that “create, collect, use, process, store, maintain, disseminate, disclose, or dispose of PII.” This guidance states that agencies are to maintain such an inventory, in part, to allow them to reduce PII to the minimum necessary. Without enhancing these inventories to indicate which systems contain SSNs and using them to monitor their SSN reduction efforts, agencies will likely find it difficult to measure their progress in eliminating the unnecessary collection and use of SSNs.
Agency definitions of “unnecessary” collection and use have been inconsistent

Achieving consistent results from any management initiative can be difficult when the objectives are not clearly defined. Standards for Internal Control in the Federal Government state that management should define objectives in measurable terms so that performance toward achieving those objectives can be assessed. Further, measurable objectives should generally be free of bias and not require subjective judgments to dominate their measurement. In our draft report, we noted that of the 24 CFO Act agencies, 4 reported that they had no definition of “unnecessary collection and use” of SSNs. Of the other 20 agencies, 8 reported that their definitions were not documented. Officials from many agencies stated that the process of reviewing and identifying unnecessary uses of SSNs was an informal process that relied on subjective judgments. These agencies did not have consistent definitions of the “unnecessary collection and use” of SSNs, in part, because OMB M-07-16 did not provide clear criteria for determining what would be an unnecessary collection or use of SSNs, leaving agencies to develop their own interpretations. Given the varying approaches that agencies have taken to determine whether proposed or actual collections and uses of SSNs are necessary, it is doubtful that the goal of eliminating unnecessary collection and use of SSNs is being implemented consistently across the federal government. Until guidance for agencies is developed in the form of criteria for making decisions about what types of collections and uses of SSNs are unnecessary, agency efforts to reduce the unnecessary use of SSNs likely will continue to vary, and, as a result, the risk of unnecessarily exposing SSNs to identity theft may not be thoroughly mitigated.
Agencies have not always submitted up-to-date status reports, and OMB has not set performance measures to monitor agency efforts

In its Fiscal Year 2008 Report to Congress on Implementation of the Federal Information Security Management Act of 2002, OMB recognized that agencies’ SSN reduction plans needed to be monitored. OMB reported that the reduction plans that agencies submitted for fiscal year 2008 displayed varying levels of detail and comprehensiveness and stated that agency reduction efforts would require ongoing oversight. Subsequently, OMB required agencies to report on the progress of their SSN reduction efforts through their annual FISMA reports. However, preliminary findings in our draft report show that annual updates submitted by the 24 CFO Act agencies as part of their FISMA reports from fiscal year 2013 through fiscal year 2015 did not always include updated information about specific agency efforts and results achieved, making it difficult to determine the status of activities that had been undertaken. Further, the annual updates did not include performance metrics. OMB did not establish specific performance metrics to monitor implementation of planned reduction efforts. Its guidance asked agencies to submit their most current documentation on their plans and progress, but it did not establish performance metrics or ask for updates on achieving the performance metrics or targets that agencies had defined in their plans. Although in 2016, OMB began requesting additional status information related to agency SSN reduction programs, it did not establish metrics for measuring agency progress in reducing the unnecessary collection and use of SSNs. Without performance metrics, it will remain difficult for OMB to determine whether agencies have achieved their goals in eliminating the unnecessary collection and use of SSNs or whether corrective actions are needed.
In conclusion, based on preliminary information from our study of federal SSN reduction efforts, the initiatives that the 24 CFO Act agencies have undertaken show that it is possible to identify and eliminate the unnecessary use and display of SSNs. However, it is difficult to determine what overall progress has been made in achieving this goal across the government. Not all agencies developed effective SSN reduction plans, maintained up-to-date inventories of their SSN collection and use, or applied consistent definitions of “unnecessary” collection, use, and display of SSNs. Further, agencies have not always submitted up-to-date status reports to OMB, and OMB has not established performance measures to monitor agency efforts. Until OMB and agencies adopt better and more consistent practices for managing their SSN reduction processes, overall governmentwide reduction efforts will likely remain limited and difficult to measure; moreover, the risk of SSNs being exposed and used to commit identity theft will remain greater than it need be. Accordingly, our draft report contains five recommendations to OMB to improve the consistency and effectiveness of governmentwide efforts to reduce the unnecessary use of SSNs and thereby mitigate the risk of identity theft. 
Specifically, the report recommends that OMB:

specify elements that agency plans for reducing the unnecessary collection, use, and display of SSNs should contain and require all agencies to develop and maintain complete plans;

require agencies to modify their inventories of systems containing PII to indicate which systems contain SSNs and use the inventories to monitor their reduction of unnecessary collection and use of SSNs;

provide criteria to agencies on how to determine unnecessary use of SSNs to facilitate consistent application across the federal government;

take steps to ensure that agencies provide up-to-date status reports on their progress in eliminating unnecessary SSN collection, use, and display in their annual FISMA reports; and

establish performance measures to monitor agency progress in consistently and effectively implementing planned reduction efforts.

Chairmen Johnson and Hurd, Ranking Members Larson and Kelly, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

If you have any questions regarding this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are John A. de Ferrari (assistant director), Marisol Cruz, Quintin Dorsey, David Plocher, Priscilla Smith, and Shaunyce Wallace. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
SSNs are key pieces of identifying information that potentially may be used to perpetrate identity theft. Thieves find SSNs valuable because they are the identifying link that can connect an individual's information across many agencies, systems, and databases. This statement summarizes GAO's draft report that (1) describes what governmentwide initiatives have been undertaken to assist agencies in eliminating their unnecessary use of SSNs and (2) assesses the extent to which agencies have developed and executed plans to eliminate the unnecessary use and display of SSNs and have identified challenges associated with those efforts. For the draft report on which this testimony is based, GAO analyzed documentation, administered a questionnaire, and interviewed officials from the 24 CFO Act agencies that led or participated in SSN elimination efforts. In its draft report, GAO noted that several governmentwide initiatives aimed at eliminating the unnecessary collection, use, and display of Social Security numbers (SSN) have been underway in response to recommendations that the presidentially appointed Identity Theft Task Force made in 2007 to the Office of Personnel Management (OPM), the Office of Management and Budget (OMB), and the Social Security Administration (SSA). However, these initiatives have had limited success. In 2008, OPM proposed a new regulation requiring the use of an alternate federal employee identifier but withdrew its proposed regulation because no such identifier was available. OMB required agencies to develop SSN reduction plans and continues to require annual reporting on SSN reduction efforts. SSA developed an online clearinghouse of best practices associated with the reduction of SSN use; however, the clearinghouse is no longer available online. All 24 agencies covered by the Chief Financial Officers (CFO) Act developed SSN reduction plans and reported taking actions to curtail the use and display of the numbers.
Nevertheless, in their responses to GAO's questionnaire and follow-up discussions, the agencies cited impediments to further reductions, including (1) statutes and regulations mandating the collection of SSNs, (2) the use of SSNs in necessary interactions with other federal entities, and (3) technological constraints of agency systems and processes. Further, poor planning by agencies and ineffective monitoring by OMB have limited efforts to reduce SSN use. Lacking direction from OMB, many agencies' reduction plans did not include key elements, such as time frames and performance indicators, calling into question their utility. In addition, OMB has not required agencies to maintain up-to-date inventories of their SSN holdings or provided criteria for determining “unnecessary use and display,” limiting agencies' ability to gauge progress. Further, OMB has not ensured that agencies report up-to-date progress annually, nor has it established performance metrics to monitor agency efforts to reduce SSN use. Until OMB adopts more effective practices for guiding agency SSN reduction efforts, overall governmentwide reduction will likely remain limited and difficult to measure, and the risk of SSNs being exposed and used to commit identity theft will remain greater than it need be. GAO's draft report contains five recommendations to OMB to require agencies to submit complete plans for ongoing reductions in the collection, use, and display of SSNs; require inventories of systems containing SSNs; provide criteria for determining “unnecessary” use and display of SSNs; ensure agencies update their progress in reducing the collection, use, and display of the numbers in annual reports; and monitor agency progress based on clearly defined performance measures.
The Early Detection Program is implemented through cooperative agreements between the CDC and 68 grantees—health departments in the 50 states, the District of Columbia, and the 5 U.S. territories, as well as 12 American Indian/Alaska Native tribal organizations. The program funds breast and cervical cancer screening services for women who are uninsured or underinsured, have an income equal to or less than 250 percent of the federal poverty level (FPL), and are aged 40 through 64 for breast cancer screenings or aged 18 through 64 for cervical cancer screenings. Within these eligibility criteria, CDC prioritizes certain groups for screening and individual program grantees may target certain groups or broaden eligibility. Breast cancer screening consists of clinical breast exams and mammograms. Cervical cancer screening consists of pelvic exams and the Pap test. While screening services represent the core of the Early Detection Program, program providers must also provide diagnostic testing and follow-up services for women whose screening tests are abnormal. The CDC funds cannot be used to pay for treatment; however, for women diagnosed with breast or cervical cancer, program providers must provide referrals for appropriate treatment services and case management services, if determined necessary. The Early Detection Program, which was reauthorized by Congress in 2007, is funded through annual appropriations to the CDC. According to CDC officials, in fiscal year 2008, total funding for the program was approximately $182 million. To implement the program, the CDC solicits applications to select Early Detection Program grantees every 5 years. All grantees must submit an annual request for funding to CDC. According to CDC officials, annual budgets are awarded based on performance and other factors. By law, grantees must match every $3 in federal contribution with at least $1 in non-federal contribution. 
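The statutory 3-to-1 match requirement is simple arithmetic; as an illustration (the award figure below is hypothetical, not an actual grantee budget):

```python
def minimum_nonfederal_match(federal_award: float) -> float:
    """By law, every $3 in federal contribution must be matched by
    at least $1 in non-federal contribution (a 3:1 ratio)."""
    return federal_award / 3.0

# Hypothetical grantee receiving a $3 million federal award:
print(minimum_nonfederal_match(3_000_000))  # 1000000.0
```

A grantee may, of course, contribute more than this minimum; as noted below, some grantees fund their programs above the required contribution.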
Grantee matching funds may support the screening or non-screening components of the program. At least 60 percent of the awarded funds must be used for direct clinical services; the remainder may be used for other program functions including program management, education, outreach, quality assurance, surveillance, data management, and evaluation. Some grantees have also acquired additional state or local resources for their programs. Early Detection Program grantees typically have a network of local providers such as community health centers and private providers that deliver the screening and diagnostic services to women. Under the Treatment Act states may extend Medicaid eligibility to women who are under age 65, uninsured, otherwise not eligible for Medicaid, and who have been (1) screened under the CDC-funded Early Detection Program and (2) found to be in need of treatment for breast or cervical cancer including precancerous conditions. All 51 states chose to implement this optional Medicaid eligibility category. In doing so they were required to provide full Medicaid coverage to eligible women screened under the Early Detection Program and found in need of treatment for breast or cervical cancer. States must provide Medicaid coverage for the period when the woman needs treatment for breast or cervical cancer. In guidance provided to states, CMS and CDC define “screened under the program” as, at a minimum, offering Medicaid eligibility to women whose clinical services under the Early Detection Program were provided all or in part with CDC funds. Accordingly, CDC officials stated that any state offering Medicaid coverage under the Treatment Act would be required, at a minimum, to offer coverage to women screened with CDC funds, provided the women met all other eligibility requirements. 
The guidance also allows states to use a broader definition of “screened under the program,” which includes extending Medicaid eligibility to (1) women screened by a CDC-funded provider within the scope of the state’s Early Detection Program, even if CDC funds did not pay for the particular service, or (2) women screened by a non-CDC-funded provider whom the state has elected to include as part of its Early Detection Program.

The CDC’s Early Detection Program screened about half a million or more women for breast and cervical cancer annually from 2002 through 2006. In 2006, the program screened 579,665 women. There were 331,672 women screened with mammography and 4,026 breast cancers detected. There were 350,202 women screened with a Pap test and 5,110 cervical cancers and precursor lesions detected. Almost half of all women screened by the Early Detection Program in 2006 were screened by grantees in 10 states. (See app. II for information by grantee.) A number of factors determined how many women were screened by a grantee, including the CDC funding awarded, the availability of other resources, and clinical costs (for example, the use of more costly screening technologies such as digital mammography). Over the 5-year period from 2002 through 2006, the Early Detection Program screened 1.8 million low-income, uninsured women. About 1.1 million women were screened for breast cancer, and 18,937 breast cancers were detected. Similarly, about 1.1 million women were screened for cervical cancer, and 22,377 cervical cancers and precursor lesions were detected. The age and race of women screened reflect the Early Detection Program’s policies that prioritize breast cancer screening for women 50 to 64 years old and cervical cancer screening for women 40 to 64 years old. Thus, women who received a mammogram tended to be older, with 71 percent age 50 or older. Women who received a Pap test tended to be younger, with 55 percent under age 50. (See fig. 1.)
The program also targets racial and ethnic minorities, who tend to have lower screening rates for breast and cervical cancer, so more than half the women screened were racial or ethnic minorities. (See fig. 2.) Most states extend Medicaid eligibility under the Treatment Act to more women than is minimally required—those whose screening or diagnostic services were paid for with CDC funds. As of October 2008, 17 states reported applying only this minimum definition in determining Medicaid eligibility under the Treatment Act. Of the states that extend eligibility, 15 states extend Medicaid eligibility to women served by a CDC-funded provider, whether or not CDC funds were used to pay for services. The remaining 19 states further extend eligibility to women who were screened and diagnosed by non-CDC-funded providers. (See fig. 5.) Seventeen states offer Medicaid eligibility only to women screened or diagnosed with CDC funds. Fifteen of these states require a woman to have received at least one CDC-funded screening or diagnostic service to be considered “screened under the program.” Two states, Florida and the District of Columbia, require that both the screening and diagnostic services be paid for with CDC funds for women to be eligible for Medicaid. Fifteen states extend Medicaid eligibility to women screened or diagnosed by a CDC-funded provider. In these states, women whose services were paid for with state or other funds, but delivered by a provider receiving some CDC grant funds, are considered eligible for Medicaid if they need treatment. This allows states that fund their Early Detection Programs above the contribution required to receive the CDC grant to extend eligibility to women screened by a program provider but with other funds. Nineteen states further extend Medicaid eligibility to women screened or diagnosed by a non-CDC-funded provider. Some of these states designate specific providers. 
For example, Iowa extends eligibility to women whose services were provided by Komen-funded providers. Other states consider women eligible for Medicaid under the Treatment Act if they were screened by any qualified provider. Among the states that limit Medicaid eligibility to women served only with CDC funds (17 states) or that extend eligibility to women served by a CDC-funded provider (15 states), some have alternate pathways to Medicaid eligibility for women initially screened or screened and diagnosed outside the Early Detection Program. In most of these states, women initially screened outside the program can qualify for Medicaid if they later receive their diagnostic services with CDC funds. Only four states reported they do not allow women who have been screened outside the program to receive diagnostic services under the program to qualify for Medicaid. In most of the states that limit Medicaid eligibility to women served with CDC funds or that extend eligibility to women served by a CDC-funded provider, once a woman who received her screening and diagnostic services outside the Early Detection Program is diagnosed with cancer, she cannot access Medicaid coverage under the Treatment Act. However, Early Detection Program directors in 6 of these states reported that women diagnosed outside the program can be rescreened under the program to qualify for Medicaid, and in 11 states women can qualify for Medicaid by receiving additional diagnostic services from a program provider. Although rescreening or providing additional diagnostic services is inefficient and may be medically unnecessary, program rules in some states require a woman to have received at least one CDC-funded service to qualify for Medicaid. Whether a woman can access Medicaid through one of these alternate pathways depends on her obtaining a referral and on the availability of funds and providers to deliver the additional screening and diagnostic services.
In implementing the Treatment Act, most states reported they require a confirmed diagnosis of breast cancer, cervical cancer, or precancerous lesions to meet the requirement that women be in need of cancer treatment services. Two states, Missouri and New Hampshire, indicated that a woman may be enrolled in Medicaid in order to receive certain diagnostic procedures, such as a biopsy or magnetic resonance imaging. A third state, Oklahoma, indicated that an abnormal screening test alone met the standard of needing treatment and qualified a woman for Medicaid coverage. In Oklahoma, women with an abnormal mammogram or Pap test are enrolled in Medicaid for their diagnostic services, and Medicaid coverage ends if they are found to not have a cancer diagnosis. As of October 2008, 20 states had adopted presumptive eligibility—an option allowed by the Treatment Act—to help women get treatment sooner by provisionally enrolling them in Medicaid while their full application is being processed. Among the states that do not have presumptive eligibility, Early Detection Program directors reported that the average length of time it takes a woman to be enrolled once her application has been submitted did not exceed 30 days, with an overall state average of 9 days. In most states, whether or not they have adopted presumptive eligibility, a separate visit to the Medicaid office is not required for a woman to be enrolled in Medicaid under the Treatment Act. Early Detection Program staff receive application materials and then forward applications to the Medicaid agency for approval. Medicaid enrollment under the Treatment Act varied widely in 2006, ranging from fewer than 100 women in each of South Dakota, Delaware, and Hawaii to more than 9,300 women in California. (See table 1.) Enrollment was concentrated in a few states, with California, Oklahoma, and Georgia accounting for more than half of all Treatment Act enrollees in 2006.
However, Treatment Act enrollees are a small share of Medicaid enrollees overall—less than 0.5 percent—with a median enrollment of 395 across 39 states reporting data for 2006. Enrollment may be affected by state policies and practices for initial and ongoing eligibility under the Treatment Act. In general, states with the highest enrollment, both in absolute terms and as a share of population, adopted the broadest definition of “screened under the program” by extending Medicaid eligibility to women served by non-CDC-funded providers. In 2006, median enrollment was 639 in these states, or an average of 124 enrollees per 100,000 women 40 to 64 years old. In contrast, median enrollment was 265 in states that limit eligibility to women served with CDC funds or by a CDC-funded provider. In these states an average of 44 women were enrolled for every 100,000 women 40 to 64 years old. Medicaid enrollment of women covered under the Treatment Act has grown in most states. Seven states experienced growth greater than 70 percent, while one state reported a significant decline from 2004 to 2006. (See app. III.) From 2004 to 2006, the median rate of enrollment growth was 40 percent among the 35 states reporting data for both years. States that shifted to broader definitions of “screened under the program” generally experienced higher than average growth. Among states that initially applied the minimum definition of screened under the program, but later broadened eligibility to include women screened by non-CDC-funded providers, enrollment growth averaged 67 percent from 2004 to 2006. For example, in 2004 South Carolina limited Medicaid eligibility to women served with CDC funds, but in July 2005 it extended coverage to women served by any qualified provider in the state. Its enrollment grew from 162 women in 2004 to 614 women in 2006.
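The enrollment-rate and growth figures in this discussion follow directly from the reported counts. A sketch of the arithmetic, using the South Carolina counts cited above (the 500,000-women population in the second example is hypothetical, chosen only to show the per-100,000 calculation):

```python
def per_100k(enrollees: int, women_40_to_64: int) -> float:
    """Enrollees per 100,000 women aged 40 to 64."""
    return enrollees / women_40_to_64 * 100_000

def growth_pct(earlier: int, later: int) -> float:
    """Percentage change between two enrollment counts."""
    return (later - earlier) / earlier * 100

# South Carolina's reported enrollment rose from 162 (2004) to 614 (2006):
print(round(growth_pct(162, 614)))  # 279

# Hypothetical state with 500,000 women aged 40 to 64 and 620 enrollees:
print(round(per_100k(620, 500_000)))  # 124
```

Note that South Carolina's near-quadrupling far exceeds the 67 percent average for its group, which is an average across all states that broadened their definitions.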
Enrollment in Medicaid under the Treatment Act can also be affected by state policies and practices for periodic redetermination of Medicaid eligibility. Practices for redetermining eligibility can range from a statement by the beneficiary that she continues to need treatment to a verbal or signed statement by the health provider of the beneficiary’s treatment status. For example, in West Virginia, Medicaid enrollment declined from 709 in 2004 to 247 in 2006 after the state imposed stricter redetermination requirements in 2004. As with enrollment, average per capita Medicaid spending under the Treatment Act also varies widely across states (see fig. 6). Among the 39 states reporting Medicaid enrollment and spending data for 2006, total monthly spending per Treatment Act enrollee averaged $1,067, ranging from $584 in Oklahoma to $2,304 in Colorado. Federal funds accounted for more than two-thirds of this spending. The average monthly state share per enrollee was $307, ranging from $131 in Oklahoma to $806 in Colorado. States receive an enhanced federal matching assistance percentage, which is the amount the federal government reimburses states for expenditures incurred in providing services to women enrolled in Medicaid under the Treatment Act. In 2006, these percentages ranged from 65 percent to 83 percent. Some of the variation in average total spending per Treatment Act enrollee may be accounted for by differences in state Medicaid reimbursement rates and variation in states’ Medicaid benefit packages. It may also be affected by the relative proportion of breast and cervical cancer patients. For example, a 2007 study using state Medicaid claims data from 2003 in Georgia found that spending for breast cancer patients averaged more than twice that for cervical cancer patients.
In 2003, annual Medicaid spending was $20,285 for each woman with breast cancer, but $9,845 for each woman with cervical cancer. State eligibility policies and practices can also affect average spending. For example, Oklahoma, the state with the lowest monthly per person spending under the Treatment Act, enrolls women in Medicaid based on the results of an abnormal screening test alone. Thus, according to an Oklahoma official, many women in Oklahoma are enrolled in Medicaid only for diagnostic services and do not subsequently incur costs for cancer treatment. At $584 per month in 2006, average Medicaid spending per Treatment Act enrollee in Oklahoma was the lowest of the 39 states for which we have data. West Virginia reduced its overall enrollment from 709 in 2004 to 247 in 2006 by taking a proactive approach to disenrolling women who have completed their cancer treatment and cannot otherwise qualify for Medicaid. The state requires more than just a woman’s self-certification of her continued need for treatment; case managers actively follow women receiving treatment, and a registered nurse evaluation is required to certify their continued need for treatment and Medicaid eligibility. While total spending in West Virginia declined 50 percent in 2006, average monthly per enrollee spending increased by 19 percent, from $894 to $1,064. Among states that limit Medicaid eligibility under the Treatment Act to women screened with CDC funds or that extend Medicaid eligibility to women screened by a CDC-funded provider, few statewide alternatives to Medicaid coverage for treatment are available to low-income, uninsured women who are screened and diagnosed outside of the Early Detection Program. Early Detection Program directors in four states reported having state-funded programs as an alternative to Medicaid. These programs pay specifically for breast or cervical cancer treatment or more broadly provide health insurance coverage or free or reduced-fee health care.
The Maryland Breast and Cervical Cancer Diagnosis and Treatment Program pays specifically for breast and cervical cancer diagnosis and treatment services, according to our survey. Maryland residents who are within 250 percent of the FPL, are uninsured or meet other health insurance criteria, and were screened for breast or cervical cancer by any medical provider, may be eligible for this program. The Delaware Cancer Treatment Program can pay for treatment of breast or cervical cancer, according to our survey. Delaware residents who have been diagnosed with cancer on or after July 1, 2004, have no comprehensive health insurance coverage, and have household incomes less than 650 percent of the FPL may be eligible for free cancer treatment for up to 2 years under this program. The state charity hospital system in Louisiana—which provides free health care services for low-income, uninsured residents below 200 percent of the FPL—can provide free breast and cervical cancer treatment, according to our survey. The hospital system also provides reduced-fee care to individuals with incomes above 200 percent of the FPL. The Healthy Indiana Plan provides health insurance coverage for state residents who are 19 to 64 years old, earn less than 200 percent of the FPL, have been uninsured for the past 6 months, and do not have access to employer-sponsored health insurance coverage, according to our case study. A program official stated that the benefit package was similar to that of Medicaid and included the same provider network. Since the program’s implementation in January 2008, enrollment has been higher than expected, and needed treatment could be delayed because the enrollment process may take 60 to 90 days. Early Detection Program directors, advocacy groups, and providers reported in our survey and case studies that some local resources were available as alternatives to Medicaid to pay for treatment of breast or cervical cancer. 
These include donated care, funding from local charity organizations, and county assistance. Physicians may donate free health care services to low-income, uninsured individuals. Fourteen states reported through our survey having donated care available as a resource for breast or cervical cancer treatment. For example, Project Access has networks of physicians in Virginia that provide donated care to eligible residents in local areas. Local charity organizations can provide resources to pay for breast or cervical cancer treatment, and 20 states reported through our survey having charity funds available. For example, Anthem Blue Cross Blue Shield and Komen for the Cure affiliates in Indiana provide funding for breast or cervical cancer treatment services for low-income, uninsured women. County indigent funds, public assistance programs, and county hospitals can cover some health care costs for low-income, uninsured individuals in some areas. Eleven states reported having some county indigent funds or other public assistance programs available, according to our survey. In Florida, county hospitals provide breast and cervical cancer screening and diagnostic services, as well as funding for treatment costs, for low-income, uninsured women. However, the availability of these resources varied by locality, and 21 Early Detection Program directors reported as much in our survey. Furthermore, in our case studies, several officials and providers cited concerns over the availability of treatment resources on a local level. For example, an Early Detection Program official in Indiana told us that densely populated areas of the state, such as North Central Indiana and South Bend, had multiple treatment resources, but women living in rural areas had limited access to them. A Komen for the Cure official in Indiana stated there was only 1 county hospital to serve low-income, uninsured residents in a 21-county region. 
We also spoke with the executive director of a Komen affiliate in Florida who said that some areas of the state, such as West Palm Beach and Tallahassee, had limited treatment resources, while southern areas had more accessible resources. Furthermore, physicians we spoke to in Virginia stated that treatment alternatives vary by location in the state, and some areas have problems with access to care. Although not required, some Early Detection Program staff help women screened outside the program and ineligible for Medicaid under the Treatment Act find local treatment resources, as reported in two of our case study states. Officials said they encouraged these women to contact local or county hospitals or referred them to available local programs. In addition, three Early Detection Program directors surveyed reported having programs that track the treatment process for women screened outside the Early Detection Program. Furthermore, in some states, charity organizations have programs to provide referrals to low-income, uninsured women for local treatment resources. We learned from advocacy group representatives in our case study states that Komen for the Cure and the American Cancer Society operate cancer resource hotlines and health insurance information hotlines women can call for information about local cancer treatment resources. They also fund patient navigators who provide counseling and support services, which include finding local programs for women ineligible for Medicaid under the Treatment Act. The Department of Health and Human Services (HHS) reviewed a draft of this report and provided comments on our findings, which are reprinted in appendix IV. Overall, HHS concurred with our description of the Early Detection Program. HHS indicated that the data we provided on states’ implementation of the Treatment Act, including eligibility options, Medicaid enrollment, and treatment cost data were useful. 
Finally, HHS noted that the information contained in our report will be used to make improvements to better serve low-income women. HHS also provided technical comments, which we incorporated as appropriate. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Director of CDC, the Administrator of CMS, appropriate congressional committees, and other interested parties. The report also is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To determine how many eligible women have been screened by the Early Detection Program, we compared the number of women screened by the Early Detection Program with the number of low-income, uninsured women eligible to be screened, including those who were screened by another provider or were not screened by any provider. We analyzed data from the Centers for Disease Control and Prevention’s (CDC) Minimum Data Elements (MDE) to determine the number of women screened by the Early Detection Program. Program grantees report these data to the CDC twice a fiscal year (October and April). MDE data include data for some women whose services were paid for in part with state or other nonfederal funding. 
We analyzed MDE data for calendar years 2002 through 2006, including information in total and by grantee on the number of women screened by the Early Detection Program—those who had mammograms and Pap tests—and the number of breast cancers and cervical cancers or precursor lesions detected. We also analyzed the age, race, and ethnicity distributions of the women screened. The Early Detection Program has policies and procedures for standardizing and assessing the quality of the MDE data submitted by grantees. We found the data to be sufficiently reliable for our purposes by reviewing these policies and procedures and the results of an MDE data validation study. We then compared the number of women screened by the Early Detection Program to the number of women potentially eligible for screening, which we determined with data collected from the Medical Expenditure Panel Survey (MEPS), administered by the Agency for Healthcare Research and Quality. For our analysis of women receiving mammograms, we pooled MEPS data for 2005 and 2006 because the U.S. Preventive Services Task Force recommends that women receive a mammogram every 1 to 2 years. We identified how many women were 40 to 64 years old—the age group generally eligible for a mammogram by the Early Detection Program—as well as low income and uninsured. We defined low income as at or below 250 percent of the federal poverty level (FPL) because federal guidelines allow the Early Detection Program to pay for services to women whose income is at or below this level. According to MEPS, women are considered uninsured if they indicated for each of the 12 months of the year that they were not covered under any type of health insurance for the entire month. Although underinsured women are eligible for screenings provided by the Early Detection Program, we were not able to identify this population in MEPS. 
Next, we determined how many of these potentially eligible low-income, uninsured women 40 to 64 years old received a mammogram in 2005 to 2006. We then compared this number with the number of women that the Early Detection Program screened with a mammogram in 2005 to 2006. For our analysis of women receiving Pap tests, we pooled MEPS data for 2004, 2005, and 2006 because the U.S. Preventive Services Task Force recommends that women receive a Pap test at least every 3 years. We identified how many women were 18 to 64 years old—the age group generally eligible for a Pap test by the Early Detection Program—as well as low-income and uninsured, using the above criteria. We determined how many women meeting these criteria received a Pap test in 2004 to 2006. We compared this number with the number of women that the Early Detection Program screened with a Pap test in 2004 to 2006. In our analyses of women receiving mammograms and Pap tests, we did not examine why women did not receive either of these screening tests, because it was beyond the scope of this report. We determined that the MEPS data were sufficiently reliable for our purposes by speaking with knowledgeable agency officials at the Agency for Healthcare Research and Quality, reviewing related documentation, and comparing our results with CDC and U.S. Census data. To determine how states have implemented the Treatment Act, we conducted a Web-based survey of Early Detection Program directors in the 51 states. We reviewed federal guidelines for implementing the Treatment Act, and interviewed Early Detection Program directors and other officials in selected states to gather information to design the survey questions. We reviewed previous studies of the Treatment Act conducted by George Washington University in 2004 under contract with the CDC and by Susan G. Komen for the Cure (Komen for the Cure) in 2007. 
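The screening comparison described in this appendix reduces to shares of the pooled eligible population: women screened by the program, women screened by other providers, and women not screened. A hedged sketch of that computation, assuming hypothetical pooled counts (none of these numbers come from MEPS or the MDE data):

```python
# Sketch of the screening-share comparison: program screenings vs. other
# providers vs. not screened, as percentages of the eligible population.
# All counts are hypothetical placeholders.

def screening_shares(program: int, other: int, unscreened: int) -> tuple:
    """Return whole-percent shares for each screening category."""
    total = program + other + unscreened
    return tuple(round(100 * n / total) for n in (program, other, unscreened))

# Hypothetical pooled counts (in thousands of eligible women).
print(screening_shares(program=150, other=260, unscreened=600))  # (15, 26, 59)
```

Because each share is rounded independently, the three percentages need not sum to exactly 100.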
We determined that the Early Detection Program directors were knowledgeable about their states’ Medicaid eligibility policies and practices for the Treatment Act based on this review and discussions with CDC and Centers for Medicare and Medicaid Services (CMS) officials. The survey included both closed-ended and open-ended questions on characteristics of the Early Detection Program, implementation of the Treatment Act, Medicaid eligibility criteria, and the Medicaid enrollment process. We pretested the survey at CDC’s national meeting of Early Detection Program directors in Atlanta, Georgia, on September 9, 2008. The survey was fielded during October 2008, and we obtained a 100 percent response rate from all 50 states and the District of Columbia. Survey responses were edited for logic and appropriate skip patterns. We reviewed survey responses for outliers and followed up with officials in selected states to verify the accuracy of responses. To determine the number of women enrolled in state Medicaid programs under the Treatment Act and average state spending for this coverage, we analyzed enrollment and spending data from CMS’s Medicaid Statistical Information System (MSIS) as presented in the MSIS State Summary Datamart. The MSIS contains state-submitted Medicaid enrollment and claims data, including each person’s basis of eligibility, use of services, basic demographic characteristics, and payments made to providers. We used MSIS data on the number of women enrolled in Medicaid with the Treatment Act as their basis of eligibility by state for fiscal years 2004 and 2006. We then calculated the average per person monthly spending by state for fiscal year 2006 using MSIS data on total spending for Medicaid enrollees under the Treatment Act and the total number of months of eligibility accounted for by all enrollees during the year. 
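The per-enrollee spending figure described above is total annual spending divided by total enrollee eligibility-months. A minimal sketch of that calculation with hypothetical inputs (the dollar and month figures are illustrative, not MSIS data):

```python
# Sketch of the MSIS-based calculation: average monthly Medicaid spending
# per Treatment Act enrollee. Inputs below are hypothetical.

def avg_monthly_spending(total_spending: float, eligibility_months: int) -> float:
    """Average spending per enrollee per month of Medicaid eligibility."""
    return total_spending / eligibility_months

# Hypothetical state: $2,400,000 in total spending over 2,250 enrollee-months.
print(round(avg_monthly_spending(2_400_000, 2_250)))  # prints 1067
```

Dividing by eligibility-months rather than by enrollee counts prevents women enrolled for only part of the year from inflating the per-person average.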
Our analysis was limited to 38 states for 2004 and 39 states for 2006 because MSIS data on enrollment and spending were not available for all states or for all years. According to CMS, data from the remaining states either were not reported separately for Treatment Act eligibility or had not yet passed CMS’s data quality control process. In addition, we could not separately determine both the number of women enrolled in Medicaid and Medicaid costs for women by diagnosis (breast cancer, cervical cancer, or precancerous conditions) because enrollment data reported in the MSIS State Summary Datamart are not broken down by diagnostic category. We worked with CMS officials to establish the reliability of the data used in our analysis. States submit their MSIS data quarterly to CMS. The data are subjected to a series of quality control edit checks. Data files that exceed prescribed error tolerance limits are rejected and must be resubmitted by states until they are determined acceptable by CMS. Following the quality review process, data are then posted to CMS’s public Web site. We also reviewed MSIS documentation including user manuals, design specifications, a data dictionary, and known MSIS data anomalies. We also interviewed knowledgeable CMS officials and followed up with states whose reported enrollment and per capita spending data appeared as outliers when we arrayed the data for all states. We determined that the data were sufficiently reliable for our purposes based on our review. To identify alternatives available to low-income, uninsured women who need treatment for breast or cervical cancer, but who are not covered under the Treatment Act, we obtained general information from our Web-based survey of Early Detection Program directors (described above).
We targeted the relevant survey questions to states that limited Medicaid eligibility under the Treatment Act to women screened or diagnosed with CDC funds or that extended Medicaid eligibility to women screened by a CDC-funded provider. Our findings were limited by responses to a narrowly worded survey question on statewide programs for breast and cervical cancer diagnosis and treatment and may not necessarily account for all available statewide or state-funded programs. We also conducted case studies of three states that limited Medicaid eligibility under the Treatment Act to women screened or diagnosed with CDC funds only: Florida, Indiana, and Virginia. We chose these states because their rate of screening eligible women was lower than the national average. In each state, we interviewed: Early Detection Program directors and other officials; representatives from Komen for the Cure, American Cancer Society local chapters, and other state or local organizations; and health care providers. We developed a protocol for each interview with semistructured interview questions and obtained detailed examples of available alternatives to Medicaid under the Treatment Act. Our findings are illustrative examples and thus are not generalizable, because the officials we surveyed and interviewed may not have had complete knowledge of all available local resources, and because available resources may vary by state. We conducted our work from May 2008 to May 2009 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions.
In addition to the contact named above, Jennifer Grover, Assistant Director; Anne Dievler; Eric Anderson; Seta Hovagimian; Dan Ries; Hemi Tewarson; Timothy J. Walker; and Suzanne Worth made key contributions to this report.

Tens of thousands of women die each year from breast or cervical cancer. While screening and early detection through mammograms and Pap tests--followed by treatment--can improve survival, low-income, uninsured women are often not screened. In 1990, Congress authorized the Centers for Disease Control and Prevention (CDC) to fund screening and diagnostic services for such women, which led CDC to establish the National Breast and Cervical Cancer Early Detection Program. The Breast and Cervical Cancer Prevention and Treatment Act of 2000 was also enacted to allow states to extend Medicaid eligibility to women who were screened under the Early Detection Program and need breast or cervical cancer treatment. “Screened under the program” is defined, at a minimum, as screening paid for with CDC funds. GAO examined the Early Detection Program's screening of eligible women, states' implementation of the Treatment Act, Medicaid enrollment and spending under the Treatment Act, and alternatives available to women ineligible for Medicaid under the Treatment Act. To do this, GAO compared CDC data on women screened by the Early Detection Program from 2002 to 2006 with federal estimates of the eligible population, surveyed program directors on the 51 states' (including the District of Columbia) implementation of the Treatment Act, analyzed Medicaid enrollment and spending data, and conducted case studies in selected states. The CDC's Early Detection Program providers screen more than half a million low-income, uninsured women a year for breast and cervical cancer, but many eligible women are screened by other providers or not screened at all.
Comparing CDC screening data with federal estimates of low-income, uninsured women, GAO estimated that from 2005 through 2006, 15 percent of eligible women received a mammogram from the Early Detection Program, while 26 percent were screened by other providers and 60 percent were not screened. For Pap tests, GAO estimated that from 2004 through 2006, 9 percent were screened by the program, 59 percent by other providers, and 33 percent were not screened. Most states extend Medicaid eligibility under the Treatment Act to more women than is minimally required. As of October 2008, 17 states met the minimum requirement to offer Medicaid eligibility to women whose screening or diagnostic services were paid for with CDC funds; 15 extended eligibility to women screened or diagnosed by a CDC-funded provider, whether CDC funds paid specifically for these services or not; and 19 states further extended eligibility to women who were screened or diagnosed by a non-CDC-funded provider. In most of the states that offer Medicaid eligibility only to women served with CDC funds or by a CDC-funded provider, if a woman is screened and diagnosed with cancer outside the Early Detection Program, she cannot access Medicaid coverage under the Treatment Act. Medicaid enrollment and average spending under the Treatment Act vary across states. In 2006, state enrollment ranged from fewer than 100 women to more than 9,300. Median enrollment was 395 among the 39 states reporting data, with most experiencing enrollment growth from 2004 to 2006. Among the 39 states, average monthly spending per enrollee was $1,067, ranging from $584 to $2,304. Spending may vary due to several factors, including differences in state eligibility policies and practices and Medicaid benefit plan design. Few statewide alternatives to Medicaid coverage are available to low-income, uninsured women who need breast or cervical cancer treatment but are ineligible for Medicaid under the Treatment Act. 
Early Detection Program directors in only four of the states with more limited eligibility standards reported having a statewide program that pays for cancer treatment or provides broader health insurance or free or reduced-fee care. And while several sources identified possible local resources as alternatives--donated care, funding from local charity organizations, and county assistance--the availability and applicability of these resources varies by area. For example, an Early Detection Program official in Indiana told us that densely populated areas of the state had multiple treatment resources, but women living in rural areas had limited access to them. |
Basic training is the initial training provided to military recruits upon entering one of the military services. While the program and length of instruction vary somewhat among the services, the intent of the training is to transform male and female recruits from civilians into military service members. Basic training typically consists of physical conditioning; learning the military service’s core values, history, and tradition; weapons qualification; instilling discipline; and nuclear, biological, and chemical protection training along with other training needed for initial entry into the services. The training varies in length—typically 6.4 weeks in the Air Force, 9 weeks in the Army and Navy, and 12 weeks in the Marine Corps. Following completion of basic training, recruits attend advanced individual training to further enhance skills in particular areas of interest (military occupational specialties). Upon arriving at a basic training location, recruits are processed and are generally housed for several days in reception barracks pending their assignment to a training unit and their primary barracks for the duration of the basic training period. For the most part, the housing accommodations within existing barracks are typically the same, regardless of male or female occupancy. DOD standards dictate space requirements of 72 square feet of living space per recruit, but the services often provide less than that, particularly during the summer months when a surge of incoming recruits usually occurs. In the Navy and Air Force, male and female recruits are housed on different floors in the buildings. In the Army, Fort Jackson and Fort Leonard Wood are the only locations where both male and female recruits undergo basic training, and they are housed separately in the same buildings, sometimes on the same floor.
In the Marine Corps, all female recruits receive basic training at Parris Island, and they are housed in separate barracks. While the barracks across the services differ in design, capacity, and age, it is common for the barracks to have 2 or 3 floors with central bathing areas and several “open bays” housing from 50 to 88 recruits each in bunk beds. Some of the barracks, such as the Army’s “starships” and the Air Force barracks, are large facilities that house over 1,000 recruits. Others, especially those constructed in the 1950s and early 1960s, are smaller with recruit capacities of about 240 or less. Table 1 provides an overall summary of the number and age of the military services’ recruit barracks, along with the number of recruits trained in fiscal year 2001. As shown in the table, the Army has the largest number of barracks—over 60 percent of the total across the services—and trains nearly one-half of the recruits entering the military. The Army also uses temporary barracks, referred to as “relocatables,” to accommodate recruits at locations where capacity is an issue. Figure 1 depicts an exterior view of recruit barracks at Lackland Air Force Base, Texas, an “open bay” living space at the Marine Corps Recruit Depot at Parris Island, South Carolina, and an Army temporary (relocatable) barracks at Fort Sill, Oklahoma. Until recently, DOD had no readiness reporting system in place for its defense installations and facilities. In fiscal year 2000, DOD reported to the Congress for the first time on installation readiness as an integral element of its overall Defense Readiness Reporting System. At the core of the system is a rating classification, typically referred to as a “C” rating. The C-rating process is intended to provide an overall assessment that considers condition and capacity for each of nine facility classes (e.g., “operations and training,” and “community and housing”) on a military installation.
Recruit training barracks fall within the community-and-housing facility class. The definitions for the C-ratings are as follows: C-1—only minor facility deficiencies with negligible impact on capability to perform missions; C-2—some facility deficiencies with limited impact on capability to perform missions; C-3—significant facility deficiencies that prevent performing some missions; and C-4—major facility deficiencies that preclude satisfactory mission accomplishment. Each service has the latitude to develop its own processes in establishing C-ratings for its facilities. The services’ systems for assessing the condition of facilities are: the Army’s Installation Status Report; the Air Force’s Installations’ Readiness Report; the Navy’s Installation Readiness Reporting System; and the Marine Corps’ Commanding Officer’s Readiness Reporting System. These systems generally provide aggregate assessments of the physical condition of facilities based on periodic facility inspections. The Department subsequently aggregates the services’ reports and submits an overall assessment for each facility class to the Congress in the Department’s Quarterly Readiness Report. The majority of the services’ basic training installations had given their recruit barracks a C-3 rating, indicating they have significant deficiencies. Despite the acceptable outward appearance and generally good condition of most barracks’ exteriors, our visits to the training locations confirmed that most barracks had significant (C-3) or major (C-4) deficiencies requiring repair or facility replacement. Our site visits also revealed some apparent inconsistencies in the services’ ratings of their facilities’ condition, and conditions varied by location. Among barracks in poor condition, we observed a number of typical heating, air conditioning, ventilation, and plumbing-related deficiencies that formed the basis of the services’ ratings for their barracks.
Base officials told us that, although these deficiencies had an adverse impact on the quality of life for recruits and were a burden on trainers, they were able to accomplish their overall training mission. At the same time, we noted recent improvements had been made to some recruit barracks at various locations. We observed that, overall, the services’ recruit training barracks had significant or major deficiencies, but that conditions of individual barracks vary by location. In general, we observed that the Army’s, Navy’s, and Marine Corps’ Parris Island barracks were in the worst physical condition. Table 2 shows the services’ overall rating assessments for the recruit barracks by specific location and the typical deficiencies in those barracks that form the basis of the ratings. With the exception of Parris Island, all locations reported either C-3 or C-4 ratings for their barracks. These ratings are relatively consistent with the ratings of other facilities within the DOD inventory. Recent defense data show that nearly 70 percent of all DOD facilities are rated C-3 or C-4. Further, as shown in appendix 2, the C-ratings for recruit training barracks are not materially different from the ratings of other facilities at the training locations we visited. The C-ratings depicted in table 2 show the overall condition of the recruit barracks at a specific location, but the condition of any one building within a service and at a specific location could differ from the overall rating. The Army, with the greatest number of barracks, had the most problems. For the most part, the Army’s barracks were in overall poor condition across its training locations, but some, such as a recently renovated barracks at Fort Jackson and a newly constructed reception barracks at Fort Leonard Wood, were in better condition. 
The Navy barracks, with the exception of a reception barracks newly constructed in 2001, were in similarly degraded condition because the Navy, having decided to replace all of its barracks, had limited its maintenance expenditures on these facilities in recent years. Of the Marine Corps locations, Parris Island had many barracks in poor condition, the exception being a recently constructed female barracks. The barracks at San Diego and Camp Pendleton were generally in much better shape. The Air Force’s barracks, particularly five of eight barracks that had recently been renovated, were in generally better condition than the barracks at most locations we visited. Our visits to the basic training locations confirmed that most of the barracks had significant or major deficiencies, but we found some apparent inconsistencies in the application of C-ratings to describe the condition of the barracks. For example, as a group, the barracks at the Marine Corps Recruit Depot, Parris Island, were the highest rated—C-2—among all the services’ training barracks. The various conditions we observed, however, suggested that they were among the barracks with the worst physical condition we had seen. Marine Corps officials acknowledged that, although they had completed a recent inspection of the barracks and had identified significant deficiencies, the updated data had not yet been entered into the ratings database. As a result, the rating was based on outdated data. On the other hand, the barracks at the Marine Corps Recruit Depot, San Diego, were rated C-3, primarily due to noise from the San Diego airport that is next to the depot. Otherwise, our observations indicated that these barracks appeared to be in much better physical condition than those at Parris Island, in part because the San Diego barracks were being renovated.
After we completed our work, the Marine Corps revised its Parris Island and San Diego barracks' ratings to C-4 and C-2, respectively, in its fiscal year 2002 report. The Air Force barracks were rated C-3, but we observed them to be among those barracks in better physical condition, and in significantly better condition than the Army barracks that were also rated C-3. The Navy's C-4 rating for its barracks was borne out by our visits: like the Marine Corps' Parris Island barracks and the Army barracks, the Navy barracks were, in general, among those in the worst physical condition. In our discussions with service officials, we learned that the services use different methodologies to arrive at their C-ratings. For example, the services, except for the Army, use engineers to periodically inspect facility condition and identify needed repair projects. The Army uses building occupants to perform its inspections using a standard inspection form. Further, the services, except for the Army, consider the magnitude of needed repair costs for the barracks at the training locations in determining the facilities' C-ratings. While these methodological differences may produce inconsistencies in C-ratings across the services, we did not specifically review the impact the differences may have on the ratings in this assignment. Instead, we are continuing to examine consistency issues regarding service-wide facility-condition ratings as part of our broader ongoing work on the physical condition and maintenance of all DOD facilities. Our visits to all 10 locations where the military services conduct basic training confirm that most barracks have many of the same types of deficiencies that are shown in table 2. The most prevalent problems included a lack of or inadequate heating and air conditioning, inadequate ventilation (particularly in bathing areas), and plumbing-related deficiencies. Inadequate heating or air conditioning in recruit barracks was a common problem at most locations.
The Navy’s barracks at Great Lakes, for example, had no air conditioning, and base officials told us that it becomes very uncomfortable at times, especially in the summer months when the barracks are filled with recruits who have just returned from training exercises. During our visit, the temperature inside several of the barracks we toured ran above 90 degrees with little or no air circulation. Base officials also told us that the excessive heat created an uncomfortable sleeping situation for the recruits. At the Marine Corps Recruit Depot at Parris Island, several barracks that had been previously retrofitted to include air conditioning had continual cooling problems because of improperly sized equipment and ductwork. Further, we were told by base officials that a high incidence of respiratory problems affected recruits housed in these barracks (as well as in some barracks at other locations), and the officials suspected mold spores and other contaminants arising from the filtration system and ductwork as a primary cause. At the time of our visit, the Marine Corps was investigating the health implications arising from the air-conditioning system. And, during our tour of a barracks at Fort Sill, Army personnel told us that the air conditioning had been inoperable in one wing of the building for about 2 years. Inadequate ventilation in recruit barracks, especially in central bathing areas that were often subject to overcrowding and heavy use, was another common problem across the services. Many of the central baths in the barracks either had no exhaust fans or had undersized units that were inadequate to expel moisture arising from shower use. As a result, mildew formation and damage to the bath ceilings, as shown in figure 2, were common. In barracks that had undergone renovation, however, additional ventilation had been installed to alleviate the problems. Plumbing deficiencies were also a common problem in the barracks across the services. 
Base officials told us that plumbing problems—including broken and clogged toilets and urinals, inoperable showers, pipe leaks, and slow or clogged drainpipes and sinks—were recurring problems that often awaited repairs due to maintenance-funding shortages. As shown in figures 3 and 4, we observed leaking drainpipes and broken or clogged bath fixtures in many of the barracks we visited. In regard to the broken fixtures, training officials told us that the problems had exacerbated an undesirable situation that already existed in the barracks—a shortage of fixtures and showers to adequately accommodate the demands of recruit training. These officials told us that because of the inadequate bath facilities for the high number of recruits, they often had to perform “workarounds”—such as establishing time limits for recruits taking showers—in order to minimize, but not eliminate, adverse effects on training time. Base officials at most of the locations we visited attributed the deteriorated condition of the recruit barracks to recurring inadequate maintenance, which they ascribed to funding shortages that had occurred over the last 10 years. Without adequate maintenance, facilities tend to deteriorate more rapidly. In many cases, officials said they were focusing on emergency repairs rather than performing routine preventive maintenance. Our analysis of cost data generated by DOD’s facility sustainment model showed, for example, that Fort Knox required about $38 million in fiscal year 2002 to sustain its base facilities. However, base officials told us they received about $10 million, or 26 percent, of the required funding. Officials at other Army basic training sites also told us that they typically receive 30 to 40 percent of the funding they consider necessary to sustain their facilities.
Army officials told us that, over time, the maintenance funding shortfalls at their training bases have been caused primarily by the migration of funding from maintenance accounts to support other priorities, such as the training mission. While most barracks across the services had significant deficiencies, others were in better condition, primarily because they had recently been constructed or renovated. Those barracks that we observed to be in better condition were scattered throughout the Army, Air Force, and Marine Corps locations. Even at those locations where some barracks were in very poor condition, we occasionally observed other barracks in much better condition. For example, at Parris Island, the Marine Corps recently completed construction of a new female recruit barracks. At Fort Jackson, the Army repaired windows, plumbing, and roofs in several “starship” barracks, and similar repairs were underway in two other starships. Figures 5 and 6 show renovated bath areas at Lackland Air Force Base in Texas and the Marine Corps Recruit Depot at San Diego. The services’ approaches to recapitalizing their recruit barracks vary and are influenced by their overall priorities to improve all facilities. The Marine Corps and Air Force are focusing primarily on renovating existing facilities, while the Navy plans to construct all new recruit barracks. The Army also expects to renovate and construct recruit barracks, but the majority of the funding needed to support these efforts is not expected to be programmed and available until after 2008 because of the priority placed on improving bachelor enlisted quarters. Table 3 summarizes the services’ recapitalization plans. The Navy has placed a high priority on replacing its 16 recruit barracks by fiscal year 2009 at an estimated cost of $570 million using military construction funds. The Navy recently completed a new recruit reception barracks, and the Congress has approved funding for four additional barracks.
Two barracks are under construction with occupancy expected later this year (see fig. 7), and the contract for two more barracks was awarded in May 2002. The Navy has requested funds for another two barracks in its fiscal year 2003 military construction budget submission and plans to request funds for the remaining nine barracks in fiscal years 2004 through 2007. The Navy expects construction on the last barracks to be completed by 2009. Navy officials told us that other high-priority Navy-wide efforts (e.g., providing quality bachelor enlisted quarters and housing for sailors while ships are in homeport) could affect the Navy’s recapitalization efforts for recruit barracks. The Army projects an estimated $1.7 billion will be needed to renovate or replace much of its recruit training barracks, but most of the work is long-term over the next 20 years, primarily because renovating and replacing bachelor enlisted quarters has been a higher priority in the near-term. Through fiscal year 2003, the Army expects to spend about $154 million for two new barracks—one each at Fort Jackson and Fort Leonard Wood. Army officials stated that barracks at these locations were given priority over other locations because of capacity shortfalls at these installations. After fiscal year 2003, the Army estimates spending nearly $1.6 billion in military construction funds to recapitalize other recruit barracks—about $359 million to renovate existing barracks at several locations and about $1.2 billion to build new barracks at all locations, except Fort Sill. Only Forts Jackson and Leonard Wood are expected to receive funding for new barracks through fiscal year 2007. Further, the Army does not expect to begin much additional work until after 2008, when it expects to complete the renovation or replacement of bachelor enlisted quarters. As a result, Army officials stated that the remaining required funding for recruit barracks would most likely be requested between 2009 and 2025.
The Marine Corps has a more limited recruit barracks recapitalization program, primarily because it has placed a high priority on renovating or replacing bachelor enlisted quarters in the near-term. The three recruit training installations plan to renovate their existing recruit barracks and construct two additional barracks at Parris Island and San Diego. The Marine Corps expects to spend about $40 million in operation and maintenance funds to renovate existing barracks at its training locations by fiscal year 2004. The renovations include replacing the bath and shower facilities, replacing hot water and heating and air conditioning systems, and upgrading the electrical systems. The Marine Corps also expects to spend at least $16 million in military construction for the new barracks by fiscal year 2009. The Air Force has placed a high priority on renovating, rather than replacing, its recruit barracks in the near-term. It expects to spend about $89 million—primarily operation and maintenance funds—to renovate its existing barracks and convert another facility for use as a recruit barracks. As of April 2002, the Air Force had renovated five of its existing eight barracks and expected to complete the remaining renovations by 2006. The renovations include upgrading heating, ventilation, and air-conditioning systems as well as installing new windows and improving the central baths. Due to expected increases in the number of recruits, the Air Force has also identified an additional building to be renovated for use as a recruit barracks. The Air Force intends to complete this renovation in fiscal year 2003. Officials at Lackland Air Force Base stated they are currently drafting a new base master plan, which identifies the need to build new recruit barracks starting around 2012. We requested comments on a draft of this report from the Secretary of Defense.
An official from the Office of the Deputy Under Secretary of Defense (Installations & Environment) orally concurred with the information in our report and provided technical comments that we incorporated as appropriate. We performed our work at the Office of the Secretary of Defense and the headquarters of each military service. We also visited each military installation that conducts recruit basic training—Fort Jackson, South Carolina; Fort Benning, Georgia; Fort Knox, Kentucky; Fort Leonard Wood, Missouri; Fort Sill, Oklahoma; Great Lakes Naval Training Center, Illinois; Lackland Air Force Base, Texas; Marine Corps Recruit Depot, Parris Island, South Carolina; Marine Corps Recruit Depot, San Diego, California; and Camp Pendleton, California. In discussing recruit barracks, we included barracks used to house recruits attending the Army’s One Station Unit Training. This training, which is conducted at select basic training locations for recruits interested in specific military occupational specialties, combines basic training with advanced individual training into one continuous course. To assess the physical condition of recruit barracks, we reviewed the fiscal year 2000 and 2001 installation readiness reports and supporting documentation for the 10 installations that conduct basic training. We also toured several barracks at each installation and photographed conditions of the barracks. Finally, we interviewed officials at the services’ headquarters and each installation regarding the process used to inspect facilities and collect information to support the condition ratings, as well as the underlying reasons for the current condition of the facilities. To determine the services’ plans to sustain and recapitalize recruit barracks, we reviewed the services’ plans for renovating their existing barracks and constructing new barracks.
In addition, we interviewed officials in the headquarters of each service responsible for managing installations and programming operation and maintenance and military construction funds. We conducted our work from March through May 2002 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. In addition, the report will be available at no charge on GAO’s Web site at www.gao.gov and to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions regarding this report. Key contributors to this report were Michael Kennedy, James Reifsnyder, Richard Meeks, Laura Talbott, and R.K. Wild. The military services conduct recruit basic training at 10 installations in the United States. The Army has the most locations—five, with Fort Jackson, South Carolina, training the most Army recruits. The Marine Corps conducts its training at two primary locations—Parris Island, South Carolina, on the east coast and San Diego, California, on the west coast. Further, about 4 weeks (consisting of weapons qualification and field training exercises) of the Marine Corps’ 12-week basic training course at San Diego is conducted at Camp Pendleton because of training space limitations at its San Diego location. The Navy and Air Force conduct their basic training at one location each—Great Lakes, Illinois, and Lackland Air Force Base in San Antonio, Texas, respectively. Under DOD’s installation readiness reporting system, military installation facilities are grouped into nine separate facility classes. Recruit barracks are part of the “community and housing” facility class.
Figure 9 depicts the fiscal year 2001 C-ratings for each of the nine facility classes, as well as for the recruit barracks component of the “community and housing” facility class, at each basic training location.

The Department of Defense reports that it has been faced with difficulties adequately maintaining its facilities to meet mission requirements. Facilities have been aging and deteriorating as funds needed to sustain and recapitalize the facilities have fallen short of requirements.
GAO's review of the services' condition assessments in conjunction with visits to the basic training locations showed that most barracks were in need of significant repair, although some barracks were in better condition than others. GAO found that the exteriors of each service's barracks were generally in good condition and presented an acceptable appearance, but the barracks' infrastructure often had persistent repair problems because of inadequate maintenance. The services' approaches to recapitalize their recruit barracks vary and are influenced by their overall priorities to improve all facilities. Although the Navy, Air Force, and Marine Corps are addressing many of their recapitalization needs in the near-term, most of the Army's plans are longer term. |
In October 2008, we reported that Interior’s policies and practices for identifying and evaluating lease parcels and bids differ in key ways depending on whether the lease is located offshore—and therefore overseen by OEMM—or onshore—and therefore overseen by BLM. Identifying lease parcels. OEMM’s and BLM’s methods for identifying areas to lease vary significantly. Specifically: For offshore leases, OEMM—pursuant to the Outer Continental Shelf Lands Act—lays out 5-year strategic plans for the areas it plans to lease and establishes a schedule for offering leases. In addition, OEMM offers all leases for competitive bidding, and all eligible companies may submit written sealed bids, referred to as bonus bids, for the rights to explore, develop, and produce oil and gas resources on these leases, including drilling test wells. For onshore leases, BLM—which must follow the Federal Onshore Oil and Gas Leasing Reform Act of 1987—is not required to develop a long-term leasing plan and instead relies in part on the industry and the public to nominate areas for leasing. In some cases, BLM, like OEMM, offers leases through a competitive bidding process, but with bonus bids received in an oral auction rather than in a sealed written form. Evaluating bids. OEMM and BLM differ in their regulations and policies for evaluating whether the bids received for areas offered for lease are sufficient. For offshore leases, OEMM compares sealed bids with its own independent assessment of the value of the potential oil and gas in each lease. After the bids are received, OEMM—using a team of geologists, geophysicists, and petroleum engineers assisted by a software program— conducts a technical assessment of the potential oil and gas resources associated with the lease and other factors to develop an estimate of their fair market value. This estimate becomes the minimally acceptable bid and is used to evaluate the bids received. 
The bidder submitting the highest acceptable bonus bid that meets or exceeds OEMM’s estimate of the fair market value of a lease is awarded the lease. The primary term of the lease, which may be 5, 8, or 10 years, depends on the water depth of the leased area. If no bids equal or exceed the minimally acceptable bid, the lease is not awarded but is offered at a subsequent lease sale. According to OEMM, since 1995, the practice of rejecting bids that fall below the minimally acceptable bid and re-offering these leases at a later sale has resulted in an overall increase in bonus receipts of $373 million between 1997 and 2006. For onshore leases, BLM relies exclusively on competitors, participating in an oral auction, to determine the lease’s market value. Furthermore, BLM, unlike OEMM, does not currently employ a multidisciplinary team with the appropriate range of skills or appropriate software to develop estimates of the oil and gas reserves for each lease parcel, and thus, establish a market and resource-based minimum acceptable bid. Instead, BLM has established a uniform national minimum acceptable bid of at least $2 per acre and has taken the position that as long as at least one bid meets this $2 per acre threshold, the lease will be awarded to the highest bidder. Importantly, onshore leases that do not receive any bids in the initial offer are available noncompetitively the day after the lease sale and remain available for leasing for a period of 2 years after the competitive lease sale. Any of these available leases may be acquired on a first-come, first-served basis subject to payment of an administrative fee. Prior to 1992, BLM offered primary terms of 5 years for competitively sold leases and 10 years for leases issued noncompetitively. Since 1992, BLM has been required by law to only offer leases with 10-year primary terms whether leases are sold competitively or issued noncompetitively. 
Oil and gas activity has generally increased over the past 20 years, and our reviews have found that Interior has—at times—been unable to adequately oversee oil and gas activities in seven areas: (1) completing environmental inspections; (2) verifying oil and gas production; (3) hiring, training, and retaining staff; (4) using categorical exclusions to streamline environmental analyses required for certain oil and gas activities; (5) performing environmental monitoring in accordance with land use plans; (6) conducting environmental analyses; and (7) responding to onshore lease protests. Specifically: Completing environmental inspections. In June 2005, we reported that with the increase in oil and gas activity, BLM had not consistently been able to complete its required environmental inspections—the primary mechanism to ensure that companies are complying with various environmental laws and lease stipulations. At the time of our review, BLM officials explained that because staff were spending increasing amounts of time processing drilling permits, they had less time to conduct environmental inspections. Verifying oil and gas production. In September 2008, we reported that neither BLM nor OEMM was meeting its statutory obligations or agency targets for inspecting certain leases and metering equipment used to measure oil and gas production, raising uncertainty about the accuracy of oil and gas measurement. For onshore leases, BLM completed only a portion of its production verification inspections because its workload had substantially grown in response to increases in onshore drilling. For offshore leases, OEMM completed only about 50 percent of its required production inspections in 2007 because of ongoing cleanup work related to Hurricanes Katrina and Rita. Additionally, in March 2010, we found that Interior had not consistently updated its oil and gas measurement regulations. Specifically, OEMM has routinely reviewed and updated its measurement regulations, whereas BLM had not.
Accordingly, OEMM had updated its measurement regulations six times since 1998, whereas BLM had not updated its measurement regulations since 1989. We made a number of recommendations to the Secretary of the Interior for improving oil and gas production verification, including providing for more regular updates of measurement regulations. Hiring, training, and retaining staff. In March 2010, we reported that Interior has faced difficulties in hiring, retaining, and training staff in key oil and gas oversight positions. Specifically, we found that staff within Interior’s program for verifying that oil and gas produced from federal leases are correctly measured—including petroleum engineers and inspectors—lacked critical skills because, according to agency officials, Interior (1) had difficulty in hiring experienced staff, (2) struggled to retain staff, and (3) did not consistently provide the appropriate training for staff. Interior’s challenges in hiring and retaining staff stem, in part, from competition with the oil and gas industry, which generally pays significantly more than the federal government. Moreover, key technical positions responsible for oversight of oil and gas activities have experienced high turnover rates, which, according to Interior officials, impede these employees’ capacity to oversee oil and gas activities. These positions included petroleum engineers, who process drilling permits and review oil and gas metering systems, and inspection staff—including BLM’s petroleum engineer technicians and production accountability technicians onshore—who conduct drilling, safety, and oil and gas production verification inspections (see app. I). For example, we found that turnover rates for OEMM inspectors at the four district offices we reviewed between 2004 and 2008 ranged from 27 to 44 percent. Furthermore, Interior has not consistently provided training to the staff it has been able to hire and retain.
For example, neither onshore nor offshore petroleum engineers had a requirement for training on the measurement of oil and gas, which is critical to accurate royalty collections and can be challenging at times because of such factors as the type of meter used, the specific qualities of the gas or oil being measured, and the rate of production. Additionally, although BLM offers a core curriculum for its petroleum engineer technicians and requires that they obtain official BLM certification and then be recertified once every 5 years to demonstrate continued proficiency, the agency has not offered a recertification course since 2002, negatively affecting its ability to conduct inspections. It is important to note that BLM’s petroleum engineer technicians are the eyes and ears of the agency—performing key functions and also perhaps the only Interior staff with direct contact with the lease property itself. We recommended that the Secretary of the Interior improve training for staff responsible for verifying oil and gas production and determine what policies are necessary to attract and retain qualified measurement staff at sufficient levels to ensure an effective production verification program. Using categorical exclusions. In September 2009, we reported that BLM’s use of categorical exclusions—authorized under section 390 of the Energy Policy Act of 2005 to streamline the environmental analysis required under the National Environmental Policy Act (NEPA) when approving certain oil and gas activities—had some benefits but raised numerous questions about how and when BLM should use these categorical exclusions. First, our analysis found that BLM used section 390 categorical exclusions to approve over one-quarter of its applications for drilling permits from fiscal years 2006 to 2008.
While these categorical exclusions generally increased the efficiency of operations, some BLM field offices, such as those with recent environmental analyses already completed, were able to benefit more than others. Second, we found that BLM’s use of section 390 categorical exclusions was frequently out of compliance with both the law and agency guidance and that a lack of clear guidance and oversight by BLM were contributing factors. We found several types of violations of the law, such as approving more than one oil or gas well under a single decision document and drilling a new well after statutory time frames had lapsed. We also found examples, in 85 percent of field offices reviewed, where officials did not comply with agency guidance, most often by failing to adequately justify the use of a categorical exclusion. While many of these violations and noncompliance were technical in nature, others were more significant and may have thwarted NEPA’s twin aims of ensuring that BLM and the public are fully informed of environmental consequences of BLM’s actions. Third, we found that a lack of clarity in both section 390 of the act and BLM’s guidance has raised serious concerns. Specifically: (1) Fundamental questions about what section 390 categorical exclusions are and how they should be used have led to concerns that BLM may be using these categorical exclusions in too many—or too few—instances. For example, there is disagreement as to whether BLM must screen section 390 categorical exclusions for circumstances that would preclude their use or whether their use is mandatory. (2) Concerns about key concepts underlying the law’s description of these categorical exclusions have arisen—specifically, whether section 390 categorical exclusions allow BLM to exceed development levels, such as number of wells to be drilled, analyzed in supporting NEPA documents without conducting further analysis. 
(3) Definitions of key criteria in the law and BLM guidance are vague or nonexistent, which led to varied interpretations among field offices and concerns about misuse and a lack of transparency. We recommended that BLM take steps to improve the implementation of section 390 of the act by ensuring compliance through more oversight, standardizing decision documentation, and clarifying agency guidance. We also suggested that Congress may wish to consider amending the Energy Policy Act of 2005 to clarify and resolve some of the key issues identified in our report. Since the issuance of our report, BLM has taken steps to implement some of our recommendations. Performing environmental monitoring. In June 2005, we reported that four of the eight BLM field offices we visited had not developed any resource monitoring plans to help track management decisions and determine if desired outcomes had been achieved, including those related to mitigating the environmental impacts of oil and gas development. We concluded that without these plans, land managers may be unable to determine the effectiveness of various mitigation measures attached to drilling permits and decide whether these measures need to be modified, strengthened, or eliminated. Officials offered several reasons for not having these plans, including increased workload due to an increased number of drilling permits, as well as budget constraints. Conducting environmental analyses. In March 2010, we found that MMS faces challenges in the Alaska Outer Continental Shelf (OCS) Region in conducting reviews of oil and gas development under NEPA, which requires MMS to evaluate the likely environmental effects of proposed actions, including oil and gas development. Although Interior policy directed its agencies to prepare handbooks providing guidance on how to implement NEPA, we found that MMS lacked such a handbook. 
The lack of comprehensive guidance in a handbook, combined with high staff turnover in recent years, left the process for meeting NEPA requirements ill defined for the analysts charged with developing NEPA documents. It also left unclear MMS’s policy on what constitutes a significant environmental impact, as well as its procedures for conducting and documenting NEPA-required analyses to address environmental and cultural sensitivities, which have often been the topic of litigation over Alaskan offshore oil and gas development. We also found that the Alaska OCS Region shared information selectively, a practice that was inconsistent with agency policy, which directed that information, including proprietary data from industry, be shared with all staff involved in environmental reviews. According to regional MMS staff, this practice has hindered their ability to complete sound environmental analyses under NEPA. We recommended that the Secretary of the Interior develop and set a deadline for issuing a comprehensive NEPA handbook providing guidance on how to implement NEPA. Responding to lease protests. In preliminary results from our ongoing work on public challenges to BLM’s federal oil and gas lease sale decisions in the four Mountain West states responsible for most onshore federal oil and gas development, we found that the extent to which BLM made publicly available information related to public protests filed during the leasing process varied by state and was generally limited in scope. We also found that stakeholders—nongovernmental organizations representing environmental, recreational, and hunting interests that filed protests to BLM lease offerings—wanted additional time to participate in the leasing process and more information from BLM about its leasing decisions. In May 2010, the Secretary of the Interior announced several agencywide leasing reforms that are to take place at BLM, some of which may address concerns raised by these stakeholder groups.
For instance, BLM state offices are to provide an additional public review and comment opportunity during the leasing process. They are also required to post on their Web sites their responses to letters filed in protest of state office decisions to offer specific parcels of land for oil and gas development.

In our past work, we have identified several areas where Interior may be missing opportunities to increase revenue by fundamentally shifting the terms of federal oil and gas leases. As we reported in September 2008, (1) federal oil and gas leasing terms currently result in the U.S. government receiving one of the smallest shares of oil and gas revenue when compared to other countries and (2) Interior’s inflexible royalty rate structure has put pressure on Interior and Congress to periodically change royalty rates. We also reported that Interior is doing far less than some states to encourage development of leases. Specifically:

The U.S. government receives one of the lowest shares of revenue for oil and gas resources compared with other countries and resource owners. For example, we reported the results of a private study in 2007 showing that the revenue share the U.S. government collects on oil and gas produced in the Gulf of Mexico ranked 93rd lowest of the 104 revenue collection regimes around the world covered by the study. Further, the study showed that some countries recently increased their shares of revenues as oil and gas prices rose and, as a result, will collect between an estimated $118 billion and $400 billion, depending on future oil and gas prices. However, despite significant changes in the oil and gas industry over the past several decades, we found that Interior has not systematically re-examined how the U.S. government is compensated for extraction of oil and gas for over 25 years.
Since 1980, in part due to Interior’s inflexible royalty rate structure, Congress and Interior have been pressured, with varying success, to periodically adjust royalty rates to respond to current market conditions. For example, in 1980, a time when oil prices were high compared to today’s prices in inflation-adjusted terms, Congress passed a windfall profit tax, which it later repealed in 1988 after oil prices fell significantly from their 1980 level. Later, in November 1995—during a period with relatively low oil and gas prices—the federal government enacted the Outer Continental Shelf Deep Water Royalty Relief Act (DWRRA), which provided for “royalty relief,” the suspension of royalties on certain volumes of initial production, for certain leases in the Gulf of Mexico in depths greater than 200 meters during the 5 years after passage of the act—1996 through 2000. For leases issued during these 5 years, litigation established that MMS lacked the authority under the act to impose price thresholds that would end royalty relief when prices rose. As a result, companies are now receiving royalty relief even though prices are much higher than at the time the DWRRA was enacted. In June 2008, we estimated that future foregone royalties from all the DWRRA leases issued from 1996 through 2000 could range widely—from a low of about $21 billion to a high of $53 billion.

Finally, in 2007, the Secretary of the Interior twice increased the royalty rate for future Gulf of Mexico leases. In January, the rate for deep-water leases was raised to 16-2/3 percent. Later, in October, the rate for all future leases in the Gulf, including those issued in 2008, was raised to 18-3/4 percent. Interior estimated these actions will increase federal oil and gas revenues by $8.8 billion over the next 30 years. The January 2007 increase applied only to deep-water Gulf of Mexico leases; the October 2007 increase applied to all water depths in the Gulf of Mexico.
We concluded that these royalty rate increases appeared to be a response by Interior to the high prices of oil and gas that have led to record industry profits and raised questions about whether the existing federal oil and gas fiscal system gives the public an appropriate share of revenues from oil and gas produced on federal lands and waters. Furthermore, the royalty rate increases do not address industry profits from existing leases. Existing leases, with lower royalty rates, will likely remain highly profitable as long as they produce oil and gas or until oil and gas prices fall significantly. In addition, in choosing to increase royalty rates, Interior did not evaluate the entire oil and gas fiscal system to determine whether these increases were sufficient to balance investment attractiveness and appropriate returns to the federal government for oil and gas resources. On the other hand, according to Interior, it did consider factors such as industry costs for outer continental shelf exploration and development, tax rates, rental rates, and expected bonus bids. Further, because the new royalty rates are not flexible with respect to oil and gas prices, Interior and Congress may again be under pressure from industry or the public to further change the royalty rates if and when oil and gas prices either fall or rise. Finally, these past royalty changes only affect Gulf of Mexico leases and do not address onshore leases. 
To address weaknesses in Interior’s royalty program, we suggested that Congress may wish to consider directing the Secretary of the Interior to (1) convene an independent panel to perform a comprehensive review of the federal oil and gas fiscal system and (2) direct MMS and other relevant agencies within Interior to establish procedures for periodically collecting data and information and conducting analyses to determine how the federal government’s take and the attractiveness to oil and gas investors of each federal oil and gas region compare with those of other resource owners, and to report this information to Congress. Interior officials recently reported that the department is currently undertaking an examination of this issue.

OEMM and BLM vary in the extent to which they encourage development of federal leases, and both agencies do less than some states and private landowners to encourage lease development. As a result, we concluded that Interior may be missing opportunities to increase domestic oil and gas production and revenues. Specifically, in the Gulf of Mexico, OEMM varies the lease length in accordance with the depth of water over which the lease is situated. For example, leases issued in shallow water depths typically have terms of 5 years, whereas leases in the deepest areas of the Gulf of Mexico have 10-year primary terms. This is because shallower water tends to be nearer to shore and adjacent to already developed areas with pipeline infrastructure in place, while deeper water tends to be further out, have less available infrastructure to link to, and generally present greater challenges associated with the depth of the wells themselves. In contrast to OEMM’s depth-based lease terms, BLM issues leases with 10-year primary terms, regardless of whether the lease is adjacent to a fully developed field with the necessary pipeline infrastructure to carry the product to market or in a remote location with no surrounding infrastructure.
Furthermore, BLM also uses 10-year primary terms in the National Petroleum Reserve-Alaska, where it is significantly more difficult to develop oil fields because of factors including the harsh environment. We also examined selected states and private landowners that lease land for oil and gas development and found that some do more than Interior to encourage lease development. For example, to provide a greater financial incentive to develop leased land, the state of Texas allows lessees to pay a 20 percent royalty rate for the life of the lease if production occurs in the first 2 years of the lease, as compared to 25 percent if production occurs after the 4th year. In addition, we found that some states and private landowners also do more to structure leases to reflect the likelihood of finding oil and gas. For example, New Mexico issues shorter leases and can require lessees to pay higher royalties for properties that are in or near known producing areas, and allows longer leases and lower royalty rates in areas believed to be more speculative. Officials from one private landowners’ association told us that they too are using shorter lease terms, ranging from 6 months to 3 years, to ensure that lessees are diligent in developing any potential oil and gas resources on their land. Louisiana and Texas also issue 3-year onshore leases. While the existence of lease terms that appear to encourage faster development of some oil and gas leases suggests a potential for the federal government to take similar steps, it is important to note that it can take several years to complete the required environmental analyses needed in order to receive approval to begin drilling on federal lands.
To address what we believe are key weaknesses in Interior’s royalty program, while acknowledging potential differences between federal, state, and private leases, we recommended that the Secretary of the Interior develop a strategy to evaluate options to encourage faster development of oil and gas leases on federal lands, including determining whether methods that differentiate between leases according to the likelihood of finding economic quantities of oil or gas, or some of the other methods states use, could effectively be employed, either across all federal leases or in a targeted fashion. In so doing, Interior should identify any statutory or other obstacles to using such methods and report the findings to Congress. Interior officials recently reported that the department is currently undertaking an examination of this issue.

Our past work has identified shortcomings in Interior’s IT systems for managing oil and gas royalty and production information. In September 2008, we reported that Interior’s oil and gas IT systems did not include several key functionalities, including (1) limiting a company’s ability to make adjustments to self-reported data after an audit had occurred and (2) identifying missing royalty reports. MMS’s ability to maintain the accuracy of production and royalty data has been hampered because companies can make adjustments to their previously entered data without prior MMS approval. Companies may legally make changes to both royalty and production data in MMS’s royalty IT system for up to 6 years after the initial reporting month, and these changes may necessitate changes in the royalty payment. However, at the time of our review, MMS’s royalty IT system allowed companies to make adjustments to their data beyond the allowed 6-year time frame.
As a result of the companies’ ability to make these retroactive changes, within or outside of the 6-year time frame, the production data and required royalty payments could change over time—even after MMS completes an audit—complicating efforts by agency officials to reconcile production data and ensure that the proper royalties were paid.

A second weakness was the royalty IT system’s inability to automatically detect instances when a royalty payor fails to submit the required royalty report in a timely manner. Because MMS’s royalty system did not detect instances when a payor failed to submit a payment in a timely manner, cases in which a company stops filing royalty reports and stops paying royalties may not be detected until more than 2 years after the initial reporting date, when MMS’s royalty IT system completes a reconciliation of volumes reported on the production reports with the volumes on their associated royalty reports. Therefore, it was possible under MMS’s strategy that the royalty IT system would not identify instances in which a payor stopped reporting until several years after the report is due. This created an unnecessary risk that MMS was not collecting accurate royalties in a timely manner.

To address these weaknesses, we recommended that the Secretary of the Interior, among other things, (1) finalize the adjustment line monitoring specifications for modifying its royalty IT system and fully implement the system so that MMS can monitor adjustments made outside the 6-year time frame, (2) ensure that any adjustments made to production and royalty data after compliance work has been completed are reviewed by appropriate staff, and (3) develop processes and procedures by which MMS can automatically identify when an expected royalty report has not been filed in a timely manner and contact the company to ensure it is complying with both applicable laws and agency policies.
Since September 2008, MMS has made improvements in its IT systems for identifying missing royalty reports, but it is too early to assess their effectiveness. Additionally, in July 2009, we reported that MMS’s IT system lacked sufficient controls to ensure that royalty payment data were accurate. While much of the royalty data we examined from fiscal years 2006 and 2007 were reasonable, we found significant instances where data were missing or appeared erroneous. For example, we examined gas leases in the Gulf of Mexico and found that, about 5.5 percent of the time, lease operators reported production but royalty payors did not submit the corresponding royalty reports, potentially resulting in $117 million in uncollected royalties. We also found that a small percentage of royalty payors reported negative royalty values, something that should not happen, potentially costing $41 million in uncollected royalties. In addition, royalty payors claimed gas processing allowances 2.3 percent of the time for unprocessed gas, potentially resulting in $2 million in uncollected royalties. Furthermore, we found significant instances where royalty payor-provided data on royalties paid and the volume and/or the value of the oil and gas produced appeared erroneous because they were outside the expected ranges. To address these control weaknesses, we made a number of recommendations to MMS intended to improve the quality of royalty data by improving its IT systems’ edit checks, among other things.

Moreover, in our March 2010 report, we found that Interior’s longstanding efforts to implement two key IT systems for facilitating verification of produced volumes of oil and gas from federal leases were behind schedule and years from widespread adoption. For example, Interior’s efforts to provide its inspection staff with mobile computing capabilities for use in the field are moving slowly and are years from full implementation.
Interior inspectors continue to document inspection results on paper and later reenter these results into Interior databases. Specifically, BLM and OEMM are independently developing the capacity for inspection staff to (1) electronically document inspection results and (2) access reference documents, such as American Petroleum Institute standards and measurement regulations, via laptops while in the field. BLM initiated work on developing this capacity in 2001, whereas OEMM is now in the preliminary planning stages of a similar effort. According to Interior officials, widespread implementation of a mobile computing tool to assist with production verification and other types of inspections, potentially including drilling and safety, is still several years away. Interior officials said having such a tool would allow inspection staff not only to easily reference technical documents while conducting inspections to verify compliance with regulations but also to document the results of those inspections while in the field and subsequently upload them to Interior databases.

Similarly, BLM’s efforts to use gas production data acquired remotely from gas wells through its Remote Data Acquisition for Well Production (RDAWP) program to facilitate production inspections have shown few results after 5 years of funding and at least $1.5 million spent. At the time of our review, we found that BLM was receiving production data from only approximately 50 wells via this program, and it had yet to use the data to complete a production inspection, making it difficult to assess the program’s utility. To address these shortcomings, we made a number of recommendations to the Secretary, including that BLM reassess its current commitment to the RDAWP program in light of other commercially available software, implement a mobile computing solution for the onshore inspection and enforcement staff, and coordinate with the offshore inspection and enforcement staff as appropriate.
In conclusion, over the past several years, we and others have found Interior to be in need of fundamental reform. This past work has found weaknesses across a wide range of Interior’s oversight of onshore and offshore oil and gas development. Secretary Salazar has taken notable steps to begin comprehensive evaluations of leasing rules and practices as well as the amount and ways in which the federal government collects revenues. Interior is also currently implementing a number of our recommendations aimed at making improvements within the existing organization of Interior’s functions.

As the Secretary and Congress consider what fundamental changes are needed in how Interior structures its oversight of oil and gas programs, we believe that our and others’ past work provides a strong rationale for broad reform of the agency’s oil and gas oversight functions—at MMS to be sure, but also across other parts of Interior, including those responsible for oversight of onshore areas. If steps are not taken to ensure effective independent oversight, we are concerned about the agency’s ability to manage the nation’s oil and gas resources, ensure the safe operation of onshore and offshore leases, provide adequate environmental protection, and provide reasonable assurance that the U.S. government is collecting the revenue to which it is entitled. Reorganization and fundamental change can be very difficult for an organization. We believe that regardless of how MMS is ultimately reorganized, Interior’s top leadership must also address the wide range of outstanding recommendations for any reorganization effort to be effective.

Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions that you or other Members of the Committee may have at this time.

For further information on this statement, please contact Frank Rusco at (202) 512-3841 or ruscof@gao.gov.
Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. Other staff who made key contributions to this testimony include Ron Belak, Glenn C. Fischer, Jon Ludwigson, Ben Shouse, Kiki Theodoropoulos, and Barbara Timmerman.

Oil and Gas Management: Key Elements to Consider for Providing Assurance of Effective Independent Oversight, GAO-10-852T (Washington, D.C.: June 17, 2010).
Oil and Gas Management: Interior’s Oil and Gas Production Verification Efforts Do Not Provide Reasonable Assurance of Accurate Measurement of Production Volumes, GAO-10-313 (Washington, D.C.: Mar. 15, 2010).
Offshore Oil and Gas Development: Additional Guidance Would Help Strengthen the Minerals Management Service’s Assessment of Environmental Impacts in the North Aleutian Basin, GAO-10-276 (Washington, D.C.: Mar. 8, 2010).
Energy Policy Act of 2005: Greater Clarity Needed to Address Concerns with Categorical Exclusions for Oil and Gas Development under Section 390 of the Act, GAO-09-872 (Washington, D.C.: Sept. 26, 2009).
Federal Oil and Gas Management: Opportunities Exist to Improve Oversight, GAO-09-1014T (Washington, D.C.: Sept. 16, 2009).
Royalty-In-Kind Program: MMS Does Not Provide Reasonable Assurance It Receives Its Share of Gas, Resulting in Millions in Forgone Revenue, GAO-09-744 (Washington, D.C.: Aug. 14, 2009).
Mineral Revenues: MMS Could Do More to Improve the Accuracy of Key Data Used to Collect and Verify Oil and Gas Royalties, GAO-09-549 (Washington, D.C.: July 15, 2009).
Strategic Petroleum Reserve: Issues Regarding the Inclusion of Refined Petroleum Products as Part of the Strategic Petroleum Reserve, GAO-09-695T (Washington, D.C.: May 12, 2009).
Oil and Gas Management: Federal Oil and Gas Resource Management and Revenue Collection In Need of Stronger Oversight and Comprehensive Reassessment, GAO-09-556T (Washington, D.C.: Apr. 2, 2009).
Oil and Gas Leasing: Federal Oil and Gas Resource Management and Revenue Collection in Need of Comprehensive Reassessment, GAO-09-506T (Washington, D.C.: Mar. 17, 2009).
Department of the Interior, Minerals Management Service: Royalty Relief for Deepwater Outer Continental Shelf Oil and Gas Leases—Conforming Regulations to Court Decision, GAO-09-102R (Washington, D.C.: Oct. 21, 2008).
Oil and Gas Leasing: Interior Could Do More to Encourage Diligent Development, GAO-09-74 (Washington, D.C.: Oct. 3, 2008).
Oil and Gas Royalties: MMS’s Oversight of Its Royalty-in-Kind Program Can Be Improved through Additional Use of Production Verification Data and Enhanced Reporting of Financial Benefits and Costs, GAO-08-942R (Washington, D.C.: Sept. 26, 2008).
Mineral Revenues: Data Management Problems and Reliance on Self-Reported Data for Compliance Efforts Put MMS Royalty Collections at Risk, GAO-08-893R (Washington, D.C.: Sept. 12, 2008).
Oil and Gas Royalties: The Federal System for Collecting Oil and Gas Revenues Needs Comprehensive Reassessment, GAO-08-691 (Washington, D.C.: Sept. 3, 2008).
Oil and Gas Royalties: Litigation over Royalty Relief Could Cost the Federal Government Billions of Dollars, GAO-08-792R (Washington, D.C.: June 5, 2008).
Strategic Petroleum Reserve: Improving the Cost-Effectiveness of Filling the Reserve, GAO-08-726T (Washington, D.C.: Apr. 24, 2008).
Mineral Revenues: Data Management Problems and Reliance on Self-Reported Data for Compliance Efforts Put MMS Royalty Collections at Risk, GAO-08-560T (Washington, D.C.: Mar. 11, 2008).
Strategic Petroleum Reserve: Options to Improve the Cost-Effectiveness of Filling the Reserve, GAO-08-521T (Washington, D.C.: Feb. 26, 2008).
Oil and Gas Royalties: A Comparison of the Share of Revenue Received from Oil and Gas Production by the Federal Government and Other Resource Owners, GAO-07-676R (Washington, D.C.: May 1, 2007).
Oil and Gas Royalties: Royalty Relief Will Cost the Government Billions of Dollars but Uncertainty Over Future Energy Prices and Production Levels Make Precise Estimates Impossible at this Time, GAO-07-590R (Washington, D.C.: Apr. 12, 2007).
Royalties Collection: Ongoing Problems with Interior’s Efforts to Ensure A Fair Return for Taxpayers Require Attention, GAO-07-682T (Washington, D.C.: Mar. 28, 2007).
Oil and Gas Royalties: Royalty Relief Will Likely Cost the Government Billions, but the Final Costs Have Yet to Be Determined, GAO-07-369T (Washington, D.C.: Jan. 18, 2007).
Strategic Petroleum Reserve: Available Oil Can Provide Significant Benefits, but Many Factors Should Influence Future Decisions about Fill, Use, and Expansion, GAO-06-872 (Washington, D.C.: Aug. 24, 2006).
Royalty Revenues: Total Revenues Have Not Increased at the Same Pace as Rising Oil and Natural Gas Prices due to Decreasing Production Sold, GAO-06-786R (Washington, D.C.: June 21, 2006).
Oil and Gas Development: Increased Permitting Activity Has Lessened BLM’s Ability to Meet Its Environmental Protection Responsibilities, GAO-05-418 (Washington, D.C.: June 17, 2005).
Mineral Revenues: Cost and Revenue Information Needed to Compare Different Approaches for Collecting Federal Oil and Gas Royalties, GAO-04-448 (Washington, D.C.: Apr. 16, 2004).

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The catastrophic oil spill in the Gulf of Mexico has drawn attention to the exploration and production of oil and gas from leases on federal lands and waters. The Department of the Interior oversees oil and gas activities on federal lands and waters.
Onshore, the Bureau of Land Management (BLM) has oversight responsibilities. Offshore, the newly created Bureau of Ocean Energy Management, Regulation, and Enforcement (BOEMRE) has oversight responsibilities. Prior to BOEMRE, the Minerals Management Service's (MMS) Offshore Energy and Minerals Management oversaw offshore oil and gas activities, while MMS's Minerals Revenue Management collected revenues from oil and gas produced. For the purposes of our testimony today, we present our findings in accordance with Interior's organizational structure prior to establishing BOEMRE.

Over the past 5 years, GAO has issued numerous recommendations to the Secretary of the Interior to improve the agency's management of oil and gas resources--most recently in two reports issued in March 2010. Overall, GAO's work in this area can be useful in evaluating potential strategies for reorganizing and improving oil and gas management at Interior. Specifically, GAO's work can assist the Secretary and Congress as they consider restructuring Interior's oversight of oil and gas development and production, revenue collection, and information technology (IT) systems.

GAO's recent evaluations of federal oil and gas management have identified key areas where Interior could provide more effective oversight. In October 2008, GAO reported that Interior policies and practices for leasing offshore and onshore oil and gas differed in key ways. Regarding the ways that areas are selected for leasing, GAO found that MMS sets out a 5-year strategic plan identifying both a leasing schedule and the offshore areas it will lease. In contrast, BLM relies on industry and others to nominate onshore areas for leasing, then selects lands to lease from these nominations and from areas it has identified. Oil and gas activity has generally increased in recent years, and Interior has at times been unable to meet its legal and agency-mandated oversight obligations in key areas.
For example, in a June 2005 report, GAO found that Interior was unable to complete its environmental inspections because of increased onshore drilling activity. GAO also found in a September 2008 review that Interior was not consistently completing inspections to verify oil and gas volumes produced from federal leases. GAO found in a March 2010 report that MMS faces challenges conducting required environmental reviews in Alaska. In particular, MMS has no handbook providing guidance on how to conduct these reviews, although Interior policy directs it to prepare one.

Interior may be missing opportunities to fundamentally shift the terms of federal oil and gas leases and increase revenues. In a September 2008 report, GAO reported that, compared to other countries, the United States receives one of the lowest shares of revenue for oil and gas. In addition, Interior's royalty rate, which does not change to reflect changing prices and market conditions, has at times led to pressure on Interior and Congress to periodically change royalty rates in response to market conditions. Interior also has done less than some states and private landowners to encourage lease development and may be missing opportunities to increase production revenues. Interior began studying ways to improve revenue collection and leasing practices earlier this year.

Interior's oil and gas IT systems lack key functionalities. A September 2008 GAO review found that MMS's ability to maintain the accuracy of oil and gas production and royalty data was hampered by two key limitations in its IT system: (1) it did not limit companies' ability to adjust self-reported data after MMS had audited them and (2) it did not identify missing royalty reports. More recently, a March 2010 report found that Interior's long-standing efforts to implement two key technologies for verifying oil and gas production are behind schedule and years from widespread adoption.
The public faces a high risk that critical services provided by the government and the private sector could be severely disrupted by the Year 2000 computing crisis. Financial transactions could be delayed, flights grounded, power lost, and national defense affected. Moreover, America’s infrastructures are a complex array of public and private enterprises with many interdependencies at all levels. These many interdependencies among governments and within key economic sectors could cause a single failure to have adverse repercussions. Key economic sectors that could be seriously affected if their systems are not Year 2000 compliant include information and telecommunications; banking and finance; health, safety, and emergency services; transportation; power and water; and manufacturing and small business.

The information and telecommunications sector is especially important. In testimony in June, we reported that the Year 2000 readiness of the telecommunications sector is one of the most crucial concerns to our nation because telecommunications are critical to the operations of nearly every public-sector and private-sector organization. For example, the information and telecommunications sector (1) enables the electronic transfer of funds, the distribution of electrical power, and the control of gas and oil pipeline systems, (2) is essential to the service economy, manufacturing, and efficient delivery of raw materials and finished goods, and (3) is basic to responsive emergency services. Reliable telecommunications services are made possible by a complex web of highly interconnected networks supported by national and local carriers and service providers, equipment manufacturers and suppliers, and customers.

In addition to the risks associated with the nation’s key economic sectors, one of the largest, and largely unknown, risks relates to the global nature of the problem.
With the advent of electronic communication and international commerce, the United States and the rest of the world have become critically dependent on computers. However, there are indications of Year 2000 readiness problems in the international arena. For example, a June 1998 informal World Bank survey of foreign readiness found that only 18 of 127 countries (14 percent) had a national Year 2000 program, 28 countries (22 percent) reported working on the problem, and 16 countries (13 percent) reported only awareness of the problem. No conclusive data were received from the remaining 65 countries surveyed (51 percent). In addition, a survey of 15,000 companies in 87 countries by the Gartner Group found that the United States, Canada, the Netherlands, Belgium, Australia, and Sweden were the Year 2000 leaders, while nations including Germany, India, Japan, and Russia were 12 months or more behind the United States. The Gartner Group’s survey also found that 23 percent of all companies (80 percent of which were small companies) had not started a Year 2000 effort. Moreover, according to the Gartner Group, the “insurance, investment services and banking are industries furthest ahead. Healthcare, education, semiconductor, chemical processing, agriculture, food processing, medical and law practices, construction and government agencies are furthest behind. Telecom, power, gas and water, software, shipbuilding and transportation are laggards barely ahead of furthest-behind efforts.”

The following are examples of some of the major disruptions the public and private sectors could experience if the Year 2000 problem is not corrected. Unless the Federal Aviation Administration (FAA) takes much more decisive action, there could be grounded or delayed flights, degraded safety, customer inconvenience, and increased airline costs. Aircraft and other military equipment could be grounded because the computer systems used to schedule maintenance and track supplies may not work.
Further, the Department of Defense (DOD) could incur shortages of vital items needed to sustain military operations and readiness. Medical devices and scientific laboratory equipment may experience problems beginning January 1, 2000, if the computer systems, software applications, or embedded chips used in these devices contain two-digit fields for year representation. According to the Basle Committee on Banking Supervision—an international committee of banking supervisory authorities—failure to address the Year 2000 issue would cause banking institutions to experience operational problems or even bankruptcy.

Recognizing the seriousness of the Year 2000 problem, on February 4, 1998, the President signed an executive order that established the President’s Council on Year 2000 Conversion led by an Assistant to the President and composed of one representative from each of the executive departments and from other federal agencies as may be determined by the Chair. The Chair of the Council was tasked with the following Year 2000 roles: (1) overseeing the activities of agencies, (2) acting as chief spokesperson in national and international forums, (3) providing policy coordination of executive branch activities with state, local, and tribal governments, and (4) promoting appropriate federal roles with respect to private-sector activities.

Addressing the Year 2000 problem in time will be a tremendous challenge for the federal government. Many of the federal government’s computer systems were originally designed and developed 20 to 25 years ago, are poorly documented, and use a wide variety of computer languages, many of which are obsolete. Some applications include thousands, tens of thousands, or even millions of lines of code, each of which must be examined for date-format problems. The federal government also depends on the telecommunications infrastructure to deliver a wide range of services.
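The date-format defect underlying all of these risks is easy to demonstrate. The following minimal sketch is illustrative only, not drawn from any agency system; it shows how arithmetic on two-digit years fails at the century rollover, and how the common "windowing" repair interprets two-digit years against a pivot:

```python
# Illustrative sketch of the two-digit year defect and a "windowing" fix.
# Nothing here is taken from an actual agency system.

def years_elapsed_buggy(start_yy: int, end_yy: int) -> int:
    """Interval computed on two-digit years, as in much legacy code."""
    return end_yy - start_yy  # 00 - 65 yields -65, not 35

def years_elapsed_windowed(start_yy: int, end_yy: int, pivot: int = 50) -> int:
    """Windowing: two-digit years below the pivot are read as 20YY,
    the rest as 19YY, before any arithmetic is done."""
    def expand(yy: int) -> int:
        return 2000 + yy if yy < pivot else 1900 + yy
    return expand(end_yy) - expand(start_yy)

# A record from 1965 evaluated in 2000 (stored as "65" and "00"):
print(years_elapsed_buggy(65, 0))     # -65: the defect
print(years_elapsed_windowed(65, 0))  # 35: correct
```

Windowing was attractive to remediators because it avoids expanding stored date fields, though it only defers the ambiguity to the chosen pivot year.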
For example, the route of an electronic Medicare payment may traverse several networks—those operated by the Department of Health and Human Services, the Department of the Treasury’s computer systems and networks, and the Federal Reserve’s Fedwire electronic funds transfer system. In addition, the year 2000 could cause problems for the many facilities used by the federal government that were built or renovated within the last 20 years and contain embedded computer systems to control, monitor, or assist in operations. For example, building security systems, elevators, and air conditioning and heating equipment could malfunction or cease to operate. Agencies cannot afford to neglect any of these issues. If they do, the impact of Year 2000 failures could be widespread, costly, and potentially disruptive to vital government operations worldwide. Nevertheless, overall, the government’s 24 major departments and agencies are making slow progress in fixing their systems. In May 1997, the Office of Management and Budget (OMB) reported that about 21 percent of the mission-critical systems (1,598 of 7,649) for these departments and agencies were Year 2000 compliant. A year later, in May 1998, these departments and agencies reported that 2,914 of the 7,336 mission-critical systems in their current inventories, or about 40 percent, were compliant. However, unless agency progress improves dramatically, a substantial number of mission-critical systems will not be compliant in time.

In addition to slow governmentwide progress in fixing systems, our reviews of federal agency Year 2000 programs have found uneven progress. Some agencies are significantly behind schedule and are at high risk that they will not fix their systems in time. Other agencies have made progress, although risks continue and a great deal of work remains. The following are examples of the results of some of our recent reviews.
Last month, we testified about FAA’s progress in implementing a series of recommendations we had made earlier this year to assist FAA in completing overdue awareness and assessment activities. These recommendations included assessing how the major FAA components and the aviation industry would be affected if Year 2000 problems were not corrected in time and completing inventories of all information systems, including data interfaces. Officials at both FAA and the Department of Transportation agreed with these recommendations, and the agency has made progress in implementing them. In our August testimony, we reported that FAA had made progress in managing its Year 2000 problem and had completed critical steps in defining which systems needed to be corrected and how to accomplish this. However, with less than 17 months to go, FAA must still correct, test, and implement many of its mission-critical systems. It is doubtful that FAA can adequately do all of this in the time remaining. Accordingly, FAA must determine how to ensure continuity of critical operations in the likely event of some systems’ failures.

In October 1997, we reported that while the Social Security Administration (SSA) had made significant progress in assessing and renovating mission-critical mainframe software, certain areas of risk in its Year 2000 program remained. Accordingly, we made several recommendations to address these risk areas, which included the Year 2000 compliance of the systems used by the 54 state Disability Determination Services that help administer the disability programs. SSA agreed with these recommendations and, in July 1998, we reported that actions to implement these recommendations had either been taken or were underway. Further, we found that SSA has maintained its place as a federal leader in addressing Year 2000 issues and has made significant progress in achieving systems compliance. However, essential tasks remain.
For example, many of the states’ Disability Determination Service systems still had to be renovated, tested, and deemed Year 2000 compliant. Our work has shown that much likewise remains to be done in DOD and the military services. For example, our recent report on the Navy found that while positive actions have been taken, remediation progress had been slow and the Navy was behind schedule in completing the early phases of its Year 2000 program. Further, the Navy had not been effectively overseeing and managing its Year 2000 efforts and lacked complete and reliable information on its systems and on the status and cost of its remediation activities. We have recommended improvements to DOD’s and the military services’ Year 2000 programs with which they have concurred. In addition to these examples, our reviews have shown that many agencies had not adequately acted to establish priorities, solidify data exchange agreements, or develop contingency plans. Likewise, more attention needs to be devoted to (1) ensuring that the government has a complete and accurate picture of Year 2000 progress, (2) setting governmentwide priorities, (3) ensuring that the government’s critical core business processes are adequately tested, (4) recruiting and retaining information technology personnel with the appropriate skills for Year 2000-related work, and (5) assessing the nation’s Year 2000 risks, including those posed by key economic sectors. I would like to highlight some of these vulnerabilities, and our recommendations made in April 1998 for addressing them. First, governmentwide priorities in fixing systems have not yet been established. These governmentwide priorities need to be based on such criteria as the potential for adverse health and safety effects, adverse financial effects on American citizens, detrimental effects on national security, and adverse economic consequences. 
Further, while individual agencies have been identifying mission-critical systems, this has not always been done on the basis of a determination of the agency’s most critical operations. If priorities are not clearly set, the government may well end up wasting limited time and resources in fixing systems that have little bearing on the most vital government operations. Other entities have recognized the need to set priorities. For example, Canada has established 48 national priorities covering areas such as national defense, food production, safety, and income security. Second, business continuity and contingency planning across the government has been inadequate. In their May 1998 quarterly reports to OMB, only four agencies reported that they had drafted contingency plans for their core business processes. Without such plans, when unpredicted failures occur, agencies will not have well-defined responses and may not have enough time to develop and test alternatives. Federal agencies depend on data provided by their business partners as well as services provided by the public infrastructure (e.g., power, water, transportation, and voice and data telecommunications). One weak link anywhere in the chain of critical dependencies can cause major disruptions to business operations. Given these interdependencies, it is imperative that contingency plans be developed for all critical core business processes and supporting systems, regardless of whether these systems are owned by the agency. Our recently issued guidance aims to help agencies ensure such continuity of operations through contingency planning. Third, OMB’s assessment of the current status of federal Year 2000 progress is predominantly based on agency reports that have not been consistently reviewed or verified. Without independent reviews, OMB and the President’s Council on Year 2000 Conversion have little assurance that they are receiving accurate information. 
In fact, we have found cases in which agencies’ systems compliance status as reported to OMB has been inaccurate. For example, the DOD Inspector General estimated that almost three quarters of DOD’s mission-critical systems reported as compliant in November 1997 had not been certified as compliant by DOD components. In May 1998, the Department of Agriculture (USDA) reported 15 systems as compliant, even though these were replacement systems that were still under development or were planned for development. (The department removed these systems from compliant status in its August 1998 quarterly report.)

Fourth, end-to-end testing responsibilities have not yet been defined. To ensure that their mission-critical systems can reliably exchange data with other systems and that they are protected from errors that can be introduced by external systems, agencies must perform end-to-end testing for their critical core business processes. The purpose of end-to-end testing is to verify that a defined set of interrelated systems, which collectively support an organizational core business area or function, will work as intended in an operational environment. In the case of the year 2000, many systems in the end-to-end chain will have been modified or replaced. As a result, the scope and complexity of testing—and its importance—is dramatically increased, as is the difficulty of isolating, identifying, and correcting problems. Consequently, agencies must work early and continually with their data exchange partners to plan and execute effective end-to-end tests. So far, lead agencies have not been designated to take responsibility for ensuring that end-to-end testing of processes and supporting systems is performed across boundaries, and that independent verification and validation of such testing is ensured. We have set forth a structured approach to testing in our recently released exposure draft.
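At its smallest scale, end-to-end testing means driving data through every system in a chain and checking the result at the far end. The sketch below is a hedged illustration, not GAO's or any agency's methodology; the sender and receiver functions are invented stand-ins for the systems in an exchange chain:

```python
# Hypothetical end-to-end check: a "sender" system exports a record,
# a "receiver" system ingests it, and the test confirms the chain
# handles dates across the century boundary.

from datetime import date

def sender_export(d: date) -> str:
    """Sender emits dates in the agreed four-digit ISO form."""
    return d.isoformat()

def receiver_ingest(payload: str) -> date:
    """Receiver parses the agreed four-digit format."""
    return date.fromisoformat(payload)

def end_to_end(d: date) -> date:
    """Run one value through the whole chain."""
    return receiver_ingest(sender_export(d))

# Exercise the chain across the rollover; note 2000 is a leap year
# (divisible by 400), a detail some remediated code got wrong.
for d in (date(1999, 12, 31), date(2000, 1, 1), date(2000, 2, 29)):
    assert end_to_end(d) == d
```

Real end-to-end tests differ in scale, not kind: each hop in the chain must round-trip the boundary dates the partners agreed on.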
In our April 1998 report on governmentwide Year 2000 progress, we made a number of recommendations to the Chair of the President’s Council on Year 2000 Conversion aimed at addressing these problems. These included (1) establishing governmentwide priorities and ensuring that agencies set their own priorities consistent with them, (2) developing a comprehensive picture of the nation’s Year 2000 readiness, (3) requiring agencies to develop contingency plans for all critical core business processes, (4) requiring agencies to develop an independent verification strategy to involve inspectors general or other independent organizations in reviewing Year 2000 progress, and (5) designating lead agencies responsible for ensuring that end-to-end operational testing of processes and supporting systems is performed. We are encouraged by actions the Council is taking in response to some of our recommendations. For example, OMB and the Chief Information Officers Council adopted our guide providing information on business continuity and contingency planning issues common to most large enterprises as a model for federal agencies. However, as we recently testified before this Subcommittee, some actions have not been fully addressed—principally with respect to setting national priorities and end-to-end testing.

State and local governments also face a major risk of Year 2000-induced failures to the many vital services—such as benefits payments, transportation, and public safety—that they provide. For example, food stamps and other types of payments may not be made or could be made for incorrect amounts; date-dependent signal timing patterns could be incorrectly implemented at highway intersections, and safety severely compromised, if traffic signal systems run by state and local governments do not process four-digit years correctly; and criminal records (i.e., prisoner release or parole eligibility determinations) may be adversely affected by the Year 2000 problem. Recent surveys of state Year 2000 efforts have indicated that much remains to be completed.
For example, a July 1998 survey of state Year 2000 readiness conducted by the National Association of State Information Resource Executives, Inc., found that only about one-third of the states reported that 50 percent or more of their critical systems had been completely assessed, remediated, and tested. In a June 1998 survey conducted by the USDA’s Food and Nutrition Service, only 3 and 14 states, respectively, reported that the software, hardware, and telecommunications supporting the Food Stamp Program and the Women, Infants, and Children program were Year 2000 compliant. Although all but one of the states reported that they would be Year 2000 compliant by January 1, 2000, many of the states reported that their systems are not due to be compliant until after March 1999 (the federal government’s Year 2000 implementation goal). Indeed, 4 and 5 states, respectively, reported that the software, hardware, and telecommunications supporting the Food Stamp Program and the Women, Infants, and Children program would not be Year 2000 compliant until the last quarter of calendar year 1999, which puts them at high risk of failure given the extensive testing still needed. State audit organizations have identified other significant Year 2000 concerns.
For example, (1) Illinois’ Office of the Auditor General reported that significant future efforts were needed to ensure that the year 2000 would not adversely affect state government operations, (2) Vermont’s Office of Auditor of Accounts reported that the state faces the risk that critical portions of its Year 2000 compliance efforts could fail, (3) Texas’ Office of the State Auditor reported that many state entities had not finished their embedded systems inventories and, therefore, it is not likely that they will complete their embedded systems repairs before the year 2000, and (4) Florida’s Auditor General has issued several reports detailing the need for additional Year 2000 planning at various district school boards and community colleges. State audit offices have also made recommendations, including the need for increased oversight, Year 2000 project plans, contingency plans, and personnel recruitment and retention strategies. In the course of these field hearings, states and municipalities have testified about Year 2000 practices that could be adopted by others. For example: New York established a “top 40” list of priority systems having a direct impact on public health, safety, and welfare, such as systems that support child welfare, state aid to schools, criminal history, inmate population management, and tax processing. According to New York, “the Top 40 systems must be compliant, no matter what.” The city of Lubbock, Texas, is planning a Year 2000 “drill” this month. To prepare for the drill, Lubbock is developing scenarios of possible Year 2000-induced failures, as well as more normal problems (such as inclement weather) that could occur at the change of century. Louisiana established a $5 million Year 2000 funding pool to assist agencies experiencing emergency circumstances in mission-critical applications and that are unable to correct the problems with existing resources. 
According to the state’s Year 2000 Internet World Wide Web site, Illinois had created a repository of information on vendor claims regarding the Year 2000 compliance of software packages in use by various state agencies. In addition, Illinois’ Treasurer’s Office announced in July 1998 the creation of a Year 2000 Initiative task force composed of public and private officials from 10 regions in the state. This task force is charged with monitoring the progress of all financial vendors doing business with Illinois.

To fully address the Year 2000 risks that states and the federal government face, data exchanges must also be confronted—a monumental issue. As computers play an ever-increasing role in our society, exchanging data electronically has become a common method of transferring information among federal, state, and local governments. For example, SSA exchanges data files with the states to determine the eligibility of disabled persons for disability benefits. In another example, the National Highway Traffic Safety Administration provides states with information needed for driver registrations. As computer systems are converted to process Year 2000 dates, the associated data exchanges must also be made Year 2000 compliant. If the data exchanges are not Year 2000 compliant, data will not be exchanged, or invalid data could cause the receiving computer systems to malfunction or produce inaccurate computations. Our recent report on actions that have been taken to address Year 2000 issues for electronic data exchanges revealed that federal agencies and the states use thousands of such exchanges to communicate with each other and other entities. For example, federal agencies reported that their mission-critical systems have almost 500,000 data exchanges with other federal agencies, states, local governments, and the private sector.
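Where one exchange partner cannot remediate in time, a "bridge" can translate its legacy two-digit dates at the exchange boundary. The sketch below is hypothetical: the MMDDYY layout, the field names, and the pivot year are assumptions chosen for illustration, not a format any agency actually used.

```python
# Hypothetical bridge between exchange partners: one system still
# emits records with two-digit years; the receiver expects four-digit
# ISO dates. The bridge rewrites the date field in transit using a
# windowing rule agreed on with the partner.

from datetime import date

PIVOT = 50  # YY < 50 is read as 20YY, otherwise 19YY

def bridge_record(record: dict) -> dict:
    """Convert a record's MMDDYY date field to ISO YYYY-MM-DD."""
    mm, dd, yy = record["date"][:2], record["date"][2:4], int(record["date"][4:6])
    yyyy = 2000 + yy if yy < PIVOT else 1900 + yy
    fixed = dict(record)  # leave the original record untouched
    fixed["date"] = date(yyyy, int(mm), int(dd)).isoformat()
    return fixed

legacy = {"id": "A123", "date": "010300"}   # January 3, 2000 as MMDDYY
print(bridge_record(legacy)["date"])        # "2000-01-03"
```

The pivot is exactly the kind of detail the partners must agree on in advance: a bridge using one window and a filter using another will silently disagree about which century a record belongs to.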
To successfully remediate their data exchanges, federal agencies and the states must (1) assess information systems to identify data exchanges that are not Year 2000 compliant, (2) contact exchange partners and reach agreement on the date format to be used in the exchange, (3) determine if data bridges and filters are needed and, if so, reach agreement on their development, (4) develop and test such bridges and filters, (5) test and implement new exchange formats, and (6) develop contingency plans and procedures for data exchanges. At the time of our review, much work remained to ensure that federal and state data exchanges will be Year 2000 compliant. About half of the federal agencies reported during the first quarter of 1998 that they had not yet finished assessing their data exchanges. Moreover, almost half of the federal agencies reported that they had reached agreements on 10 percent or fewer of their exchanges, few federal agencies reported having installed bridges or filters, and only 38 percent of the agencies reported that they had developed contingency plans for data exchanges. Further, the status of the data exchange efforts of 15 of the 39 state-level organizations that responded to our survey was not discernable because they were not able to provide us with information on their total number of exchanges and the number assessed. Of the 24 state-level organizations that provided actual or estimated data, they reported, on average, that 47 percent of the exchanges had not been assessed. In addition, similar to the federal agencies, state-level organizations reported having made limited progress in reaching agreements with exchange partners, installing bridges and filters, and developing contingency plans. However, we could draw only limited conclusions on the status of the states’ actions because data were provided on only a small portion of states’ data exchanges. To strengthen efforts to address data exchanges, we made several recommendations to OMB. 
In response, OMB agreed that it needed to increase its efforts in this area. For example, OMB noted that federal agencies had provided the General Services Administration with a list of their data exchanges with the states. In addition, as a result of an agreement reached at an April 1998 federal/state data exchange meeting, the states were supposed to verify the accuracy of these initial lists by June 1, 1998. OMB also noted that the General Services Administration is planning to collect and post information on its Internet World Wide Web site on the progress of federal agencies and states in implementing Year 2000 compliant data exchanges.

In summary, federal, state, and local efforts must increase substantially to ensure that major service disruptions do not occur. Greater leadership and partnerships are essential if government programs are to meet the needs of the public at the turn of the century. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions that you or other members of the Subcommittee may have at this time.

FAA Systems: Serious Challenges Remain in Resolving Year 2000 and Computer Security Problems (GAO/T-AIMD-98-251, August 6, 1998). Year 2000 Computing Crisis: Business Continuity and Contingency Planning (GAO/AIMD-10.1.19, August 1998). Internal Revenue Service: Impact of the IRS Restructuring and Reform Act on Year 2000 Efforts (GAO/GGD-98-158R, August 4, 1998). Social Security Administration: Subcommittee Questions Concerning Information Technology Challenges Facing the Commissioner (GAO/AIMD-98-235R, July 10, 1998). Year 2000 Computing Crisis: Actions Needed on Electronic Data Exchanges (GAO/AIMD-98-124, July 1, 1998). Defense Computers: Year 2000 Computer Problems Put Navy Operations at Risk (GAO/AIMD-98-150, June 30, 1998). Year 2000 Computing Crisis: A Testing Guide (GAO/AIMD-10.1.21, Exposure Draft, June 1998).
Year 2000 Computing Crisis: Testing and Other Challenges Confronting Federal Agencies (GAO/T-AIMD-98-218, June 22, 1998). Year 2000 Computing Crisis: Telecommunications Readiness Critical, Yet Overall Status Largely Unknown (GAO/T-AIMD-98-212, June 16, 1998). GAO Views on Year 2000 Testing Metrics (GAO/AIMD-98-217R, June 16, 1998). IRS’ Year 2000 Efforts: Business Continuity Planning Needed for Potential Year 2000 System Failures (GAO/GGD-98-138, June 15, 1998). Year 2000 Computing Crisis: Actions Must Be Taken Now to Address Slow Pace of Federal Progress (GAO/T-AIMD-98-205, June 10, 1998). Defense Computers: Army Needs to Greatly Strengthen Its Year 2000 Program (GAO/AIMD-98-53, May 29, 1998). Year 2000 Computing Crisis: USDA Faces Tremendous Challenges in Ensuring That Vital Public Services Are Not Disrupted (GAO/T-AIMD-98-167, May 14, 1998). Securities Pricing: Actions Needed for Conversion to Decimals (GAO/T-GGD-98-121, May 8, 1998). Year 2000 Computing Crisis: Continuing Risks of Disruption to Social Security, Medicare, and Treasury Programs (GAO/T-AIMD-98-161, May 7, 1998). IRS’ Year 2000 Efforts: Status and Risks (GAO/T-GGD-98-123, May 7, 1998). Air Traffic Control: FAA Plans to Replace Its Host Computer System Because Future Availability Cannot Be Assured (GAO/AIMD-98-138R, May 1, 1998). Year 2000 Computing Crisis: Potential for Widespread Disruption Calls for Strong Leadership and Partnerships (GAO/AIMD-98-85, April 30, 1998). Defense Computers: Year 2000 Computer Problems Threaten DOD Operations (GAO/AIMD-98-72, April 30, 1998). Department of the Interior: Year 2000 Computing Crisis Presents Risk of Disruption to Key Operations (GAO/T-AIMD-98-149, April 22, 1998). Tax Administration: IRS’ Fiscal Year 1999 Budget Request and Fiscal Year 1998 Filing Season (GAO/T-GGD/AIMD-98-114, March 31, 1998). Year 2000 Computing Crisis: Strong Leadership Needed to Avoid Disruption of Essential Services (GAO/T-AIMD-98-117, March 24, 1998). 
Year 2000 Computing Crisis: Federal Regulatory Efforts to Ensure Financial Institution Systems Are Year 2000 Compliant (GAO/T-AIMD-98-116, March 24, 1998). Year 2000 Computing Crisis: Office of Thrift Supervision’s Efforts to Ensure Thrift Systems Are Year 2000 Compliant (GAO/T-AIMD-98-102, March 18, 1998). Year 2000 Computing Crisis: Strong Leadership and Effective Public/Private Cooperation Needed to Avoid Major Disruptions (GAO/T-AIMD-98-101, March 18, 1998). Post-Hearing Questions on the Federal Deposit Insurance Corporation’s Year 2000 (Y2K) Preparedness (AIMD-98-108R, March 18, 1998). SEC Year 2000 Report: Future Reports Could Provide More Detailed Information (GAO/GGD/AIMD-98-51, March 6, 1998). Year 2000 Readiness: NRC’s Proposed Approach Regarding Nuclear Powerplants (GAO/AIMD-98-90R, March 6, 1998). Year 2000 Computing Crisis: Federal Deposit Insurance Corporation’s Efforts to Ensure Bank Systems Are Year 2000 Compliant (GAO/T-AIMD-98-73, February 10, 1998). Year 2000 Computing Crisis: FAA Must Act Quickly to Prevent Systems Failures (GAO/T-AIMD-98-63, February 4, 1998). FAA Computer Systems: Limited Progress on Year 2000 Issue Increases Risk Dramatically (GAO/AIMD-98-45, January 30, 1998). Defense Computers: Air Force Needs to Strengthen Year 2000 Oversight (GAO/AIMD-98-35, January 16, 1998). Year 2000 Computing Crisis: Actions Needed to Address Credit Union Systems’ Year 2000 Problem (GAO/AIMD-98-48, January 7, 1998). Veterans Health Administration Facility Systems: Some Progress Made In Ensuring Year 2000 Compliance, But Challenges Remain (GAO/AIMD-98-31R, November 7, 1997). Year 2000 Computing Crisis: National Credit Union Administration’s Efforts to Ensure Credit Union Systems Are Year 2000 Compliant (GAO/T-AIMD-98-20, October 22, 1997). Social Security Administration: Significant Progress Made in Year 2000 Effort, But Key Risks Remain (GAO/AIMD-98-6, October 22, 1997). 
Defense Computers: Technical Support Is Key to Naval Supply Year 2000 Success (GAO/AIMD-98-7R, October 21, 1997). Defense Computers: LSSC Needs to Confront Significant Year 2000 Issues (GAO/AIMD-97-149, September 26, 1997). Veterans Affairs Computer Systems: Action Underway Yet Much Work Remains To Resolve Year 2000 Crisis (GAO/T-AIMD-97-174, September 25, 1997). Year 2000 Computing Crisis: Success Depends Upon Strong Management and Structured Approach (GAO/T-AIMD-97-173, September 25, 1997). Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14, September 1997). Defense Computers: SSG Needs to Sustain Year 2000 Progress (GAO/AIMD-97-120R, August 19, 1997). Defense Computers: Improvements to DOD Systems Inventory Needed for Year 2000 Effort (GAO/AIMD-97-112, August 13, 1997). Defense Computers: Issues Confronting DLA in Addressing Year 2000 Problems (GAO/AIMD-97-106, August 12, 1997). Defense Computers: DFAS Faces Challenges in Solving the Year 2000 Problem (GAO/AIMD-97-117, August 11, 1997). Year 2000 Computing Crisis: Time Is Running Out for Federal Agencies to Prepare for the New Millennium (GAO/T-AIMD-97-129, July 10, 1997). Veterans Benefits Computer Systems: Uninterrupted Delivery of Benefits Depends on Timely Correction of Year-2000 Problems (GAO/T-AIMD-97-114, June 26, 1997). Veterans Benefits Computer Systems: Risks of VBA’s Year-2000 Efforts (GAO/AIMD-97-79, May 30, 1997). Medicare Transaction System: Success Depends Upon Correcting Critical Managerial and Technical Weaknesses (GAO/AIMD-97-78, May 16, 1997). Medicare Transaction System: Serious Managerial and Technical Weaknesses Threaten Modernization (GAO/T-AIMD-97-91, May 16, 1997). Year 2000 Computing Crisis: Risk of Serious Disruption to Essential Government Functions Calls for Agency Action Now (GAO/T-AIMD-97-52, February 27, 1997). Year 2000 Computing Crisis: Strong Leadership Today Needed To Prevent Future Disruption of Government Services (GAO/T-AIMD-97-51, February 24, 1997). 
High-Risk Series: Information Management and Technology (GAO/HR-97-9, February 1997).
Actions can also include compliance and monitoring, such as reviewing disclosures by exporters of possible export control violations, prelicense checks, and postshipment verifications. See GAO, Export Controls: Post-Shipment Verification Provides Limited Assurance That Dual-use Items Are Being Properly Used, GAO-04-357 (Washington, D.C.: Jan. 12, 2004); and Defense Trade: Arms Export Control System in the Post 9/11 Environment, GAO-05-234 (Washington, D.C.: Feb. 16, 2005). Several agencies have authority to inspect, investigate, and take punitive action against potential violators of U.S. export control laws. These authorities provide the Federal Bureau of Investigation (FBI) and Immigration and Customs Enforcement (ICE) with overlapping jurisdiction to investigate potential violations involving defense items, and FBI, ICE, and Commerce's Office of Export Enforcement (OEE) with overlapping jurisdiction to investigate potential violations involving dual-use items. Inspections of items scheduled for export are routinely conducted at U.S. air, sea, and land ports as part of U.S. Customs and Border Protection (CBP) officers' responsibilities for enforcing U.S. import and export control laws and regulations at our nation's ports of entry. CBP's enforcement activities include inspection of outbound cargo through a risk-based approach using CBP's automated targeting systems to assess the risk of each shipment, review and validation of documentation presented for licensable items, detention of questionable shipments, and seizure of shipments and issuance of monetary penalties for items found to be in violation of U.S. export control laws. According to CBP officials, almost 3 million shipments per month are exported from the United States. Investigations of potential violations of export control laws for dual-use items are conducted by agents from OEE, ICE, and FBI. Investigations of potential export control violations involving defense items are conducted by ICE and FBI agents.
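The risk-based outbound targeting that CBP describes can be illustrated with a toy filter. The factors, weights, and watch lists below are hypothetical assumptions for illustration; CBP's actual targeting rules are not public.

```python
# Illustrative sketch only: a toy risk score for outbound shipments, in the
# spirit of CBP's automated targeting. All factors, weights, and lists here
# are assumptions, not CBP's actual criteria.

HIGH_RISK_DESTINATIONS = {"IR", "KP"}  # hypothetical watch list (country codes)
CONTROLLED_KEYWORDS = {"night vision", "accelerometer", "gyroscope"}

def risk_score(shipment: dict) -> int:
    """Score a shipment; higher scores mean higher priority for inspection."""
    score = 0
    if shipment["destination"] in HIGH_RISK_DESTINATIONS:
        score += 50  # destination on the watch list
    if any(kw in shipment["description"].lower() for kw in CONTROLLED_KEYWORDS):
        score += 30  # description suggests a possibly controlled item
    if not shipment.get("license_number"):
        score += 20  # no export license cited on the paperwork
    return score

def flag_for_inspection(shipments: list, threshold: int = 50) -> list:
    """Inspect only shipments whose risk score meets the threshold."""
    return [s for s in shipments if risk_score(s) >= threshold]
```

With almost 3 million shipments exported per month, a threshold of this kind is what makes inspection tractable: officers review only the small fraction of shipments that score high, rather than sampling uniformly.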
OEE and ICE are authorized to investigate potential violations involving dual-use items. ICE is also authorized to investigate potential violations involving defense items. The FBI has authority to investigate any criminal violation of law not exclusively assigned to another agency, and is mandated to investigate and oversee export control violations with a counterintelligence concern. The investigative agencies have various tools for investigating potential violations (see table 2) and establishing cases for potential criminal or administrative punitive actions. Punitive actions, which are either criminal or administrative, are taken against violators of export control laws and regulations, and may involve U.S. or foreign individuals and companies. Criminal violations are those cases where the evidence shows that the exporter willfully violated export control laws. U.S. Attorneys' Offices prosecute export control enforcement criminal cases in consultation with Justice's National Security Division. These cases can result in imprisonment, fines, forfeitures, and other penalties. Punitive actions for administrative violations can include fines, suspension of an export license, or denial or debarment from exporting, and are imposed primarily by State or Commerce, depending on whether the violation involves the export of a defense or a dual-use item. For example, Commerce can impose the administrative sanction of placing parties acting contrary to the national security or foreign policy interests of the United States on a list that prevents their receipt of items subject to Commerce controls. The Treasury's Office of Foreign Assets Control (OFAC) administers and enforces economic sanctions programs primarily against countries and groups of individuals, such as terrorists and narcotics traffickers. The sanctions can be either comprehensive or selective, using the blocking of assets and trade restrictions to accomplish foreign policy and national security goals.
In some cases, both criminal and administrative penalties can be levied against an export control violator. In fiscal year 2010, Justice data showed that 56 individuals or companies were convicted of criminal violations of export control laws, and State and Commerce reported more than $25.4 million in administrative fines and penalties for that year. In 2011, over a third of the major U.S. export control enforcement and embargo-related criminal prosecutions involved the illegal transfer of U.S. military, nuclear, or technical data to Iran and China. Agencies use some form of a risk-based approach when allocating resources to export control enforcement, as their missions are broader than export controls. Because agencies can use these resources for other activities based on need, tracking resources used solely on export control enforcement activities is difficult. Only OEE allocates all of its resources exclusively to export control enforcement, as that is its primary mission, and State and the Treasury have relatively few export control enforcement staff to track. Agencies' risk-based resource allocation approach incorporates a variety of information, including workload and threat assessment data, but has not generally included data on resources used for export control enforcement activities, as agencies did not implement systems to fully track this information until recently. Given the overlapping jurisdiction of several enforcement agencies, in some cities agencies have voluntarily created local task forces that bring together enforcement resources to work collectively on cases, informally leveraging resources. Agencies determine their missions based on statutes, policy, and directives, and articulate their fundamental missions in their strategic plans. Based on our review of these documents, as well as discussions with senior agency officials, agencies with primary export control enforcement responsibility have multiple missions that extend beyond export controls, as shown in table 3, except for OEE.
As such, these agencies are faced with balancing multiple priorities when allocating staff resources. OEE's primary mission is export control enforcement and, as such, it is the only agency that has been able to fully track the resources used on these activities. To formulate its budget and allocate its investigators, OEE conducts threat assessments with a priority related to weapons of mass destruction, terrorism, and unauthorized military use, and analyzes export control enforcement case workload, including the prior year's investigative statistics on arrests, indictments, and convictions. OEE also recently completed a field office expansion study to decide which cities would be the best locations for additional OEE field offices. In this study, OEE considered the volume of licensed and unlicensed exports and the type of high-tech items exported from different areas of the United States, and concluded that Atlanta, GA; Cincinnati, OH; Phoenix, AZ; and Portland, OR, were optimal locations, but it has not received budget approval for expansion. CBP reemphasized outbound operations with the creation of its Outbound Enforcement Division in March 2009 to help prevent terrorist groups, rogue nations, and other criminal organizations from obtaining defense and dual-use commodities; enforce sanctions and trade embargoes; and increase exporter compliance. CBP determines the number of staff to allocate to outbound inspections through a risk-based approach based on prior workload and a quarterly threat matrix, which includes the volume of outbound cargo and passengers, port threat assessments, and the numbers and types of seizures and arrests at the ports for items such as firearms and currency. As of fiscal year 2010, CBP had allocated approximately 660 officers for outbound enforcement activities, but these officers can be used for activities other than export control enforcement at any time, when needed.
For example, the Port of Baltimore has officers assigned to perform outbound activities at both the airport and seaport, some of which focus on the enforcement of controlled shipments in the seaport environment. According to the Port Director, any of these officers can be redirected at any time and often are assigned to the airport during the busy airline arrival times, to perform inbound inspection duties—based on priorities. Further, CBP does not track the hours that its officers across the country spend on export control enforcement activities, but is in the process of implementing a system to do so. CBP officials stated that determining the right mix of officers is complex and changes to its tracking system should allow for better planning and accounting for resources used for outbound activities in the future. ICE’s Homeland Security Investigations, Counter-Proliferation Investigations Unit focuses on preventing sensitive U.S. technologies and weapons from reaching the hands of adversaries and conducts export control investigations. To determine how many investigators it should allocate to this unit, ICE uses information including operational threat assessments and case data from the previous year, by field office, on total numbers of arrests, indictments, convictions, seizures, and investigative hours expended on export control investigations. For example, it assigns a tier level for each of its 70 field offices, based on threat assessments—ranging from 1 for the highest threat, resulting in a larger number of agents assigned to these offices; to 5 for the lowest threat, with a lower number of agents assigned. To further prioritize resources, in 2010, ICE established Counter Proliferation Investigations Centers in selected cities throughout the United States, with staff focused solely on combating illegal exports and illicit procurement networks seeking to acquire vital U.S. technology. 
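The tier-based allocation ICE describes, from tier 1 for the highest-threat field offices down to tier 5 for the lowest, can be sketched in a few lines. The score cutoffs, tier weights, and office data below are invented for illustration and are not ICE's actual formula.

```python
# Hypothetical sketch of tier-based staffing: offices are ranked by threat
# assessment (tier 1 = highest threat) and a fixed pool of agents is split
# in proportion to tier weight. All cutoffs and weights are assumptions.

def assign_tier(threat_score: float) -> int:
    """Map a 0-100 threat score to a tier from 1 (highest) to 5 (lowest)."""
    if threat_score >= 80:
        return 1
    if threat_score >= 60:
        return 2
    if threat_score >= 40:
        return 3
    if threat_score >= 20:
        return 4
    return 5

def allocate_agents(offices: dict, total_agents: int) -> dict:
    """Split the agent pool across offices, weighting higher-threat tiers more."""
    tier_weights = {1: 5, 2: 4, 3: 3, 4: 2, 5: 1}  # hypothetical weights
    weights = {name: tier_weights[assign_tier(score)]
               for name, score in offices.items()}
    total = sum(weights.values())
    return {name: round(total_agents * w / total) for name, w in weights.items()}

# Invented example data: three offices sharing a pool of 30 agents.
print(allocate_agents({"Los Angeles": 85, "Baltimore": 55, "Portland": 25}, 30))
# → {'Los Angeles': 15, 'Baltimore': 9, 'Portland': 6}
```

The design point is the one the report makes: a risk-based approach concentrates scarce investigators where threat assessments are highest, rather than spreading them evenly across all 70 field offices.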
ICE concluded that it needed to form these centers to combat the specialized nature of complex export control cases and determined that its previous method of distributing resources needed refinement, noting that some ICE field office managers had difficulty in balancing numerous competing programmatic priorities and initiatives. According to ICE officials, they plan to mitigate these concerns by having staff and facilities focused solely on export control enforcement cases, which will allow ICE to track and use this information to better determine future resource needs. The FBI, with both an investigative and intelligence mission, does not allocate resources solely for export control enforcement, and officials told us they view these activities as a tool to gain intelligence that may lead to more robust cases. Nevertheless, cases involving export controls are primarily led by agents within the Counterintelligence Division. To determine the number of agents to allocate to this division, the FBI uses a risk management process and threat assessments. Several years ago, the FBI established at least one Counterintelligence squad in each of its 56 field offices. In July 2011, the FBI established a Counterproliferation Center, merging its Counterintelligence Division and its Weapons of Mass Destruction Directorate to better focus their efforts and resources. The FBI is in the process of implementing new codes within its resource tracking system to obtain better information on agents' distribution of work, which will include time spent on investigations of defense and dual-use items. U.S. Attorneys' Offices have discretion to determine the resources that they will allocate to export control enforcement cases, based on national priorities and the individual priorities of the 94 districts. These priorities include law enforcement concerns for their district and leads from investigative agencies.
In response to the risk associated with national security, which includes export control enforcement cases, staffing for national security activities has increased, and several districts have created national security sections within their offices. In 2008, the Executive Office for U.S. Attorneys provided codes for charging time and labeling cases to obtain better information on the U.S. Attorneys' Offices' distribution of work and the resources used for export control enforcement. However, some Assistant U.S. Attorneys told us that the time-keeping system is complicated, as there are multiple codes and subcategories in the tracking system and determining the correct codes is often subjective, making it difficult to track time spent on export control enforcement cases. Senior agency officials acknowledged this concern and are working with the U.S. Attorneys' Offices to provide better guidance to improve the accuracy of attorney time charges. Other offices, such as State's Office of the Legal Adviser for Political-Military Affairs and Commerce's Office of the Chief Counsel for the Bureau of Industry and Security, assist the enforcement agencies by providing legal support. For example, Commerce's Office of the Chief Counsel pursues administrative enforcement actions against individuals and entities, but also reviews and advises on OEE recommendations for other administrative actions, such as temporary denials of licenses. In addition, DDTC and OFAC pursue administrative enforcement actions against violators. For example, OFAC administers and enforces U.S. economic and trade sanctions against designated foreign countries. While not all staff in these offices are allocated to export control enforcement, these offices have relatively few staff to track. In addition to a domestic presence, most export control enforcement agencies also allocate resources overseas, but only Commerce allocates resources exclusively to export control enforcement.
For example, Commerce maintains Export Control Officers in six locations abroad: Beijing and Hong Kong, China; Abu Dhabi, UAE; New Delhi, India; Moscow, Russia; and Singapore, to support its dual-use export control enforcement activities. Given that these officers have regional responsibilities, they cover additional locations; for example, the Export Control Officer assigned to Singapore also covers Malaysia and Indonesia. While other agencies have field locations in many overseas locations, these resources support the agencies' broader missions and can be used for other duties based on overseas mission priorities. For example, ICE has 70 offices in 47 foreign countries with more than 380 government and contract personnel that support all ICE enforcement activities, including export control, and they can also be called upon to support various other DHS mission priorities. Specifically, the ICE agents we met with at the U.S. Embassy in Abu Dhabi also conduct activities in support of the full DHS mission, and a great portion of their time is spent on visa security, with a lesser amount on export control enforcement activities. The export control enforcement investigative agencies often have offices located in the same cities or geographic areas. In many of these cities, agencies' officials said that they informally leverage each other's tools, authorities, and resources to coordinate investigations and share intelligence through local task forces, allowing them to use resources more efficiently and avoid duplicating efforts or interfering with each other's cases. In 2007, Justice's National Export Enforcement Initiative encouraged local field offices with a significant export control threat to create task forces or other alternatives to coordinate enforcement efforts in their area. Since then, almost 20 U.S.
Attorneys' Offices have created task forces on their own initiative or in conjunction with another enforcement agency, primarily in cities where these agencies are co-located, to facilitate the investigation and prosecution of export control cases. Figure 1 shows the location of investigative agencies' major field offices, as well as the location of export control enforcement task forces. Most of the task force members we met with in Baltimore, Los Angeles, and San Francisco stated that they see benefits beyond the coordination of cases, including investigating cases together and sharing resources. Baltimore's Counterproliferation Task Force: ICE and the U.S. Attorneys' Office created this task force in 2010, and it has representatives from each of the enforcement agencies located in the area, as well as the defense and intelligence communities. Task force officials stated that they develop and investigate export control cases together and, to enhance interagency collaboration, ICE has supplied work space, allowing agents from other agencies to work side by side to pursue leads and conduct investigations. Officials emphasized that the task force enables smaller agencies with fewer resources to leverage the work and expertise of the others to further their investigations and seek prosecutions. Sometimes the task force structure yields benefits that individual agencies cannot achieve on their own, as exemplified by the Baltimore Counterproliferation Task Force: among its successes was the case of a Maryland man sentenced to 8 months in prison, followed by 3 years of supervised release, for illegally exporting export-controlled night vision equipment. Los Angeles' Export and Anti-proliferation Global Law Enforcement (EAGLE) Task Force: The U.S. Attorney established this task force in 2008 as a result of Justice's counter-proliferation initiatives. Its purpose is to coordinate and develop expertise in export control investigations.
Currently, there are over 80 members from 17 Los Angeles-based federal agencies. According to a task force official, the EAGLE task force has resulted in increased priority on export control investigations and improved interagency cooperation since it was established. For instance, the enforcement agencies are now more effectively sharing information in their respective databases. A task force official noted that enhanced access to these databases allows agencies to reduce duplication of license determination requests and to easily retrieve information on a particular person's or commodity's history using the search options. Additionally, through the task force structure, ICE and OEE agents have worked together to conduct additional outreach to industry affiliates. San Francisco's Strategic Technology Task Force: According to officials, this task force was formed by the FBI in 2004, with a primary focus on conducting joint export control outreach activities to academia and industry with the other investigative agencies (ICE and OEE). This task force also includes participation by the military service intelligence units and other law enforcement agencies. FBI task force leaders stated that this task force has helped to coordinate outreach activities as well as to generate investigative leads. According to an agent from the FBI's San Jose field office, that office has a performance goal of conducting 90 percent of its export control-related investigations jointly with investigative agents at ICE and Commerce. Although successful cases of joint collaboration among agencies can yield positive enforcement outcomes, as reported by the offices in the three cities we visited, the extent to which these alliances are effective depends primarily on the personal dynamics of a given region, agency, and law enforcement culture. In addition, these local agency task forces for export control enforcement vary in structure, are voluntary, and do not exist nationwide.
For example, while multiple investigative agencies have local offices in Chicago and Dallas with export control enforcement agents, agencies do not have a local task force in these cities to regularly coordinate on export control cases. While agency officials shared examples of agencies informally leveraging each other’s resources, officials told us that they do not factor in such resources when planning their own agency allocations for a variety of reasons, including each agency’s separate budgets and missions, which do not generally consider those of other agencies. Enforcement agencies face several challenges in investigating illicit transshipments, both domestically and overseas—including license determination delays; limited access in some overseas locations; and a lack of effectiveness measures that reflect the complexity and qualitative benefits of export control cases. Recognizing broader challenges in export control enforcement, the President announced the creation of a national export enforcement coordination center, which may help agencies address some of the challenges described below, but detailed plans to do so have yet to be developed. The current export control enforcement system poses several challenges that potentially reduce the effectiveness of activities and limit the identification and investigation of illicit transshipments. Export control enforcement agencies seek to keep defense and dual-use items from being illegally exported through intermediary countries or locations to an unauthorized final destination, such as Iran, but agencies face challenges that can impact their ability to investigate export control violations, both domestically and overseas. First, license determinations—which confirm whether an item is controlled and requires a license, and thereby help confirm whether an export control violation has occurred—can sometimes be delayed, potentially hindering investigations and prosecutions. 
Second, investigators have limited access to secure communications and cleared staff in several domestic field offices, which can limit their ability to share timely and important information. Third, agencies have limited access to ports and facilities overseas. Fourth, agencies lack consistent data to quantify and identify trends and patterns in illicit transshipments of U.S. export-controlled items. Lastly, investigative agencies lack measures of effectiveness that fully reflect the complexity and qualitative benefits of export control cases. License Determination Delays. To confirm whether a defense or dual-use item is controlled and requires a license, inspectors, investigators, and prosecutors request license determinations from the licensing agencies of State and Commerce. These license determinations are integral to enforcement agencies' ability to seize items, pursue investigations, or seek prosecutions. DHS's Exodus Command Center operates the Exodus Accountability Referral System, an ICE database that initiates, tracks, and manages enforcement agency requests for license determinations from the licensing agencies. The system identifies three different levels of license determinations: initial (to seize an item or begin an investigation), pre-trial (to obtain a search warrant, among other things), and trial (to be used during trial proceedings). The Exodus Command Center has established internal timeliness goals for receiving responses to requests for initial determinations within 3 days, pre-trial certifications within 45 days, and trial certifications within 30 days. However, as shown in table 5, these goals are often not met, which can create barriers for enforcement agencies in seizing shipments before they depart the United States, obtaining search warrants, and making timely arrests.
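The internal goals just cited (initial determinations within 3 days, pre-trial certifications within 45, trial certifications within 30) lend themselves to a simple compliance check. The function and sample dates below are illustrative only; the goal values are the ones stated in the report.

```python
# Sketch: check license-determination requests against the Exodus Command
# Center's internal timeliness goals cited above. Field names and the
# example request dates are assumptions for illustration.
from datetime import date

GOALS_DAYS = {"initial": 3, "pre-trial": 45, "trial": 30}  # from the report

def met_goal(level: str, requested: date, answered: date) -> bool:
    """Return True if the response arrived within the internal goal for its level."""
    return (answered - requested).days <= GOALS_DAYS[level]

# Hypothetical requests: an initial determination answered in 2 days, and a
# pre-trial certification answered in 61 days (missing the 45-day goal).
print(met_goal("initial", date(2011, 6, 1), date(2011, 6, 3)))    # True
print(met_goal("pre-trial", date(2011, 6, 1), date(2011, 8, 1)))  # False
```

A check of this form is what a table like the report's table 5 summarizes: for each level, the share of requests whose response arrived inside the goal window.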
Given the wide-ranging missions of most of the agencies involved in export control enforcement, it is essential that agencies track resources expended on export control inspections, investigations, and prosecutions to assess how these resources are contributing to fulfilling their missions and are focused on the highest priorities in export control enforcement. While agencies such as DHS and Justice have recognized the need to better track their resources, a more comprehensive approach, including enhanced measures of effectiveness, could help these and other enforcement agencies assess workload and efficiency in making resource allocations and in determining whether changes are warranted. The creation of the Export Enforcement Coordination Center presents such an opportunity for the entire export control enforcement community. The center has the potential to become more than a co-location of enforcement agencies; it can be a conduit for more effectively managing export control resources. As the center's operation progresses, it has the opportunity to address ongoing challenges in export control enforcement, including reducing potential overlap in investigations, and to help agencies work as efficiently as possible, maximize available intelligence and agency investigative data, and measure the effectiveness of U.S. export control enforcement activities. Challenges presented by delays in license determinations can affect the inspection, investigation, and prosecution of export control cases but may be outside of the mission of the center, since they primarily involve the licensing agencies. Having goals for processing license determinations can help establish transparency and accountability in the process. Given that the licensing agencies and the Exodus Command Center have not agreed to timeliness goals for responding to such requests, these agencies may benefit from collaborating to help improve the effectiveness of the process.
To better inform management and resource allocation decisions, effectively manage limited export control enforcement resources, and improve the license determination process, we are making the following four recommendations: We recommend that the Secretary of Homeland Security and the Attorney General, as they implement efforts to track resources expended on export control enforcement activities, use such data to make resource allocation decisions. We recommend that the Secretaries of Commerce and Homeland Security, as they develop and implement qualitative measures of effectiveness, ensure that these measures assess progress toward their overall goal of preventing or deterring illegal exports. We recommend that the Secretary of Homeland Security, in consultation with the departmental representatives of the Export Enforcement Coordination Center, including Commerce, Justice, State, and the Treasury, leverage export control enforcement resources across agencies by building on existing agency efforts to track resources expended, as well as existing agency coordination at the local level; establish procedures to facilitate data sharing between the enforcement agencies and the intelligence community to measure illicit transshipment activity; and develop qualitative and quantitative measures of effectiveness for the entire enforcement community to baseline and trend these data. We recommend that the Secretaries of Commerce and State, in consultation with the Secretary of Homeland Security, the Attorney General, and other agencies as appropriate, establish agreed-upon timeliness goals for responding to license determination requests, considering agency resources, the level of determination, the complexity of the request, and other associated factors. We provided a draft copy of this report to Commerce, DHS, DOD, Justice, State, and Treasury for their review and comment.
Commerce, DHS, Justice, and State concurred with the report’s recommendations and, along with DOD, provided technical comments which we incorporated as appropriate. Treasury did not provide any comments on the report. As multiple agencies have responsibilities for export control enforcement, several of our recommendations call for these agencies to work together to effectively manage limited export control enforcement resources and to improve the license determination process. In their comments, Commerce and State agreed to work in consultation with DHS and Justice to establish timeliness goals for license determinations. In its comments, DHS stated its intent to work with the other agencies to improve the license determination process as well as take steps to deploy its resources in the most effective and efficient manner and provided target dates for completing these actions. In particular, DHS noted that ongoing tracking efforts by CBP and ICE will be used to improve their knowledge of resources expended on export control enforcement activities and that they will periodically review this information to determine the overall direction of the export control program. Additionally, DHS stated its intent to establish a working group with other agencies to develop performance measures related to export control enforcement to help estimate the effectiveness of all associated law enforcement activity. Written comments from Commerce, DHS, and State are reprinted in appendixes II, III, and IV, respectively. We are sending copies of this report to interested congressional committees, as well as the Secretaries of Commerce, Defense, Homeland Security, State, and Treasury as well as the Attorney General. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. 
If you or your staff have any questions about matters discussed in this report, please contact me at (202) 512-4841 or martinb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To determine how agencies allocate staff resources for export control enforcement activities, we interviewed cognizant officials and examined relevant documents such as agencies' budgets, strategic plans, memorandums, and other documentation on resources. We interviewed officials about their resources at the headquarters of Commerce, DHS, Justice, State, and the Treasury. We also discussed with DOD officials their role in providing investigative support to agencies responsible for export control enforcement. We developed and used a set of structured questions to interview each agency's resource planners to determine how they allocate resources, what information and factors they consider in resource allocation decisions, what their enforcement priorities are, whether they track resources expended on enforcement, whether they had conducted an analysis of their resource needs, and whether they consider or leverage other agencies' resources. We obtained applicable criteria, including Office of Management and Budget Circular A-11 and departmental guidance on resource allocation and tracking. We also reviewed previous GAO and inspector general reports regarding the Government Performance and Results Act (GPRA), as amended, and resource management for enforcement programs. To determine current resource levels, we obtained geographic locations of all domestic staff conducting export control enforcement, actual expenditures on export control enforcement activities, and information on staffing levels from each agency for fiscal years 2006 through 2010.
We did not independently verify the accuracy of agency information on expenditures and staffing levels, but we corroborated this information with cognizant agency officials. We considered agencies' overall resources for their broad enforcement authorities and the resources allocated to export control enforcement specifically. Finally, we analyzed agencies' budget requests, expenditures, and staff hours to determine agencies' current resource commitments and how agencies have allocated resources to export control enforcement activities. To determine the challenges that agencies face in investigating illicit transshipments and the potential impact of export control reform initiatives on enforcement activities, we interviewed cognizant officials, examined and analyzed relevant export control documents and statutes, and conducted site visits both domestically and overseas. We interviewed officials about their enforcement priorities at the headquarters of Commerce, DHS, Justice, and State. We also discussed with DOD officials their role in providing license determination support to agencies responsible for export control enforcement. We developed and used a set of structured questions to interview enforcement agency officials in selected domestic and overseas locations and observed export enforcement operations at those locations that had air, land, and seaports. We selected sites to visit based on various factors, including geographical areas where all enforcement agencies were represented with a large percentage of the investigative caseload; areas with a mix of defense and high-tech companies; ports with a high volume of trade in U.S. commodities; areas with a large presence of aerospace, electronics, and software industries; and headquarters officials' recommendations on key areas of export control enforcement activities both domestically and abroad.
On the basis of these factors, we visited Irvine, Long Beach, Los Angeles, Oakland, San Francisco, and San Jose, CA; Washington, D.C.; and Baltimore, MD. Internationally, we interviewed United States Embassy and Consulate officials and host government authorities in Hong Kong, Singapore, and in Abu Dhabi and Dubai in the United Arab Emirates (UAE). We received briefings on the export control systems from the Hong Kong government’s Trade and Industry Department and Customs and Excise Tax Department and from Singapore’s Ministry of Foreign Affairs and Immigration and Customs Authority, and we toured ports at these locations. We also received a briefing from the Hong Kong Customs Airport Command on air cargo and air-to-air transshipment of strategic commodities and visited the DHL Hub at the Hong Kong International Airport. In the UAE, we visited the Government of Sharjah’s Department of Seaports & Customs and the Hamriyah Free Zone Authority, where we met with the Director and the Security and Safety Manager to discuss the Hamriyah Free Zone. We reviewed the findings and recommendations of past GAO reports and documentation from enforcement agencies, and we interviewed U.S. government officials from these agencies as well as their field offices. We also met with several agency representatives of the Export Control Reform Task Force and reviewed recent White House press releases on the export reform initiatives. Further, we examined Federal Register notices on changing regulations related to the export control reform initiative.

We conducted this performance audit from February 2011 through March 2012, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Belva Martin, (202) 512-4841 or martinb@gao.gov. In addition to the contact named above, John Neumann, Assistant Director; Lisa Gardner; Desiree Cunningham; Jungjin Park; Marie Ahearn; Roxanna Sun; Robert Swierczek; and Hai Tran made key contributions to this report.

The U.S. government controls the export of sensitive defense and dual-use items (those having both military and commercial uses). The five agencies primarily responsible for export control enforcement (the Departments of Commerce, Homeland Security (DHS), Justice, State, and the Treasury) conduct inspections and investigations and can levy punitive actions against violators. A challenging aspect of export control enforcement is the detection of illicit transshipments: the transfer of items from their place of origin through an intermediary country to an unauthorized destination, such as Iran. In 2010, the President announced reforms to the U.S. export control system to address weaknesses found by GAO and others. GAO was asked to address how the export control enforcement agencies allocate resources, as well as the challenges they face and the potential impact of export control reform on enforcement activities. GAO reviewed documents and met with enforcement agency officials as well as with U.S. and foreign government and company officials in Hong Kong, Singapore, and the United Arab Emirates, which have a high volume of trade and have been identified as potential hubs for illicit transshipments.

Agencies use a risk-based approach, including workload and threat assessment data, to allocate resources, but most do not fully track the resources used for export control enforcement activities. Because their missions are broader than export controls, agencies can use staff resources for other activities based on need, making it difficult to track resources used solely for export control enforcement.
Only Commerce’s Office of Export Enforcement allocates its resources exclusively to export control enforcement, as that is its primary mission. Other agencies, such as State and the Treasury, have relatively few export control enforcement staff to track. While several agencies acknowledge the need to better track export enforcement resources and have taken steps to do so, they do not know the full extent of their use of these resources and do not use this information in resource allocation decisions. In some cities, agencies are informally leveraging export enforcement resources through voluntarily created local task forces that bring together enforcement resources to work collectively on export control cases.

Enforcement agencies face several challenges in investigating illicit transshipments, both domestically and overseas, which potentially reduce the effectiveness of enforcement activities and limit the identification and investigation of illicit transshipments. These include:

License Determination Delays. License determinations (which confirm whether an item is controlled and requires a license, and thereby help confirm whether an export control violation has occurred) are often not timely, potentially hindering investigations and prosecutions.

Limited Secure Communications and Cleared Staff. Investigators have limited access to secure communications and to staff with high-level security clearances in several domestic field offices, limiting investigators’ ability to share timely and important information.

Lack of Trend Data on Illicit Transshipments. While there is a good exchange of intelligence between enforcement agencies and the intelligence community (used to seize shipments and take other actions against export control violators), officials noted that no formal process or means existed for these groups to collectively quantify and identify statistical trends and patterns relating to information on illicit transshipments.
Lack of Effectiveness Measures Unique to the Complexity of Export Controls. Investigative agencies lack measures of effectiveness that fully reflect the complexity and qualitative benefits of export control cases.

Some of these challenges may be addressed by ongoing export control reform initiatives, but reform presents both opportunities and challenges. Revising the control list could simplify the license determination process, but it could also result in the need for increased enforcement activity overseas to validate the recipients of items, as fewer items may require U.S. government approval in advance of shipment. Because most staff located overseas have other agency and mission-related priorities, their availability may be limited. The newly created national Export Enforcement Coordination Center is intended to help agencies coordinate their export control enforcement efforts as well as share intelligence and law enforcement information related to these efforts. However, it is unclear whether the center will address all of the challenges GAO found, as detailed plans for its operations are under development.

GAO recommends that Commerce, DHS, Justice, and State take steps individually and with other agencies through the national Export Enforcement Coordination Center to better manage export control enforcement resources and improve the license determination process. Agencies agreed with GAO’s recommendations.
This section describes the factors affecting recent LNG activities in the United States, the liquefaction process, DOE’s responsibilities for authorizing export applications, FERC’s responsibilities for authorizing export facilities, and the positions of those supporting or opposing the export of LNG.

According to the Congressional Research Service, in the early 2000s, natural gas production in the United States was declining as energy demand was increasing, and, as recently as the mid-to-late 2000s, the United States was projected to be a growing natural gas importer. In addition to four onshore import terminals that were already operational, natural gas companies built five onshore LNG import facilities in the latter half of the 2000s to meet the expected need for natural gas imports. However, technology enhancements improved the extraction of natural gas from shale formations and resulted in a dramatic increase in domestic natural gas production. These technology enhancements allow companies to extract natural gas from shale formations that were previously considered inaccessible because traditional techniques did not yield sufficient amounts for economically viable production. According to Energy Information Administration (EIA) data, between 2007 and 2013, domestic natural gas withdrawals increased by 22 percent, driven primarily by increased withdrawals from shale formations. According to EIA, increases in natural gas supplies generally cause prices to drop. Specifically, between 2007 and 2013, the price of natural gas in the United States decreased by nearly 50 percent. As the price of natural gas in the United States declined, prices in Europe and Asia remained considerably higher. In July 2014, FERC estimated that prices of LNG imported to Europe and Asia during August 2014 would be about 100 and 250 percent higher, respectively, than prices in the United States. These price differences have motivated U.S. companies to apply to export natural gas.
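The price premiums FERC estimated can be illustrated with simple arithmetic. In the sketch below, the 100 percent (Europe) and 250 percent (Asia) premiums come from the report; the $4.00 per million Btu U.S. base price is a hypothetical value chosen only for illustration, not a figure from the report.

```python
# Illustrative arithmetic only. The 100% and 250% premiums are from the
# report's July 2014 FERC estimate; the $4.00/MMBtu U.S. base price is a
# hypothetical example value.
def implied_price(us_price, premium_pct):
    """Return the foreign price implied by a percentage premium over the U.S. price."""
    return us_price * (1 + premium_pct / 100)

us_price = 4.00                         # hypothetical U.S. price, $/MMBtu
europe = implied_price(us_price, 100)   # about 100% higher than U.S.
asia = implied_price(us_price, 250)     # about 250% higher than U.S.
print(f"Europe: ${europe:.2f}/MMBtu, Asia: ${asia:.2f}/MMBtu")
```

Under these assumptions, a $4.00 U.S. price implies roughly $8.00 in Europe and $14.00 in Asia, which is the kind of spread that motivated export applications.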
The majority of U.S. trade in natural gas is by pipeline with Canada and Mexico; however, over long distances separated by water, natural gas is generally converted to LNG and transported by specialized tanker ship. To convert natural gas to LNG, companies pretreat the natural gas to remove components that would freeze during the liquefaction process and contaminate the LNG. After the gas is pretreated, it is processed through a complex system called a liquefaction train that cools the natural gas to -260 degrees Fahrenheit, converting it to a liquid state. This process reduces the volume of the gas by 600 times. Once liquefied, the natural gas is stored in large tanks until it is offloaded to a ship for transport. Once the ship reaches its destination, the LNG is offloaded to tanks for storage or converted back to natural gas for distribution by pipeline. An accompanying figure in the report illustrates some of the common components of an LNG export facility.

Under Section 3 of the NGA, the import or export of LNG and the construction or expansion of LNG facilities require authorization from DOE. In 1984, DOE delegated the responsibility to approve or deny applications for LNG facilities to FERC. Under Section 3, an authorization is to be granted unless DOE finds that approving the export or import is inconsistent with the public interest. According to DOE, Section 3(a) of the NGA creates a rebuttable presumption that a proposed export of natural gas is in the public interest—that is, it places the burden on those opposing an application to demonstrate that an export is inconsistent with the public interest. The NGA also authorizes DOE to attach terms and conditions necessary to protect the public interest. DOE evaluates the public interest under Section 3 and can conduct studies or other reviews to support its public interest determination.
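The 600-to-1 volume reduction cited above is what makes tanker transport practical. A minimal sketch of the arithmetic follows; the 600:1 ratio is from the report, while the tanker cargo volume is a hypothetical example value, not a figure from the report.

```python
# The 600:1 liquefaction ratio is from the report; the tanker cargo
# volume below is a hypothetical example value.
LIQUEFACTION_RATIO = 600  # gas volume shrinks ~600x when cooled to -260 F

def gas_equivalent(lng_volume_m3):
    """Gaseous natural gas volume represented by a given volume of LNG."""
    return lng_volume_m3 * LIQUEFACTION_RATIO

cargo_m3 = 150_000  # hypothetical LNG tanker cargo, cubic meters
print(f"{cargo_m3:,} m^3 of LNG holds {gas_equivalent(cargo_m3):,} m^3 of gas")
```

Under these assumptions, a single 150,000 cubic meter cargo represents 90 million cubic meters of natural gas, which is why liquefaction, rather than compression alone, is used for long ocean routes.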
In the Energy Policy Act of 1992, Congress amended the NGA to require DOE to use a different standard for the review of applications for export to countries with FTAs with the United States (FTA countries). Specifically, under Section 3(c) of the NGA, DOE must treat applications to export LNG to FTA countries as consistent with the public interest, and DOE is to approve these applications without modification or delay. These FTA applications therefore do not require the same public interest review as non-FTA applications. DOE started to receive applications to export LNG in 2010 and, since then, it has approved 37 of 42 applications to export LNG to FTA countries. During this same period, DOE approved 9 (3 final and 6 conditional) of 35 applications to export LNG to non-FTA countries. Most major importers of LNG are non-FTA countries such as Japan and India, among others. As previously mentioned, this report discusses DOE’s process to review applications to export to non-FTA countries. In keeping with its obligation to authorize LNG facility siting and construction under the NGA, FERC reviews applications to construct and operate LNG export facilities. FERC’s review is considered a federal action and subject to the National Environmental Policy Act (NEPA). NEPA requires federal agencies to assess the projected effects of major federal actions that significantly affect the environment. Prior to the NEPA review, the law requires applicants to communicate with FERC for a minimum of 6 months—known as pre-filing—before submitting an application. FERC acts as the lead agency for the environmental review required by NEPA, prepares the NEPA environmental documentation, and coordinates and sets the schedule for all federal authorizations. The outcome of this review is an environmental document, also called the NEPA document, which provides the commissioners with staff’s assessment of the environmental impacts from facility construction and operation. 
DOE and FERC consider comments from the public during the application review process, and these comments reflect a range of perspectives on the potential benefits or harm from exports. Proponents maintain that LNG exports are consistent with U.S. free trade policies and will provide an economic boon for the United States, resulting in increased employment and an improved trade balance, among other things. They assert that the increased availability of natural gas resources will prevent a significant increase in natural gas prices. Opponents have expressed numerous environmental and economic concerns about LNG exports. For example, some opponents have expressed concern that exports will increase hydraulic fracturing and its associated environmental effects, as well as increase greenhouse gas emissions from the production and consumption of natural gas. Other opponents have expressed concern that exports will increase domestic natural gas prices, hurting the public and the growing industrial and manufacturing sectors that are sensitive to natural gas prices. Opponents have also stated that the primary beneficiaries of LNG exports will be a small segment of society involved in natural gas development and trade, and that most segments of society will lose economically. Evaluating whether exports of LNG to non-FTA countries are consistent with the public interest is beyond the scope of this report.

Since 2010, DOE has granted final approval to 3 applications and conditional approval to 6 others. DOE considers a range of factors to determine whether approving an export application is in the public interest. As of mid-September 2014, DOE has granted 3 final approvals for applications to export LNG, including the Sabine Pass application in 2012 and the Cameron LNG and Carib Energy applications in September 2014. Sabine Pass is the only LNG export facility currently under construction in the United States and is expected to begin operations in late 2015.
In August 2011, after DOE conditionally approved exports from Sabine Pass, DOE commissioned a study of the cumulative effects of additional LNG exports on the economy and the public interest. DOE did not approve any conditional applications during the 16-month period of the study. The study was completed in December 2012. Since then, DOE has conditionally approved 7 applications, including the Cameron LNG application, to which it granted final approval in September 2014 (see fig. 2). DOE also approved the Carib Energy application in September 2014. DOE’s export approvals, as of late August 2014, amount to 10.56 billion cubic feet of natural gas per day in the form of LNG; for comparison, Qatar, the world’s largest exporter of LNG, exported about 10 billion cubic feet per day in 2012.

According to DOE, when determining whether approval of an application is in the public interest, DOE focuses on (1) the domestic need for natural gas, (2) whether the proposed export threatens the security of domestic natural gas supplies, and (3) whether the arrangement is consistent with DOE’s policy of promoting market competition, along with other factors bearing on the public interest, such as environmental concerns. In passing the NGA, Congress did not define “public interest”; however, in 1984, DOE developed policy guidelines establishing criteria that the agency uses to evaluate applications for natural gas imports. The guidelines stipulate that, among other things, the market—not the government—should determine the price and other contract terms of imported natural gas. In 1999, DOE began applying these guidelines to natural gas exports. DOE’s review of export applications is not a standardized process, according to agency officials; rather, it is a case-by-case deliberation in which each application is considered separately from others. DOE’s review process begins when an applicant submits documentation to DOE requesting permission to export LNG.
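The scale comparison made earlier in this section is simple to check. Both daily-volume figures below (10.56 billion cubic feet per day approved by DOE, and Qatar's roughly 10 billion cubic feet per day of 2012 exports) are taken from the report text; the code is only an illustration of the arithmetic.

```python
# Both daily-volume figures are from the report text.
us_approved_bcfd = 10.56  # DOE-approved export volume, Bcf/day, late Aug 2014
qatar_2012_bcfd = 10.0    # Qatar's 2012 exports, Bcf/day (world's largest)

# Approved U.S. export capacity modestly exceeds Qatar's actual 2012 exports.
ratio = us_approved_bcfd / qatar_2012_bcfd
print(f"Approved U.S. volume is {ratio:.3f}x Qatar's 2012 exports")
```

The point of the comparison is that approvals, if fully built out and used, would put the United States at roughly the scale of the world's largest LNG exporter.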
DOE examines applications one at a time, and it issues a notice of application in the Federal Register to invite persons interested in the application to comment, protest, or intervene. Applicants are then given an opportunity to respond to comments. DOE’s internal review includes an examination of the application and analysis of the public interest using public comments and applicant responses, the criteria outlined in its policy guidelines, the NGA, DOE’s study of the effects of additional LNG exports, and past DOE authorizations. As discussed above, the NGA authorizes DOE to attach terms and conditions necessary to protect the public interest. To further inform its public interest review, DOE commissioned the study of the potential effects of additional exports on the economy. Since the study was released in December 2012, DOE has used it to support the public interest review in each of its application approval documents, including referencing the study’s conclusion that LNG exports would have a net positive effect on the economy. After considering the evidence, DOE issues an order denying the application or granting the application on condition of a satisfactory completion of the NEPA review by FERC. DOE includes the reasoning behind its decision in each order. DOE may also modify the request in an order, such as by limiting the approved export amount or duration. Once DOE conditionally approves an application, it does not grant a final approval until it has reviewed FERC’s NEPA document and reconsidered its public interest determination in light of relevant environmental information. Under NEPA, DOE must give appropriate consideration to the environmental effects of its decisions; FERC’s NEPA document provides the basis for this consideration.

In a June 2014 notice, DOE proposed to change this process by eliminating its practice of issuing conditional orders. 79 Fed. Reg. 32261 (June 4, 2014). The change would also supersede the precedence order. According to the DOE notice, DOE could still choose to implement the policy of issuing conditional orders at a later date.
According to DOE officials, this change would allow them to use agency resources more efficiently because they would conduct a single review of each application instead of separate reviews for conditional and final approvals. In addition, the proposal would allow projects that are more commercially advanced to be reviewed by DOE once FERC has issued a NEPA document.

Since 2010, FERC has approved 3 facility applications, including 2 in 2014, and is currently reviewing 14 applications. FERC’s reviews of LNG export facility applications are a multiyear analysis of the potential environmental and safety effects of a facility that involves other federal, state, and local agencies. FERC approved applications to construct and operate the Sabine Pass LNG export facility in April 2012, the Cameron facility in June 2014, and the Freeport facility in July 2014. As of late August 2014, FERC was reviewing 14 applications (see fig. 3). FERC has issued three final NEPA documents in 2014, including for the Cameron and Freeport facilities, and expects to complete one more by the end of 2014. FERC officials said that they could not discuss when the Commission would act on these facility applications. As shown above, FERC’s review of applications to construct LNG export facilities can take 2 to 3 years or more. Reviews are lengthy because of the complexity of the facilities and the number of permits and reviews required by federal and state law. For example, applicants must model the effects of LNG spills from pipes and storage tanks on areas around the facility under a variety of scenarios. One of the applicants we spoke with said that the number of variables involved in modeling a single scenario could require up to a week of computer processing. FERC’s review process is technically complex and includes the following three phases.

Pre-filing.
According to FERC officials, the pre-filing phase is intended to allow applicants to communicate freely with FERC staff and stakeholders to identify and resolve issues before the applicant formally files an application with FERC. Under Commission regulations issued pursuant to the Energy Policy Act of 2005, applicants are required to pre-file with FERC a minimum of 6 months before formally filing. The pre-filing phase can vary significantly depending on project specifics; the Freeport and Lake Charles applications were in the pre-filing phase for over 19 months, while the Cameron application was in the pre-filing phase for about 7 months. FERC officials said that the duration of each phase can vary depending on the site-specific characteristics of the proposed facility and the responsiveness of the applicant to requests for information from FERC. The pre-filing period also involves public outreach by the applicant and FERC, and FERC allows public comments during this period. An applicant completes the pre-filing period when it has submitted the required documentation to FERC and formally filed. This documentation includes a series of 13 resource reports that consist of, among other things, detailed information on project engineering and design, air and water quality, and fish and wildlife, as well as a description of the anticipated environmental effects of the project and proposed mitigation measures. One applicant told us that the resource reports it submitted to FERC consisted of over 12,000 pages.

Application review.

The application review phase includes FERC’s review of the application and development of the environmental document required by NEPA. FERC officials told us that they start the review phase after an applicant has successfully completed the pre-filing process and submits an application.
FERC reviews, among other things, facility engineering plans and safety systems identified by the applicant; environmental effects from the construction and operation of the facility; and potential alternatives to the proposed project. FERC develops a NEPA document with input from relevant agencies that elect to participate, called cooperating agencies, as well as other stakeholders. FERC officials told us that, depending on the location of the proposed facility and the amount of construction, FERC prepares either an environmental impact statement (EIS) or an environmental assessment (EA). FERC will prepare an EA if it believes the review will find no significant impact on the environment from the project. For example, FERC prepared an EA for the Sabine Pass facility because the proposed facility was within the footprint of an existing LNG import facility and previously the subject of an EIS. FERC officials told us that the agency generally prepares an EIS for proposed facilities that would extend beyond the footprint of an existing import facility. After an EIS or EA is drafted, FERC solicits comments from federal agencies and the public on the document. FERC reviews agency and public comments and integrates those into a final EIS or EA, as necessary. The final EIS or EA will recommend any environmental and safety mitigation measures to be completed during various stages of the project. FERC staff submits the final NEPA document and other staff analyses to FERC commissioners for consideration. FERC commissioners consider the entire record of the proceeding, including the NEPA document, to determine whether to approve a project.

Post-authorization.

The post-authorization phase includes FERC oversight of plant construction and operations. After FERC approves a project but before an applicant can start construction, the applicant must develop a plan describing how it will meet any conditions and mitigation measures identified in FERC’s approval.
FERC oversees construction and ensures that these conditions are met. The Coast Guard and DOT also oversee construction to ensure compliance with their respective regulations. FERC conducts compliance and site inspections during construction at least every 8 weeks. Following construction, the applicant must receive written authorizations from the Commission to begin operations at the facility. Once the facility is operational, FERC conducts annual inspections and requires semiannual status reports from the facility operator.

As the lead agency responsible for the environmental and safety review of LNG export facilities under NEPA, FERC works with federal, state, and local agencies to develop the NEPA document. In some cases, such as with the Corps and DOE, agencies will adopt and use the NEPA document in issuing their respective permits related to the export facility. In addition, FERC regulations require applicants to consult with the appropriate federal, state, and local agencies to ensure that all environmental effects are identified. FERC ensures that the applicant obtains the appropriate federal permits or consultations with these agencies. Major federal participants in FERC’s LNG facility review include the following:

Coast Guard. The Coast Guard requires applicants to assess the effects of a new facility on a bordering waterway. The applicant provides the assessment to the Coast Guard for validation and review before filing its FERC application, and the Coast Guard advises FERC on the suitability of the waterway for the LNG marine traffic associated with the facility. The Coast Guard and DOT also assist FERC’s review of the safety and security of the facility.

PHMSA. PHMSA is an agency within DOT responsible for establishing national policy relating to pipeline safety and hazardous material transportation, including the authority to establish and enforce safety standards for onshore LNG facilities.
To assist FERC’s assessment of whether a facility would affect public safety, FERC regulations require applicants to show that their facility design would comply with PHMSA regulations for hazardous liquids vapor dispersion and fires. Applicants submit models of vapor dispersion to FERC, and FERC consults with PHMSA to ensure that the models comply with PHMSA regulations.

The Corps. Under section 404 of the Clean Water Act, operations that discharge dredged or fill material into U.S. waters are required to obtain a permit from the Corps. Discharges under this permit must have a state certification to ensure the discharge meets water quality standards. In addition, under section 10 of the Rivers and Harbors Act of 1899, the Corps has regulatory authority to oversee construction activities within the navigable waters of the United States, and applicants may be required to obtain a permit from the Corps.

Environmental Protection Agency (EPA). Applicants may be required under the Clean Air Act (CAA) to receive air permits for the construction and operation of LNG facilities. State environmental agencies generally issue these permits, but EPA can issue the permits if a state is not authorized to issue permits, or under other limited circumstances. EPA also comments on the FERC draft and final EIS, as required by the CAA.

Applicants may also be required by law to consult with these and other federal agencies, such as the National Oceanic and Atmospheric Administration and the Fish and Wildlife Service, to ensure their applications comply with federal laws such as the Endangered Species Act, the Migratory Bird Treaty Act, the Magnuson-Stevens Fishery Conservation and Management Act, and the Fish and Wildlife Coordination Act. In addition to federal permits and consultations, applicants may also be required to obtain other permits under state and local law.
Because of the wide variety of projects, locations, and state and local laws, permitting requirements vary by project. The applicant is responsible for identifying the necessary permits and consultations and reporting these to FERC as part of the pre-filing process. In addition to issuing most air permits and water quality certifications, state and local agencies have other permitting and consultation responsibilities, such as consulting with applicants to ensure compliance with the Coastal Zone Management Act and the National Historic Preservation Act.

We provided a draft of this product to FERC and DOE for their review and comment. DOE and FERC provided technical comments, which we incorporated throughout the report as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 5 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Chairman of FERC, the Secretary of Energy, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II.

For the purposes of this report, GAO developed table 1 below so that a single name can be used to refer to related applications to the Federal Energy Regulatory Commission (FERC) and the Department of Energy (DOE). Table 1 lists (1) the names of applicants that submitted requests to FERC to construct liquefied natural gas (LNG) export facilities, (2) the names of applicants that submitted requests to DOE to export LNG from those facilities, and (3) the name GAO used to refer to these applications.
In some cases, multiple companies filed jointly for one application.

In addition to the individual named above, Christine Kehr (Assistant Director), Cheryl Harris, and David Messman made key contributions to this report. Important contributions were also made by Mark Braza, Michael Kendix, Alison O’Neill, Dan Royer, and Barbara Timmerman.

Technological advances in hydraulic fracturing and horizontal drilling have resulted in a dramatic increase in the amount of natural gas that can be produced domestically. DOE is responsible for reviewing applications to export LNG—natural gas cooled to a liquid state for transport—and, under the Natural Gas Act, must approve an application unless it finds that approval is not consistent with the public interest. Since 2010, DOE has received 35 applications to export LNG that must address the public interest question. In addition, under NEPA, FERC is required to assess how LNG export facilities may affect the environment and is responsible for granting approval to build and operate export facilities. Since 2010, FERC has received 17 applications to construct export facilities. GAO was asked to report on the federal process for reviewing applications to export LNG. This report describes (1) the status of applications to export LNG and DOE's process to review them and (2) the status of applications to build LNG export facilities and FERC's process to review them. GAO reviewed laws, regulations, and guidance; examined export approvals; visited LNG facilities; and interviewed federal and state agency officials and industry representatives, including LNG export permit applicants. GAO is not making any recommendations in this report.
Since 2010, of the 35 applications it has received that require a public interest review, the Department of Energy (DOE) has approved 3 applications to export liquefied natural gas (LNG), and 6 applications are conditionally approved, with final approval contingent on the Federal Energy Regulatory Commission's (FERC) issuance of a satisfactory environmental review of the export facility. DOE considers a range of factors to determine whether each application is in the public interest. After the first application was conditionally approved in 2011, DOE commissioned a study to help it determine whether additional LNG exports were in the public interest. Since the 16-month study was published in December 2012, DOE has issued 7 conditional approvals (one of which became final) and 1 other final approval (see fig. below). In August 2014, DOE suspended its practice of issuing conditional approvals; instead, DOE will review applications after FERC completes its environmental review.

(Figure: DOE LNG Export Application Status)

Since 2010, FERC has approved 3 LNG export facilities for construction and operation, including 2 facilities in 2014, and is reviewing 14 applications (see fig. below). FERC's review process is, among other things, designed to fulfill its responsibilities under the National Environmental Policy Act (NEPA). Before submitting an application to FERC, applicants must enter an initial stage called pre-filing to identify and resolve potential issues during the earliest stages of a project. Of the 14 applications, 5 are in the pre-filing stage at FERC and not shown in the figure below. FERC conducts an environmental and safety review with input from other federal, state, and local agencies.

(Figure: FERC LNG Export Facility Application Status)
IQA consists of two major elements. The first required OMB, by the end of fiscal year 2001, to develop and issue guidelines that provide policy and procedural guidance for federal agencies to use for “ensuring and maximizing quality, objectivity, utility, and integrity of information, including statistical information,” that they disseminate. The second required federal agencies covered by the Paperwork Reduction Act to develop their own IQA guidelines by the end of fiscal year 2002, establish administrative mechanisms allowing “affected persons” to seek and obtain correction of information maintained and disseminated by the agencies, and report periodically to the Director of OMB on the number and nature of IQA complaints and how the agencies handled them. IQA builds on previous federal efforts to improve the quality of information, including OMB Circular A-130 and the Paperwork Reduction Act of 1980, as amended. For example, two of the purposes of the Paperwork Reduction Act were to “improve quality and use of federal information … and provide for the dissemination of public information … in a manner that promotes the utility of the information to the public and makes effective use of information technology.” IQA requires, among other things, that executive branch agencies manage their information resources to “improve the integrity, quality, and utility of information to all users within and outside an agency.” OIRA, which develops and oversees the implementation of governmentwide policies in the areas of information technology, privacy, and statistics, had responsibility for developing the governmentwide IQA guidelines and helping agencies to meet the act’s requirement that they develop their own guidelines.
In an October 2002 memorandum describing the implementation of IQA guidelines, OIRA’s then administrator stated that he considered the IQA guidelines a continuation of the executive branch’s decades-long focus on improving the quality of the information federal agencies collect and disseminate. The memorandum added that agencies’ implementation “of the Information Quality Law represented the first time that the executive branch has developed a governmentwide set of information quality guidelines, including agency-specific guidelines tailored to each agency’s unique programs and information.” Agencies’ guidelines, which were to follow OMB’s model, were to include administrative mechanisms that allow “affected parties”—as defined by the agencies—to request correction of information they believe to be incorrect. (The information resources management requirement quoted above is codified at 44 U.S.C. § 3506(b)(1)(C).) No hearings or debates were held, and no committee reports were filed, before IQA was enacted as part of the Treasury and General Government Appropriations Act for Fiscal Year 2001. OMB set up a framework for federal agencies to follow in implementing IQA, including providing assistance and direction to agencies in developing agency IQA guidelines and requiring them to post IQA information on their Web sites. However, we were not able to locate any IQA information on the Web sites of about half of the independent agencies that we examined, nor could we find Federal Register notices about IQA guidelines for them. According to OMB officials and OIRA’s then administrator, OIRA concentrated its communication and other outreach efforts on cabinet-level and regulatory agencies. In written comments on a draft of our report, OIRA noted that in working with agencies to develop and implement information quality measures, it will consider the resources needed for, and the potential benefits of, such measures. Further, in a number of cases where IQA information was posted online, locating the information was difficult.
Agency IQA officials with whom we met noted that their IQA correction mechanism is a formal process and one of a number of correction mechanisms available to the public for having information errors corrected. OMB set up a framework for agencies to follow in implementing IQA and provided assistance and direction to agencies in developing their guidelines. As required by IQA, OMB issued the basic set of governmentwide IQA guidelines that agencies used as the basis for developing their own guidelines. These guidelines explained what agencies were to do to help ensure the development and public dissemination of quality information. In developing these guidelines, OIRA espoused three underlying principles that agencies were to reflect in their guidelines: (1) the guidelines are to apply to a wide variety of government information dissemination activities that may vary in importance and scope; (2) agencies are to meet basic information quality standards, the guidelines noting that the more important the information, “the higher the quality standards to which it should be held,” but that “agencies should weigh the costs … and the benefits of higher information quality in the development of information”; and (3) agencies are to apply the guidelines in “a common-sense and workable manner,” meaning that agency guidelines are not to “impose unnecessary administrative burdens that would inhibit the agencies from continuing to take advantage of the Internet and other technologies to disseminate information that can be of great benefit and value to the public.” The guidelines, in elaborating on this last principle, explained that “OMB encourages agencies to incorporate the standards and procedures required by these guidelines into their existing … administrative practices rather than create new and potentially duplicative or contradictory processes.” The guidelines also noted that they were written to provide agencies with flexibility as they developed their own guidelines.
Moreover, the guidelines defined four key concepts related to the dissemination of information—quality, objectivity, utility, and integrity—and described how quality was the outcome of the other three components. These guidelines further explained that agencies were to mirror these principles and actions in establishing their own guidelines and to include an administrative mechanism through which users who find mistakes in an agency’s publicly disseminated data or information can petition for correction. This mechanism was to include an appeals process, which allows a petitioner to request that an agency reconsider its initial decision about the correction request. The guidelines’ wording about the administrative correction mechanism allowed agencies to avoid duplicating the public comment process required by the rulemaking procedures under the Administrative Procedure Act, in which interested persons are given the opportunity to comment on proposed rules. In addition to writing the governmentwide IQA guidelines, OIRA took other steps to help agencies implement the principles and standards of IQA. As part of helping agencies develop their guidelines, OIRA offered assistance, including outreach such as workshops on drafting guidelines, and reviewed the agencies’ guidelines. IQA officials from a number of agencies, including the Departments of Defense and Justice, told us they considered this assistance beneficial. OIRA officials also issued memorandums to clarify how agencies were to satisfy the law and otherwise implement IQA, including requiring agencies to post IQA guidelines and related information on their Web sites. Further, OIRA put in place the mechanism for agencies to provide OMB with their annual IQA reports on their implementation of IQA, the number of IQA requests and appeals, and their status.
According to OIRA staff and officials and agency memorandums, OIRA monitored IQA correction requests received by agencies and assisted them in developing their responses. Agency officials told us that OMB’s input consisted of comments that ranged from editorial to significant and primarily involved IQA requests pertaining to substantive issues. For example, agency officials and OMB staff explained that OMB at times asked for more detailed explanations, including references to other relevant information, in agency responses to correction requests. According to these officials, OMB’s review did not substantially change the agencies’ ultimate decisions. Likewise, when we examined nine specific IQA requests from four agencies, we found no indication that OMB’s involvement substantially changed the agencies’ responses. As described in figure 1, agencies covered by IQA were to have their guidelines and the correction and appeals mechanism in place by the start of fiscal year 2003 (October 1, 2002). The figure also shows that in April 2004, OMB, in response to a mandate, reported to Congress on the first year—fiscal year 2003—of the implementation of the act. That report included information about the characteristics of the correction requests as well as the sources of the requests, and commented on a number of common perceptions and concerns about the act. In December 2005, on its own initiative, OMB updated this information and included it in a chapter of its report to Congress on the costs and benefits of federal regulations. In that report, OMB provided information on the implementation of IQA in fiscal year 2004 and compared fiscal year 2003 and 2004 IQA information. According to OMB and OIRA staff and officials and OIRA’s then administrator, OIRA concentrated its efforts to implement IQA on cabinet-level and regulatory agencies.
In addition to working with the cabinet agencies to create IQA guidelines, OIRA staff stated they also focused their attention on regulatory agencies and commissions, including EPA. OIRA did not clarify for many independent agencies—especially smaller, nonregulatory ones—whether the law applied to them, nor did it generally follow up with them to help them meet the act’s provisions. By the fiscal year 2002 deadline, 14 of the 15 cabinet-level agencies had guidelines in place (see table 1). Further, following the flurry of activity to help agencies develop their IQA guidelines by October 1, 2002, OIRA shifted its emphasis from guideline development to helping agencies that already had guidelines address IQA correction requests. According to OIRA staff, since November 2002 OIRA has not promulgated additional guidance to agencies regarding the development of IQA guidelines. Only one cabinet-level agency, DHS, the newest and one of the largest federal agencies, has no department-level IQA guidelines covering its 22 agencies, which issue a wide array of information used by the public. Because DHS was not created until January 2003—after IQA was enacted and IQA deadlines had passed—OMB began working with DHS officials to develop department-level guidelines after the other cabinet-level and independent agencies had their guidelines in place, according to OMB’s April 2004 report to Congress. As of March 2006, however, DHS did not have its IQA guidelines in place, and officials did not have a deadline for establishing them. Also, while 5 DHS component agencies had IQA guidelines before they became part of DHS, the guidelines of 4 of the 5—the Coast Guard, Customs and Border Protection, FEMA, and the Secret Service—are still linked to their previous parent departments or otherwise have not been updated by DHS.
For example, the IQA guidelines for the Coast Guard, which was previously part of the Department of Transportation (DOT), instructed information users submitting IQA requests to file via DOT’s Docket Management System, the administrative mechanism that DOT directs the public to use to file correction requests. Additionally, FEMA has not updated its guidelines since becoming part of DHS. DHS officials told us that the component agencies may update their guidelines after DHS has its departmentwide guidelines in place. Until that occurs, it is unclear what appeals process the public would follow and how DHS agencies will make final decisions about IQA correction requests. Moreover, when we checked the Web sites of 91 independent agencies, we did not find IQA guidelines posted on the Web sites of 44 of those agencies. (See app. II for the list of independent agencies and the status of their guidelines at the end of May 2006.) Neither the Web sites of these 44 commissions, agencies, and other independent entities nor OMB’s Web site of agencies’ IQA guidelines contained any IQA guidelines, IQA reports, or other mention of IQA. We also could not find these agencies’ Federal Register notices announcing the establishment of their IQA guidelines, although OMB required these notices. Also, OIRA staff did not have copies of the guidelines and said that they had focused their attention on cabinet agencies and regulatory agencies. These 44 agencies represented a broad spectrum of entities—including fact-finding agencies, such as the U.S. Civil Rights Commission; research organizations, such as the Smithsonian Institution; and others, such as the U.S. Trade and Development Agency—that produce a wide range of publicly disseminated information.
In commenting on this report, the acting OIRA administrator noted that OIRA will take into account the resources that would be needed and the potential benefits that would be realized in working with agencies “to develop and implement information quality measures.” Even when agencies posted IQA information on their Web sites as OMB required, such information was hard to access, making it difficult for information users to know whether agencies have IQA guidelines or how to request correction of agency information. As part of the governmentwide IQA guidelines, OIRA required agencies to post their draft agency-specific IQA guidelines online by September 30, 2002, and to inform the public about them and solicit comments. However, we found it difficult to locate IQA information on agency Web sites. In addition to the difficulties of trying to find whether the independent agencies’ Web sites contained IQA guidelines, we had problems finding IQA guidelines on the Web sites of the 14 cabinet-level and 5 independent agencies that we knew had those guidelines. Of these 19 cabinet-level and independent agencies with IQA guidelines that we reviewed, only 4—the Departments of Agriculture, Commerce, Energy, and the Interior—provided a direct “information quality” link on their home pages, making IQA information relatively easy for the public to find. In the case of the 15 other agencies, we found that accessing IQA information on their Web sites was difficult because these agencies provided no discernible link to IQA information on their home pages; provided access to their guidelines and other information through “contact us,” “policies,” or other less-than-obvious links, such as “resources”; or required multiple searches using various terms related to IQA, as was the case with the Department of Defense and the Department of State.
Although OIRA directed agencies to post IQA information online, OIRA’s guidance is not specific about how agencies should provide access to online IQA information. Moreover, agency IQA officials told us that OMB did not provide guidance about where to place IQA information on their Web sites or what kind of access—or transparency—to provide. IQA officials from a number of agencies stated that access to their Web-based IQA information was not “user-friendly” and said they were working to make IQA information more transparent and easily accessible. OMB is aware of the need to improve the public’s access to IQA information. In its April 2004 report to Congress, OIRA acknowledged the need for agencies to improve the transparency of IQA information and recommended that agencies include on their public Web sites IQA correction requests, appeals, and agency responses to them, as well as the agencies’ annual IQA reports to OMB. OMB and OIRA subsequently issued additional directives to facilitate the public’s ability to access government information and the process to request correction of erroneous public information. For example, in August 2004, responding to “inconsistent practices regarding the public availability of correspondence regarding information quality requests,” OIRA’s administrator issued a memorandum instructing each agency to post its IQA documents online by December 1, 2004. From fiscal year 2003 to fiscal year 2004, three agencies shifted simple requests out of their IQA processes, leaving IQA to be used primarily for substantive requests—those dealing with underlying scientific, environmental, or other complex information—although the number of substantive requests itself declined slightly, from 42 to 38. The total number of all IQA requests dropped from over 24,600 in fiscal year 2003 to 62 in fiscal year 2004. The overwhelming cause of this decline was that in fiscal year 2004 FEMA no longer classified requests to correct flood insurance rate maps as IQA requests or addressed them through IQA.
The decline in the number of IQA requests does not indicate that there was a corresponding decrease in agency workloads. In fiscal year 2003, agencies reported having received over 24,600 IQA correction requests, with FEMA’s 24,433 requests accounting for over 99 percent of the year’s total. FEMA’s requests were all related to flood insurance rate maps. Eighteen other agencies accounted for the balance of the year’s requests (183), 54 of which resulted in changes in information, including clarifying language. In fiscal year 2004, FEMA, with OMB’s approval, no longer classified flood insurance rate map correction requests as IQA requests. Instead, FEMA addressed flood insurance rate map correction requests by using a correction process it had implemented prior to the enactment of IQA. Largely as a result of this change and a similar change by two other agencies—the Department of Labor’s Occupational Safety and Health Administration (OSHA) and DOT’s Federal Motor Carrier Safety Administration—in fiscal year 2004, 15 agencies reported a total of 62 IQA correction requests to OMB. Of these, 26 requests resulted in changes. As shown in table 2, from fiscal year 2003 to fiscal year 2004, the number of substantive requests declined from 42 to 38. Table 2 also shows that during fiscal years 2003 and 2004, over half of the substantive IQA correction requests originated from businesses, trade groups, or other profit-oriented organizations, and over one-quarter were generated by nonprofit or other advocacy organizations. (For a list of these requesters, see app. III.) Substantive requests generated by individual citizens declined from about 1 in 7 of substantive requests to about 1 in 10. Excluding FEMA flood insurance rate map correction requests, substantive requests in fiscal year 2004 represented a greater proportion of IQA correction requests than in fiscal year 2003.
Out of 183 non-FEMA requests in fiscal year 2003, 42—or almost one-fourth—were substantive in nature. Addressing these substantive requests required considerably more time and staff resources than simple or administrative requests. OMB and agency officials considered the other 141 requests—over three-fourths—to be of a simple or administrative nature—for example, requests to correct errors in photo captions, personal information, or Internet addresses. Agencies were able to correct these simple or administrative requests quickly—correcting 17 requests took 7 or fewer days from the date the agencies received them. In fiscal year 2004, of 62 total IQA requests, 38 requests—almost two-thirds—were considered to be substantive. Table 3 shows the 80 substantive requests for fiscal years 2003 and 2004 by category of petitioner, agency, and status of requests, as of May 2006. One reason that substantive requests in fiscal year 2004 represented an increased percentage of total IQA correction requests compared with fiscal year 2003 is that in fiscal year 2004 some agencies decided to exclude simple or administrative errors from IQA correction mechanisms. Specifically, according to agency IQA documents and OMB’s December 2005 report, in fiscal year 2004, FEMA, the Department of Justice, the Federal Motor Carrier Safety Administration, and OSHA no longer classified and addressed most simple or administrative types of errors as IQA correction requests. As a result, the majority of the correction requests that remained to be processed through IQA were substantive requests.
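The shares cited in this and the preceding paragraphs follow directly from the reported counts. As a quick cross-check, the short script below recomputes them; all figures come from the report itself, and the variable names are only illustrative.

```python
# Recompute the proportions the report cites; every count below is taken
# from the report's fiscal year 2003 and 2004 figures.
fy2003_fema = 24_433        # FEMA flood insurance rate map requests, FY 2003
fy2003_other = 183          # requests reported by the 18 other agencies, FY 2003
fy2003_total = fy2003_fema + fy2003_other
fy2003_substantive = 42     # substantive non-FEMA requests, FY 2003
fy2004_total = 62           # all IQA requests, FY 2004
fy2004_substantive = 38     # substantive requests, FY 2004
appeals = 39                # appeals of the 80 substantive requests, FY 2003-2004

# "over 99 percent of the year's total" came from FEMA
print(f"FEMA share of FY 2003 requests: {fy2003_fema / fy2003_total:.1%}")

# "almost one-fourth" of the 183 non-FEMA FY 2003 requests were substantive
print(f"Substantive share, FY 2003 (non-FEMA): {fy2003_substantive / fy2003_other:.1%}")

# "almost two-thirds" of FY 2004 requests were substantive
print(f"Substantive share, FY 2004: {fy2004_substantive / fy2004_total:.1%}")

# "almost half" of the 80 substantive requests were appealed
total_substantive = fy2003_substantive + fy2004_substantive
print(f"Appeal rate for substantive requests: {appeals / total_substantive:.1%}")
```

Running the script confirms the rounded characterizations in the text: FEMA's share exceeds 99 percent, 42 of 183 is just under one-fourth, 38 of 62 is just under two-thirds, and 39 of 80 is just under half.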
For example, in fiscal year 2004, the Department of Health and Human Services’ (HHS) National Institutes of Health received a request related to information about smokeless tobacco; EPA received a request challenging information related to the water conservation benefits of water utility billing systems of multifamily housing; and the Department of the Interior’s Fish and Wildlife Service received a request that challenged information used to protect the Florida panther. We also found that no one agency dominated or accounted for the majority of fiscal year 2004 requests. In fact, in fiscal year 2004 the distribution of requests was more broadly spread across agencies than in fiscal year 2003, with EPA and the National Archives and Records Administration (NARA) each reporting 12 correction requests, and HHS reporting 9 requests to OMB. A few agencies did not experience a decrease in the total number of IQA requests because they did not shift simple requests away from IQA or otherwise change how they processed such requests during the 2-year period. For example, according to OMB and NARA IQA documents, NARA’s IQA requests—8 in fiscal 2003 and 12 in fiscal 2004—continued to be simple in nature and came primarily from individuals in both years. For the same 2 years, EPA’s 25 requests and HHS’s 19 requests were nearly all substantive and mainly came from businesses or profit-oriented organizations as well as nonprofits or advocacy groups. In fiscal years 2003 and 2004, the simpler and more administrative the initial request, the more likely an agency was to correct the information without appeal. For example, during the 2-year period, NARA corrected or clarified information for 16 of the 20 IQA correction requests it received, which were all considered to be simple in nature. Conversely, the more significant the correction request, the lower the likelihood of a change. 
HHS, for example, addressed 19 IQA requests that were substantive but changed information for only 5 based on the initial request or an appeal. Regardless of the complexity of the request, agency IQA documents showed that agencies addressed all requests filed during the 2-year period. Substantive requests were less likely to result in an initial information change but more likely to be appealed than simple or administrative requests. Few petitioners appealed agency decisions regarding simple or administrative requests. None of 131 “simple or administrative” fiscal year 2003 IQA requests from the Departments of Transportation, Labor, and the Treasury and NARA was appealed. By comparison, of the 80 substantive requests over the 2-year period, petitioners appealed 39 (almost half) of the agencies’ decisions. Of the 39 requests that were appealed, 25 were denied and 8 appeals resulted in information changes. Table 4 shows the outcome or status of the appeals filed during fiscal years 2003 and 2004, as of the end of March 2006. Two of the 39 appeals still have outcomes pending after more than 2 years, demonstrating that although the number of appeals may be considered small, the impact on agency operations may be significant, depending on the complexity of the specific issue. For example, in table 4, the EPA appeal pending—filed by the U.S. Chamber of Commerce in April 2005—affects 16 EPA databases that deal with such issues as wastewater treatment and the bioaccumulation of organic chemicals. This case has been ongoing for over 2 years and could have effects on assessments regarding human health risks, other environmental impacts, and cleanup decisions. Also listed in table 4 is another IQA appeal, filed in October 2003 by a private individual. The initial request for correction was filed in January 2003 before DOT’s Federal Aviation Administration (FAA), challenging the analytical basis for its “age 60 rule” that forces air carrier pilots out of service at age 60.
FAA upheld its “age 60 rule” in September 2003, but the complainant filed an appeal in October 2003 and filed additional amendments thereafter. The request was still pending at the time we completed our study, more than 3-½ years after the initial IQA request was made and almost 3 years after the appeal. As for the source of appeals, businesses, trade groups, and other profit-oriented organizations filed more appeals than other types of organizations or individuals. Businesses and profit-oriented organizations accounted for 25 of the 39 appeals of IQA requests filed during fiscal years 2003 and 2004. Of these 25 appeals, 4 resulted in changes. Appeals from advocacy/nonprofit groups resulted in 1 change from 5 appeals. Appeals from private citizens resulted in 3 changes from 7 appeals. The most appeals—25, or almost two-thirds of them—were filed with EPA, HHS, and the Department of the Interior. Those agencies also received nearly two-thirds of the requests that were classified as substantive. The impact of IQA on agencies could not be determined because agencies and OMB do not have mechanisms in place to track the effects of implementing IQA. Agencies and OMB do not capture IQA workloads or cost data, nor do they track the impact of IQA requests or resulting information changes. However, evidence indicates that in at least some cases, addressing IQA requests and appeals can take agencies 2 years or longer and requires a wide range of staff, particularly if IQA correction requests center on substantive matters. More specifically, none of the agencies we visited had information about the actual workload, the number of staff days, or other costs, with one exception. Agency IQA officials told us they do not collect such data.
They explained that their agencies did not capture specific workload or cost data related to establishing IQA guidelines, nor do they track workload or cost data involved in responding to IQA requests or have mechanisms to measure any impact IQA information changes have on operations or the quality of information. Officials at two agencies—the National Aeronautics and Space Administration and the Department of the Interior’s Fish and Wildlife Service—considered developing systems to track IQA costs but did not. Fish and Wildlife Service officials told us they decided against implementing an IQA cost tracking system because of the declining number of requests they have received since fiscal years 2003 and 2004 and the high cost and administrative complexities of setting up such a system. Additionally, IQA officials told us that addressing IQA requests is considered to be part of their agencies’ day-to-day business, and because of the multifaceted nature of some requests, allocating time and resources to one specific issue or linking work exclusively to IQA requests would be difficult. For example, Fish and Wildlife Service officials stated that when agency biologists work on IQA requests, they are also frequently working on broad biological, environmental, and related issues that go beyond a given request and relate to other agency work, so it would be difficult to allocate the biologists’ time among various codes. In their view, selecting a specific code would be somewhat arbitrary, and time or other codes would not necessarily accurately reflect the cross-cutting nature of the biologists’ work. Moreover, according to agency officials and OMB staff, neither the agencies nor OMB have mechanisms in place to track the effects of implementing the law. Agency IQA officials and OIRA staff and officials told us that administering IQA has not been overly burdensome and that it has not adversely affected agencies’ overall operations to date. 
Agency IQA officials told us they gave IQA responsibilities to various staff within their agencies—generally in offices already responsible for information-related issues—and that no staff are dedicated exclusively to administering IQA. For example, most agencies have folded responsibilities for IQA, including setting up guidelines, into the office of the chief information officer or their public affairs unit. In addition, although agencies track the status of IQA correction requests, they do not track changes resulting from IQA requests or appeals. Although comprehensive IQA-related cost or resource data are lacking, evidence suggests that certain program staff or units involved in creating IQA guidelines, including the correction mechanism, and in addressing IQA correction requests have seen their workloads increase without any corresponding increase in resources. For example, officials at the Fish and Wildlife Service, HHS’s National Institutes of Health, the Department of Commerce’s National Oceanic and Atmospheric Administration, and the Department of Defense’s Army Corps of Engineers estimated that the costs of addressing IQA requests are “many thousands of dollars” because of the number of high-salary professional staff, such as biologists, toxicologists, engineers, and managers, who review and respond to substantive requests and appeals and the extensive time involved. According to agency IQA officials and OMB staff, agencies did not receive funds for IQA, and the act did not specify any funds for implementing IQA. Moreover, our analysis of IQA requests shows that agencies have taken from 1 month to more than 1 year to produce a final decision on substantive IQA requests and appeals, while 2 appeals made during fiscal years 2003 and 2004 are still ongoing after 2 years or longer. However, no data exist showing the resources devoted to those appeals over that period.
The following IQA requests illustrate the length of time it can take to address an IQA correction, regardless of the final outcome. On March 10, 2004, a group of trade associations and organizations primarily representing the residential and commercial properties sector submitted an IQA request to EPA challenging the accuracy of an EPA statement that water allocation (submetering) billing systems in apartment buildings and other multifamily housing did not encourage water conservation. This statement was in a Federal Register notice regarding the applicability of the Safe Drinking Water Act to submetered properties. The group did not consider the statement to be correct regarding one type of allocation system in particular—Ratio Utility Billing Systems. According to EPA documents and officials, EPA’s response to the request and subsequent appeal involved a number of EPA staff, including senior executives, scientists, and others in the Office of Water and other headquarters units. The appeal itself was reviewed by a three-member panel of senior executives. EPA took a total of almost 5 months (146 days) to respond to the initial correction request, well over the 90-day goal stated in EPA’s IQA guidelines, and almost 11 months (323 days) more to decide on the appeal, over three times longer than the 90-day appeals goal in EPA’s guidelines, according to our analysis of EPA IQA requests. The nearly 15-month total response time was not unusual compared to other EPA processing times for IQA requests. The lengthy response time was in part due to EPA waiting for the completion of a related study—under way at the time of the correction request—before making a final decision about revising its submetering policy. On September 28, 2005, EPA ultimately denied the appeal and did not change its statement, citing the results of the study as not showing that Ratio Utility Billing Systems encouraged water conservation. 
On May 4, 2004, a nonprofit organization representing public sector employees involved in the environment and an individual federal employee submitted an IQA request to the Fish and Wildlife Service about alleged errors in agency documents, including the Multi-Species Recovery Plan and the draft Landscape Conservation Strategy, which are intended to protect the endangered Florida panther. The request and subsequent appeal involved previously identified errors in peer-reviewed research associated with the definition of panther habitat, as well as estimates of panther population and models used to determine strategies to help the panther species survive and recover in Florida. Fish and Wildlife Service staff who evaluated and responded to the initial request and to the appeal included senior executives, attorneys, field biologists, and other professional staff from a number of offices within headquarters, including the program offices, the Solicitor’s Office, the External Affairs Office, and the Director’s Office, as well as field offices in Vero Beach and Jacksonville, Florida, and the regional office in Atlanta. The administrative appeals panel for the correction request consisted of executives from Fish and Wildlife Service headquarters and its Northwest Regional Office and Interior’s U.S. Geological Survey. Although the Service responded to the initial request 2 months after its receipt, it took more than 7½ months (over 230 calendar days) to respond to the appeal. While the initial response was consistent with the Service’s 45-business-day response time stated in the guidelines, the appeal took over 6 months more than the guideline’s 15-business-day appeal time frame, according to our analysis. The nearly 300-day total response time was not unusual compared to other Fish and Wildlife Service processing times for IQA requests.
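Because the Service's guidelines state goals in business days while the elapsed times above are in calendar days, comparing the two requires a conversion. A minimal sketch follows; the 5-business-days-per-7-calendar-days factor is an assumption (holidays are ignored), and the day counts come from the analysis above.

```python
# Sketch: comparing the Fish and Wildlife Service's business-day goals with
# the calendar-day elapsed times reported above. Assumes 5 business days per
# 7 calendar days; holidays are ignored.

def business_to_calendar(business_days):
    return business_days * 7 / 5

initial_goal = business_to_calendar(45)  # ~63 calendar days, about 2 months
appeal_goal = business_to_calendar(15)   # ~21 calendar days
appeal_actual = 230                      # calendar days, per the analysis

overrun_months = (appeal_actual - appeal_goal) / (365.25 / 12)
print(f"appeal goal ~{appeal_goal:.0f} calendar days; "
      f"actual {appeal_actual} days; overrun ~{overrun_months:.1f} months")
```

On this approximation, the 45-business-day goal corresponds to roughly 2 months, consistent with the initial response, and the appeal exceeded its goal by nearly 7 months, consistent with the "over 6 months" figure above.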
On March 16, 2005, the Fish and Wildlife Service suspended the draft conservation strategy for the panther, corrected other key documents, posted notices on the regional and Vero Beach agency field office Web sites about these actions, and revised and published for public comment the panther section of the agency’s recovery plan. According to OMB staff and agency IQA officials, IQA correction requests have not adversely affected agency rulemaking procedures to date, partly because agencies handled most IQA requests related to rulemaking as public comments to proposed rules under the Administrative Procedure Act rather than as IQA requests. This approach, described in a number of agencies’ IQA guidelines, including EPA’s and the Department of Agriculture’s, was followed to avoid duplicating the rulemaking comment process and diverting resources away from it. It should be recognized, however, that IQA correction requests could affect rulemaking outside of the formal rulemaking process. For example, IQA correction requests that are filed before an agency’s formal rulemaking process begins could affect when or if an agency initiates a rulemaking. According to our analysis of IQA requests, annual IQA reports sent to OMB, and OMB’s own reports, and as later confirmed by OMB, five agencies received a total of 16 IQA requests related to rulemaking during fiscal years 2003 and 2004. These five agencies were EPA, the Fish and Wildlife Service, the Department of Agriculture’s Forest Service, the Department of the Treasury’s Alcohol and Tobacco Tax and Trade Bureau, and DOT. These 16 requests—touching on a diverse range of issues, such as air safety, alcohol, chemicals, and the environment—accounted for almost 1 in 5 substantive requests for the 2 years.
The Fish and Wildlife Service received the largest number of the 16 rulemaking-related IQA requests submitted during fiscal years 2003 and 2004: 7 of the Service’s 11 requests were related to proposed rulemaking, representing 44 percent of all rulemaking-related IQA requests received by all agencies during the 2 years. The agencies treated 10 of the 16 requests that they received during the 2-year period as comments to proposed rules rather than processing them as IQA requests, and the agencies so informed the IQA petitioner. For example, the Alcohol and Tobacco Tax and Trade Bureau considered an IQA request regarding flavored malt beverages and related proposals as comments to a proposed rule. The bureau informed the IQA petitioner that it was handling the request as a public comment under the procedures of the Administrative Procedure Act, rather than as an IQA correction request. Agencies similarly processed the other nine requests related to regulations or rulemaking. As for the six remaining IQA requests related to rulemaking or regulations, agencies rejected two, are developing responses to two, and were—as of the end of March 2006—awaiting additional information or court decisions before responding to the remaining two. OMB’s governmentwide IQA guidelines provide agencies with flexibility to develop their own guidelines to suit their missions. Having executive branch agencies use the Internet to inform the public about the existence of their IQA guidelines, including the IQA correction mechanism, is a step toward improving the transparency of how agencies develop and disseminate information and address information errors, as well as how information users can seek correction of information. Given the current status of IQA at agencies, OMB has additional opportunities to build on its efforts in implementing IQA.
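The shares cited in this and the preceding paragraph follow directly from the counts. A minimal arithmetic sketch, where all counts are taken from the text above:

```python
# Sketch verifying the reported shares of rulemaking-related IQA requests.
rulemaking_total = 16     # rulemaking-related requests, FY2003-FY2004
fws_rulemaking = 7        # Fish and Wildlife Service's rulemaking-related requests
substantive_total = 80    # all substantive requests over the 2 years
apa_comments = 10         # handled as public comments under the APA

print(f"FWS share of rulemaking-related requests: "
      f"{fws_rulemaking / rulemaking_total:.0%}")          # 44%
print(f"rulemaking-related share of substantive requests: "
      f"{rulemaking_total / substantive_total:.0%}")       # 20%, about 1 in 5
print(f"treated as APA comments: {apa_comments} of {rulemaking_total}")
```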
For example, it could draw from its experience of working with cabinet and many independent agencies to put additional agency-specific guidelines in place. Likewise, OMB could apply the knowledge from the lessons it and agencies have learned about posting accessible, user-oriented information on agency Web sites. By working with agencies and tapping into public input, OMB could enhance agencies’ and the public’s involvement in promoting high-quality agency information as well as increasing the public’s access to and confidence in that information, thereby helping to further the goal of disseminating quality information. To help ensure that all agencies covered by IQA fulfill their requirements, including implementing IQA guidelines and helping to promote easier public access to IQA information on agency Web sites, we recommend that the Director of OMB take the following three actions: work with DHS to help ensure it fulfills IQA requirements and set a deadline for doing so; identify other agencies that do not have IQA guidelines and work with them to develop and implement IQA requirements; and clarify guidance to agencies on improving the public’s access to online IQA information, including suggestions about clearer linkages to that information, where appropriate. In written comments on a draft of this report, the Acting Administrator of OMB's OIRA responded to our recommendations. Regarding our draft report's recommendation to OMB to work with DHS and other agencies not meeting IQA requirements, the Acting Administrator stated that OMB fully supports our recommendation that DHS develop IQA guidelines and that OMB would continue to work with DHS to that end. In our draft report, we had one recommendation for OMB to work with DHS and other agencies to develop IQA guidelines. Based on OIRA's comments, in our final report we made two separate recommendations regarding DHS and the other agencies developing IQA guidelines. 
Further, we believe that as OIRA continues to work with DHS—which has 22 component agencies—setting a deadline for DHS to implement IQA guidelines is important. As for the other agencies (many of which are small) without IQA guidelines, OIRA stated it would work with them as they develop and implement information quality measures. OIRA stated that in those efforts, it would consider the resources that would be needed and the potential benefits that would be achieved by having IQA guidelines in place. Regarding our recommendation about public access to online IQA information, OIRA noted it shares GAO's interest in improving public access and will continue to work with agencies to improve dissemination of IQA information. OIRA also provided separate technical corrections and suggestions to the draft of our report, which we have incorporated as appropriate. The written comments are reprinted in appendix IV. As agreed with your offices, unless you release its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time we will send copies to other interested congressional committees and the Acting Administrator of OIRA. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me on (202) 512-6806 or by e-mail at farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Robert Goldenkoff, Assistant Director; Ernie Hazera, Assistant Director; Andrea Levine; Keith Steck; and Margit Willems Whitaker. To assess the Office of Management and Budget’s (OMB) role in implementing the Information Quality Act (IQA), we reviewed OMB’s IQA documents, including memorandums sent to agencies, and interviewed Office of Information and Regulatory Affairs (OIRA) staff involved with IQA. 
In addition, we reviewed IQA documents—including guidelines, requests and appeals, agency decisions, and related documents—and interviewed IQA and other knowledgeable officials at the 17 federal agencies identified in table 5. While we reviewed IQA guidelines at all cabinet-level agencies, we conducted interviews at 5 independent agencies and 12 of the 15 federal cabinet agencies and at least one component of each, as shown in table 5. We selected these agencies to obtain a cross section of agencies that reflect the diverse range of government activities. We made our selection to cover a wide range of criteria, including the organization’s size (number of employees in fiscal year 2004); its mission (regulatory versus statistical, for example); and the nature of issues covered by the agency—such as the environment, health, and safety. We discussed with agency officials the development of their IQA guidelines, whether they had received requests for correction of information and how they addressed them, and what role OMB played in all of this. To further evaluate OMB’s role in the implementation of IQA, we reviewed OMB and agency IQA documents for all 15 cabinet agencies and the 5 independent agencies we contacted. These documents included online information, such as OMB memorandums and agency IQA guidelines, related IQA information, and OMB and agency IQA Web sites. Additionally, we reviewed the Web sites of 86 other independent agencies, including commissions, boards, and other entities, covered by the Paperwork Reduction Act to determine whether they had IQA guidelines online, but we did not survey them. Further, we reviewed the Federal Register for notices about these agencies’ IQA guidelines, as OMB required. We did not contact these 86 individual agencies or survey users of their Web sites, as this was beyond the scope of our review. 
Regarding the second objective of determining the number, type, and source of IQA requests, including who submitted them, for fiscal years 2003 and 2004, we contacted agency IQA officials and OMB staff and obtained relevant information from them. We also reviewed OIRA’s two reports to Congress to validate data collected through other sources. To the extent the information was available online, we reviewed IQA requests on agency Web sites. To supplement and verify the accuracy and completeness of this information, we interviewed agency and OMB IQA staff and officials. In addition, to categorize the sources of the requests by type of entity, such as business, trade group, or nonprofit advocacy organization, we relied on information from the sources and agency descriptions. When such information was contradictory or not available, we made our own determination of the entity type. Moreover, to determine the final status of IQA requests and any appeals, we reviewed related agency documents, including agency notification letters, and spoke with agency IQA officials about their status. We determined that OMB and agency data were sufficiently reliable for the purposes of this review. The results of our analysis differ from information in OMB’s two reports to Congress discussing IQA because of (1) differences between report information about IQA requests and information on agency Web sites and (2) minor report errors, including errors reported by agencies to OMB—such as IQA requests reported for calendar year 2003 instead of fiscal year 2003—that OMB repeated. In addition, we tracked the status of appeals to the end of March 2006 to provide current information, going beyond the end of fiscal year 2004, which is the date OMB used as the cutoff for appeal information in its December 2005 report.
Regarding the third objective of examining whether the implementation of IQA has adversely affected agencies’ overall operations in general and the rulemaking process in particular, we contacted agency IQA and other knowledgeable officials and OMB staff. We also attempted to determine the resources that OMB and agencies committed to implementing IQA by obtaining IQA cost and staff allocation data, but agency officials told us they do not track such information, although the Department of Labor had cost information on setting up a system to track the status of IQA requests. In addition, we reviewed the annual IQA reports submitted to OMB by the cabinet-level agencies and the 5 independent agencies with guidelines where we conducted interviews. Moreover, to better understand specific aspects of IQA requests and how agencies addressed them, as well as to illustrate specific points, we reviewed in detail selected IQA requests at four agencies—the Environmental Protection Agency, the Department of Health and Human Services’ National Institutes of Health, the Department of Agriculture’s Forest Service, and the Department of the Interior’s Fish and Wildlife Service. Because OMB was still developing its IQA peer review policies at the time of our review, we did not discuss with agency officials their plans for carrying out these future requirements. In addition, although agencies have other mechanisms to correct information, we evaluated only the IQA information correction mechanism. We conducted our work in Washington, D.C., from March 2005 through July 2006 in accordance with generally accepted government auditing standards.
[Appendix table: the 86 independent agencies, boards, commissions, and other entities covered by the Paperwork Reduction Act that GAO reviewed—from the Advisory Council on Historic Preservation through the Office of Personnel Management—with columns indicating whether each had IQA guidelines in place and posted on its Web site.]

[Appendix table: sources of IQA correction requests by agency for fiscal years 2003 and 2004, including businesses, trade groups, and nonprofit organizations such as the Center for Regulatory Effectiveness, Public Employees for Environmental Responsibility, the Competitive Enterprise Institute, and the National Association of Home Builders.]

The importance and widespread use of federal information makes its accuracy imperative.
The Information Quality Act (IQA) required that the Office of Management and Budget (OMB) issue guidelines to ensure the quality of information disseminated by federal agencies by fiscal year 2003. GAO was asked to (1) assess OMB's role in helping agencies implement IQA; (2) identify the number, type, and source of IQA correction requests agencies received; and (3) examine whether IQA has adversely affected agencies' overall operations and, in particular, rulemaking processes. In response, GAO interviewed OMB and agency officials and reviewed agency IQA guidelines, related documents, and Web sites. OMB issued governmentwide guidelines that were the basis for agencies' own IQA guidelines and required agencies to post guidelines and other IQA information to their Web sites. It also reviewed draft guidelines and undertook other efforts. OMB officials said that OMB primarily concentrated on cabinet-level and regulatory agencies, and 14 of the 15 cabinet-level agencies have guidelines. The Department of Homeland Security (DHS) does not have department-level guidelines covering its 22 component agencies. Also, although the Environmental Protection Agency and 4 other independent agencies posted IQA guidelines and other information to their Web sites, 44 of 86 additional independent agencies that GAO examined have not posted their guidelines and may not have them in place. As a result, users of information from these agencies may not know whether agencies have guidelines or know how to request correction of agency information. OMB also has not clarified guidance to agencies about posting IQA-related information, including guidelines, to make that information more accessible. Of the 19 cabinet and independent agencies with guidelines, 4 had "information quality" links on their home pages, but others' IQA information online was difficult to locate.
From fiscal year 2003 to fiscal year 2004, three agencies shifted to using IQA to address substantive requests (those dealing with the underlying scientific, environmental, or other complex information), which declined from 42 to 38. In fiscal year 2003, the Federal Emergency Management Agency and two other agencies used IQA to address flood insurance rate maps, Web site addresses, photo captions, and other simple or administrative matters. But, in fiscal year 2004, these agencies stopped classifying such requests as IQA requests and instead processed them using other correction mechanisms. As a result, the total number of all IQA requests dropped from over 24,000 in fiscal year 2003 to 62 in fiscal year 2004. Also, of the 80 substantive requests that agencies received during the 2-year period (over 50 percent of which came from businesses, trade groups, or other profit-oriented organizations), almost half (39) of the initial agency decisions were appealed, with 8 appeals resulting in changes. The impact of IQA on agencies' operations could not be determined because neither agencies nor OMB have mechanisms to determine the costs or impacts of IQA on agency operations. However, GAO analysis of requests shows that agencies can take from a month to more than 2 years to resolve IQA requests on substantive matters. According to agency IQA officials, IQA duties were added to existing staff responsibilities, and administering IQA requests has not been overly burdensome nor has it adversely affected agencies' operations, although there are no supporting data. But evidence suggests that certain program staff or units addressing IQA requests have seen their workloads increase without a related increase in resources. As for rulemaking, agencies addressed most of the 16 correction requests related to rulemaking as public comments under the Administrative Procedure Act rather than under IQA.
Congress authorized ITEF in December 2014 to provide assistance, including training and equipment, to military and other security forces of, or associated with, the government of Iraq, including Kurdish and Tribal Security Forces or other local security forces with a national security mission. As of December 31, 2016, DOD had obligated about $2.2 billion of the $2.3 billion Congress appropriated for ITEF in fiscal years 2015 and 2016 and had disbursed about $2 billion. See figure 1 for examples of equipment that DOD purchased with these funds, according to DOD documents. The process for providing ITEF-funded equipment to Iraq’s security forces generally falls into three phases: (1) acquisition and shipment, (2) staging in Kuwait and Iraq, and (3) transfer to the government of Iraq or the Kurdistan Regional Government (see fig. 2). Multiple DOD components are responsible for ensuring the visibility and accountability of ITEF-funded equipment throughout the ITEF equipping process up until U.S. personnel in Iraq transfer the equipment to vetted officials from the government of Iraq or the Kurdistan Regional Government. For example: In addition to maintaining SCIP, DSCA oversees program administration for the ITEF program and provides overall guidance to DOD components through its Security Assistance Management Manual (SAMM) and associated policy memos. CJTF-OIR manages the ITEF program for the U.S. Central Command (USCENTCOM) and maintains overall responsibility for providing ITEF assistance to Iraq’s security forces. DOD’s primary implementing agency, USASAC—supported by other DOD agencies—maintains overall responsibility for case development, execution, and closure. The Department of State’s OSC-I supports CJTF-OIR through each phase of the ITEF equipping process. It is also responsible for communicating ITEF program objectives to government of Iraq or Kurdistan Regional Government officials. 
The 1st TSC, in coordination with CJTF-OIR and OSC-I, receives, stages, and transports ITEF-funded equipment in Kuwait and Iraq and oversees the transfer of the equipment to vetted government of Iraq or Kurdistan Regional Government officials in Iraq. DOD generally administers ITEF-funded equipment purchases as individual building partner capacity cases within the U.S. government’s Foreign Military Sales (FMS) infrastructure. An individual case may have multiple requisitions or procurement actions—sometimes thousands. DOD assigns a unique case identification number to each case so that DOD entities can track the case throughout the equipping process. According to DSCA documents, SCIP is designed to provide end-to-end transit visibility, including the status of defense articles and services, of FMS and building partner capacity cases to designated U.S. government personnel and representatives of foreign countries. It consolidates case data, including transportation information, into one place so that customers and program managers can have readily available information on the status of their cases. The portal imports case-related data from other DOD data systems containing logistics and transportation information, and SCIP users can also report data directly in SCIP. It is accessible over the Internet to authorized users anywhere in the world. SCIP includes a variety of different features for tracking defense articles and services, including equipment, which are organized into 13 different groups. Two of these 13 groups are the Security Cooperation Management Suite (SCMS) group and the Case Execution group (see fig. 3). The SCMS group—SCIP’s management reporting system—provides program managers and participants for Iraq and other countries with customizable and ad hoc management reports on the status of FMS and building partner capacity cases. SCMS is populated with data from other groups within SCIP, DOD external data systems, and SCIP users.
The Case Execution group contains the Enhanced Freight Tracking System (EFTS), a tracking system within SCIP containing building partner capacity and FMS shipment information. DSCA designed EFTS to serve as the single, authoritative tracking system for FMS and building partner capacity shipments. EFTS supplements and pulls shipment information from external DOD data systems. DOD personnel can also report shipment information, including transfer dates of equipment to the foreign government, directly in EFTS. According to a DSCA official, EFTS data should be captured in SCMS. Figure 3 below shows the relationship between EFTS and SCMS, including how each is populated. DOD components do not ensure that SCIP consistently captures key transportation dates of equipment funded by ITEF during each of the three phases of the ITEF equipping process. According to the DSCA SAMM, DOD components should use SCIP to identify the status and track the transportation of all building partner capacity materiel, such as ITEF. Furthermore, USCENTCOM ordered the 1st TSC, in coordination with a USASAC program manager, to ensure that ITEF-funded equipment transfer information is properly recorded in SCIP. Our analysis of completed ITEF-funded requisitions in SCMS, SCIP’s management reporting system, found that SCMS captured about 11 percent of 2,264 key transportation dates in all three equipping phases. DOD officials said that SCMS is not capturing such dates because of potential interoperability and data reporting issues in SCIP and other DOD data systems. Although DOD officials in Kuwait stated that they had begun to report some ITEF-funded equipment transfer dates in SCIP, DOD officials and contractors have had difficulty locating these dates in SCIP. DOD also could not fully account for ITEF-funded equipment transferred to the government of Iraq or the Kurdistan Regional Government because of missing or incomplete transfer documentation. 
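The reporting relationships described above can be illustrated with a small, hypothetical model: shipment records reach EFTS either from external DOD systems or by direct entry, and an SCMS-style report is expected to reflect them, so a date reported nowhere surfaces as a gap in the management report. All class and field names below are illustrative assumptions, not SCIP's actual schema.

```python
# Hypothetical sketch of the EFTS-to-SCMS data flow described above.
# Names are illustrative only and do not reflect SCIP's actual schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Requisition:
    case_id: str                         # unique FMS/BPC case identifier
    transfer_date: Optional[str] = None  # reported in EFTS or imported

class EFTS:
    """Shipment tracking: data imported from external systems or entered directly."""
    def __init__(self):
        self.requisitions = {}

    def report_transfer(self, req_id, case_id, date):
        self.requisitions[req_id] = Requisition(case_id, date)

def scms_report(efts):
    """An SCMS-style management report populated from EFTS records."""
    return {req_id: r.transfer_date for req_id, r in efts.requisitions.items()}

efts = EFTS()
efts.report_transfer("REQ-001", "IQ-B-XYZ", "2016-12-20")  # date reported
efts.requisitions["REQ-002"] = Requisition("IQ-B-XYZ")     # date never reported

report = scms_report(efts)
missing = [req_id for req_id, date in report.items() if date is None]
print(f"transfer dates missing for: {missing}")  # the kind of gap GAO found
```

In the actual systems, an equivalent gap can arise either because an import from an external system fails or because no one reports the date at all—the two possibilities DSCA officials cited.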
Our analysis of 566 completed ITEF-funded equipment requisitions recorded in SCIP’s SCMS found that DOD components are not following the SAMM to consistently capture key transportation dates of ITEF-funded equipment in phase 1 of the ITEF equipping process in SCIP. For example, we found that only 256 of the 1,132 key transportation dates in phase 1 (about 23 percent) were captured in SCIP’s SCMS as of February 10, 2017. Specifically, 256 of the 566 requisitions included the date the equipment arrived at the last point of departure in the United States, and none of the 566 requisitions included the date the equipment was shipped from the United States to Kuwait or Iraq (see fig. 4). DSCA officials responsible for the management of SCIP’s SCMS attributed this lack of data to three potential issues related to interoperability in SCIP and external DOD data systems and data reporting in SCIP. First, SCMS may not be importing data correctly from other DOD data systems used by DOD components to track ITEF-funded equipment in phase 1. Second, SCMS may not be importing transportation data correctly from EFTS within SCIP as intended. Third, DOD components may not be reporting key transportation dates in EFTS or SCMS. For example, according to USASAC officials, USASAC does not report any ITEF-funded transportation dates in EFTS or SCMS because it relies on other DOD data systems for this information, which officials said should be captured in SCIP. While USASAC provides some visibility on the transportation of ITEF-funded equipment to Kuwait or Iraq by case in daily tracking reports it produces based on information from other DOD data systems, these reports do not provide end-to-end transit visibility of the equipment from acquisition to transfer to the government of Iraq or the Kurdistan Regional Government. We did not independently determine the root cause for why these key transportation dates were not consistently captured in SCMS.
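The phase-1 figure above and the overall 11 percent figure cited earlier are consistent with a simple tally. In the sketch below, the requisition and captured-date counts come from the analysis above; the assumption that phase 1 involves two key dates per requisition (arrival at the last point of departure and shipment overseas) and phases 2 and 3 one date each is inferred from the text.

```python
# Sketch reconstructing the SCMS capture-rate arithmetic. Counts come from
# the analysis above; the per-phase breakdown of expected dates is inferred.
requisitions = 566

# phase name -> (key dates expected, key dates captured in SCMS)
phases = {
    "phase 1 (acquisition and shipment)": (2 * requisitions, 256),
    "phase 2 (staging in Kuwait/Iraq)":   (requisitions, 0),
    "phase 3 (transfer to Iraq/KRG)":     (requisitions, 0),
}

for name, (expected, captured) in phases.items():
    print(f"{name}: {captured}/{expected} ({captured / expected:.0%})")

total_expected = sum(e for e, _ in phases.values())
total_captured = sum(c for _, c in phases.values())
print(f"overall: {total_captured}/{total_expected} "
      f"({total_captured / total_expected:.0%})")  # 256 of 2,264, about 11%
```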
By not ensuring that these key transportation dates in phase 1 are captured in SCIP’s SCMS, DOD components do not have readily available information to maintain visibility over and account for all ITEF-funded equipment. Our analysis of 566 completed ITEF-funded requisitions recorded in SCIP’s SCMS as of February 10, 2017, found that DOD components are not following the SAMM to capture the arrival dates of ITEF-funded equipment to Kuwait or Iraq in phase 2 of the ITEF equipping process in SCIP. Specifically, none of the 566 requisitions included the date the equipment arrived at U.S. staging facilities in Kuwait or Iraq (see fig. 5). DSCA officials responsible for managing SCIP’s SCMS said that they did not know why the arrival dates of ITEF-funded equipment were not being captured in SCMS and cited the same potential interoperability and data reporting issues for the lack of data in phase 2 as they did in phase 1. Also, USASAC officials responsible for overseeing the delivery of ITEF-funded equipment to Kuwait or Iraq said that they do not report the arrival dates of the equipment in SCIP, as they rely on other DOD data systems for this information, which they said should feed into SCIP. USASAC’s daily tracking reports contain some information on the arrival dates of ITEF-funded equipment that officials said they obtain from other DOD data systems, but this information is not captured in SCMS and therefore is not readily accessible to DOD program managers. USASAC officials said that they do not track ITEF-funded equipment beyond its arrival to Kuwait or Iraq as they consider shipment complete once it arrives in Kuwait or Iraq and the 1st TSC, in coordination with CJTF-OIR, receives the equipment.
According to 1st TSC officials responsible for receiving, storing, and transporting ITEF-funded equipment in Kuwait and Iraq, the 1st TSC does not report in SCIP and has no plans to report the arrival dates of ITEF-funded equipment to Kuwait or Iraq because it is not required to do so. These officials said that guidance in the SAMM does not assign specific responsibilities to the 1st TSC, and that the 1st TSC has not been directed by USCENTCOM to implement any policies or procedures outlined in the SAMM. 1st TSC officials said that they use their own internal spreadsheets to account for on-hand quantities of equipment at U.S. staging facilities in Kuwait and Iraq. In March 2017, 1st TSC officials said that they implemented the U.S. Army’s automated Global Combat Support System—a logistics and financial management system that does not feed into SCIP, according to 1st TSC officials—to account for the on-hand quantities of ITEF-funded equipment in Kuwait and Iraq. A September 2016 DOD Inspector General report found that DOD did not have accurate, up-to-date records on the quantity and location of ITEF-funded equipment in Kuwait and Iraq and lacked effective controls for maintaining visibility and accountability of ITEF-funded equipment in Kuwait and Iraq. The DOD Inspector General recommended that the 1st TSC use automated systems to account for and provide complete visibility of ITEF-funded equipment. We did not independently determine the root cause for why the arrival dates of ITEF-funded equipment to Kuwait or Iraq were not consistently captured in SCMS. Without up-to-date information in SCIP’s SCMS on the arrival dates of ITEF-funded equipment, DOD components will not have access to timely and relevant management information at a key stage of the ITEF equipping process.
Our analysis of 566 completed ITEF-funded requisitions recorded in SCIP’s SCMS as of February 10, 2017, found that DOD components had not consistently followed the SAMM or a USCENTCOM order to capture the transfer dates of ITEF-funded equipment to the government of Iraq or the Kurdistan Regional Government in phase 3 of the ITEF equipping process in SCIP. Specifically, none of the 566 requisitions included the transfer date of the equipment to the government of Iraq or the Kurdistan Regional Government (see fig. 6). Between August 2016 and April 2017, DOD took steps to report the transfer dates of some ITEF-funded equipment in EFTS as required by the DSCA SAMM; however, DOD officials and contractors have had difficulty locating these dates in EFTS because of a lack of clear procedures for reporting them. In August 2016, after we informed OSC-I officials of a reporting requirement in the DSCA SAMM, 1st TSC officials said that they began reporting in EFTS within SCIP the transfer dates of equipment previously transferred to the government of Iraq or the Kurdistan Regional Government. Soon thereafter, in October 2016, USCENTCOM issued an order requiring the 1st TSC, in coordination with a USASAC program manager, to ensure that ITEF-funded equipment transfer information is properly recorded in EFTS. According to 1st TSC officials, in late December 2016, the 1st TSC began reporting the transfer dates of any new equipment transfers in EFTS as they occurred, in addition to continuing to report the transfer dates of previously transferred equipment in EFTS. In February 2017, however, when we asked the DSCA contractor responsible for the management of EFTS to provide us with the transfer dates of ITEF-funded equipment that 1st TSC officials said they had reported in EFTS, the contractor could not locate the transfer dates.
The DSCA contractor said that EFTS does not contain a dedicated data field for capturing the transfer dates of building partner capacity materiel, including ITEF-funded equipment, and DSCA has not provided guidance on what data field should be used to capture these dates. As a result, the contractor did not know which data field the 1st TSC had used to report the transfer dates. In addition, our review of the 1st TSC’s written procedures for ensuring the accountability and transfer of ITEF-funded equipment found that they did not specify under which data field ITEF-funded equipment transfer dates should be reported. In April 2017, 1st TSC officials identified the data field in EFTS that they were using to report the transfer dates of ITEF-funded equipment and provided evidence that they had reported transfer dates for about 5,000 ITEF-funded equipment requisitions in EFTS as of March 2017. According to DSCA officials, SCMS should automatically capture all transfer dates of equipment reported in EFTS. DSCA officials responsible for the management of SCMS said that SCMS may not be importing the transfer dates from EFTS as intended because of interoperability issues with EFTS. By not capturing the transfer dates of ITEF-funded equipment in SCMS or EFTS, DOD components’ visibility over the amount of ITEF-funded equipment transferred to the government of Iraq is limited. Furthermore, DSCA officials said that SCIP users may need additional guidance for reporting all key transportation dates in SCIP. These officials said that they held a symposium in January 2017 to discuss general interoperability and reporting issues within SCIP and planned to provide additional guidance on the roles of DOD components for reporting data to EFTS and SCMS but did not specify a time frame for doing so. The 1st TSC cannot fully account for ITEF-funded equipment transferred to the government of Iraq or the Kurdistan Regional Government because of missing or incomplete transfer documentation.
1st TSC officials said that they are missing an unknown number of hand-completed U.S. transfer and receipt forms used to document the transfer of ITEF-funded equipment. According to the 1st TSC’s standard operating procedures for ensuring the accountability of ITEF-funded equipment, 1st TSC officials are required to complete a U.S. transfer and receipt form to document the transfer of ITEF-funded equipment to a government of Iraq or Kurdistan Regional Government official. The command developed these procedures in November 2015—about 6 months after ITEF-funded equipment began arriving in Kuwait and Iraq, according to 1st TSC officials—and updated the procedures in April 2016 and November 2016. In January 2017, 1st TSC officials said that, based on their review of the transfer documentation, they were missing the required U.S. transfer and receipt forms for some equipment transfers. 1st TSC officials said that they would not know the amount of equipment with missing transfer and receipt forms until they completed their analysis of the documents, which 1st TSC officials and a USASAC program manager located in Kuwait estimated could take until the summer of 2017. Moreover, we found that the majority of the U.S. transfer and receipt forms the 1st TSC had on hand as of April 2016 were not complete. We reviewed all of the U.S. transfer and receipt forms documenting ITEF-funded equipment transfers that the 1st TSC had on hand as of April 2016. Of the 284 U.S. transfer and receipt forms dated between March 2015 and April 2016 that we reviewed, almost all were signed by a government of Iraq or Kurdistan Regional Government official, but more than half did not contain the date of transfer of the equipment (see fig. 7). The 1st TSC also provided 48 internal memos dated between October 2015 and February 2016 from a 1st TSC official seeking to reconcile discrepancies he found in the documentation, such as missing serial numbers for weapons.
In one memo, the official said that the required U.S. transfer and receipt form documenting the transfer of ammunition was missing. 1st TSC officials acknowledged that they did not know whether these forms represented the total number of ITEF-funded equipment items transferred to the government of Iraq or the Kurdistan Regional Government as of April 2016. Without complete transfer documentation, 1st TSC officials cannot accurately determine how much ITEF-funded equipment has been transferred to the government of Iraq or the Kurdistan Regional Government, nor can they ensure that this equipment was transferred to the appropriate foreign official. In addition, we found that most of the transfer documentation lacked case identifier information, which would help ensure that DOD personnel are able to track ITEF-funded equipment throughout the equipping process. Of the 284 U.S. transfer and receipt forms we reviewed, only 95 contained unique case identifier information. The director of the 1st TSC’s equipping team said that the lack of case identifier information on the transfer documentation has significantly slowed his team’s progress in reporting the transfer dates of previously transferred ITEF-funded equipment in EFTS. As a result, the director said that he issued a verbal order in August 2016 requiring 1st TSC personnel to include case identifier information on the transfer and receipt forms documenting the transfer of equipment as well as on DOD orders to move ITEF-funded equipment within Kuwait and Iraq. He said that including the case identifier information would help ensure that 1st TSC personnel could link equipment items with their case information in SCIP. The 1st TSC’s standard operating procedures for ensuring the accountability of ITEF-funded equipment, however, do not include this requirement.
The Standards for Internal Control in the Federal Government require management to complete timely reviews of significant changes to an entity’s processes and procedures and to ensure that the entity’s policies and procedures achieve its objectives. 1st TSC officials said in March 2017 that, in late April 2017, they would begin updating their standard operating procedures to reflect their recent implementation of the U.S. Army’s Global Combat Support System for accounting for ITEF-funded equipment. 1st TSC personnel rotate in and out of Kuwait and Iraq about every 9 months, with the last rotation of 1st TSC personnel having occurred in December 2016, according to 1st TSC officials. Without accurate and up-to-date written procedures, new personnel may not be aware of the verbal order, increasing the risk that they will not follow it and limiting the 1st TSC’s ability to account for the equipment. ISIS continues to be a major threat to both Iraq and Syria and to U.S. interests in the region. The congressional appropriation of $2.3 billion for ITEF in fiscal years 2015 and 2016 has enabled DOD to provide critical equipment to Iraq’s security forces for their counter-ISIS efforts. However, DOD’s ability to maintain visibility and accountability over ITEF-funded equipment remains limited. DOD designed SCIP to help DOD components maintain end-to-end visibility of DOD equipment, including ITEF-funded equipment, but DOD components do not use SCIP as intended because of potential interoperability and data reporting issues within SCIP and other DOD data systems. In addition, missing and incomplete ITEF-funded equipment transfer documentation further affects DOD’s ability to maintain complete visibility and accountability over ITEF-funded equipment.
Since 1st TSC personnel rotate about every 9 months, it is essential that the 1st TSC maintain updated standard operating procedures that reflect significant changes to its processes for ensuring the accountability of ITEF-funded equipment, including documenting a verbal order that unique case identifiers be included on transfer documentation so that 1st TSC personnel are able to properly record ITEF-funded transfer dates in SCIP. Without timely and accurate transit information on the status of ITEF-funded equipment, DOD cannot ensure that the equipment has reached its intended destination, nor can DOD program managers conduct effective oversight of the ITEF program. To ensure that DOD program managers have the necessary information to maintain complete visibility and accountability of ITEF-funded equipment in SCIP, we recommend that the Secretary of Defense take the following four actions:
1. Identify the root causes, such as potential interoperability and data reporting issues within SCIP and other DOD data systems, for why DOD components are not ensuring that ITEF-funded equipment transportation dates are captured in SCIP.
2. Develop an action plan with associated milestones and time frames for addressing the root causes for why DOD components are not ensuring that ITEF-funded equipment transportation dates are captured in SCIP.
3. Develop written procedures that specify under which data field ITEF-funded equipment transfer dates should be captured in EFTS in SCIP.
4. Update the 1st TSC’s written standard operating procedures to include the 1st TSC commander’s verbal order requiring the inclusion of unique equipment case identifier information for ITEF-funded equipment on transfer documentation.
We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix II, DOD concurred with three recommendations and partially concurred with a fourth recommendation.
DOD also provided technical comments, which we incorporated as appropriate. DOD concurred with our first two recommendations, to identify why DOD components are not ensuring that ITEF-funded equipment transportation dates are captured in SCIP and to develop an action plan for addressing these issues. The department commented that it had begun identifying the root causes of the data reporting issues in SCIP and would provide GAO the reasons for these issues within 30 days of the issuance of GAO’s report. The department also commented that it would develop an action plan with a timeline to measure progress in addressing the root causes and would notify GAO when these were addressed. DOD also concurred with the recommendation that the 1st TSC update its written standard operating procedures to include a verbal order requiring the inclusion of unique equipment case identifier information for ITEF-funded equipment on transfer documentation. The department said that the 1st TSC planned to update its written procedures to include this verbal order by May 31, 2017. DOD partially concurred with our recommendation that DOD develop written procedures for reporting ITEF-funded equipment transfer dates in EFTS in SCIP. The department commented that the relevant organizations have most, if not all, of the written procedures that are necessary for reporting these dates in EFTS. However, the department said it would coordinate with all interested parties to ensure that the required written procedures exist and to update those documents if needed. We continue to believe that additional written procedures are needed and have modified the language of our recommendation to specify that DOD include in its written procedures the EFTS data field in which ITEF-funded equipment transfer dates should be captured.
Although the 1st TSC has written procedures on how personnel can upload some transfer information into SCIP, the procedures do not clearly state which data field in EFTS should be used to capture the transfer dates of ITEF-funded equipment to the government of Iraq or the Kurdistan Regional Government. As a result, DOD personnel and contractors have had difficulty locating these dates, which 1st TSC officials said they have uploaded to EFTS. Providing clear procedures on the data field to be used to capture ITEF-funded equipment transfer dates would help ensure that DOD personnel responsible for managing ITEF are able to locate these dates in EFTS as needed. We are sending copies of this report to the appropriate congressional committees and to the Secretary of Defense and the Secretary of State. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6991 or farbj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III. This review examines the extent to which the Department of Defense (DOD) maintains visibility and accountability over Iraq Train and Equip Fund (ITEF)-funded equipment from acquisition through transfer to the government of Iraq or the Kurdistan Regional Government. To examine the extent to which DOD maintains visibility and accountability of ITEF-funded equipment, we reviewed DOD guidance in the Defense Security Cooperation Agency’s (DSCA) Security Assistance Management Manual (SAMM), including a requirement to report the transfer dates of building partner capacity materiel in the Enhanced Freight Tracking System (EFTS) within the Security Cooperation Information Portal (SCIP) or to DSCA.
We interviewed DOD officials from DOD components responsible for managing or tracking ITEF-funded equipment through different phases of the equipping process to understand their processes for providing visibility over ITEF-funded equipment. These components included the Combined Joint Task Force-Operation Inherent Resolve (CJTF-OIR), U.S. Army Security Assistance Command (USASAC), U.S. Army TACOM Life Cycle Management Command, and the 1st Theater Sustainment Command (1st TSC). We reviewed USASAC, U.S. Army TACOM Life Cycle Management Command, and U.S. Army Joint Munitions Command’s status reports on ITEF-funded equipment. We also interviewed officials from the Department of State’s Office of Security Cooperation-Iraq to understand their role in providing visibility of ITEF-funded equipment. In addition, to determine the extent to which DOD components ensured that SCIP captured key transportation dates of ITEF-funded equipment, we analyzed data from SCIP’s Security Cooperation Management Suite (SCMS) by running two reports from SCMS on February 10, 2017. First, we ran a report to determine the total universe of cases and their corresponding requisitions for which ITEF funds had been obligated. We determined that this universe consisted of 13,674 ITEF-funded equipment requisitions. Second, we ran a report to determine how many of the 13,674 ITEF-funded equipment requisitions had all items and services delivered and performed. Specifically, we ran a report of cases and their corresponding requisitions that DOD had marked in SCMS as “supply/services complete” because DOD considered all of the items and services delivered and performed for these requisitions. Using this designation, we determined that 566 ITEF-funded equipment requisitions were marked as “supply/services complete.” In using the term requisitions, we mean lines of data entries in SCMS by case that contain a unique combination of requisition numbers and/or transportation control numbers.
In our analysis, we noted some requisition numbers that applied to multiple lines of entries as well as some transportation control numbers that applied to multiple lines of entries; however, we found no exact duplicates by entry. Our analysis focused on the completeness of these records, which is where we found the deficiencies noted in the body of this report. It was beyond the scope of this review to assess the accuracy of the requisition numbers, transportation control numbers, and any dates entered into SCMS. In addition, we were not able to determine the extent to which ITEF-funded equipment cases and their corresponding requisitions were properly marked as “supply/services complete” in SCMS. One reason why DOD marked only 566 of the 13,674 ITEF-funded equipment lines of requisitions as “supply/services complete” could be that DOD had one or more requisitions on a case that had not yet been delivered or performed, which prevented DOD from closing the case and marking all of the requisitions associated with the case as “supply/services complete.” Also, DOD may have had some ITEF-funded equipment cases and corresponding requisitions that should have been marked as “supply/services complete” but were not. Because DOD considered all of the items and services delivered and performed for the 566 ITEF-funded equipment requisitions marked as “supply/services complete” in SCMS, we determined that we could proceed with assessing the extent to which SCMS had recorded key transportation dates for these requisitions in each of the three phases of the ITEF equipping process. We selected data fields in SCMS for each requisition that would capture key transportation dates of ITEF-funded equipment in each of the three phases of the ITEF equipping process.
These included the following:
- the “Arrive Port of Embarkation” data field, to determine whether SCMS captured the arrival date of equipment at the last point of departure in the United States in phase 1 of the ITEF equipping process;
- the “Depart Port of Embarkation” data field, to determine whether SCMS captured the departure date for equipment shipped from the United States in phase 1 of the ITEF equipping process;
- the “Arrive Country” data field, to determine whether SCMS captured the arrival date of equipment in Kuwait or Iraq; and
- the “Customer Receipt” data field, to determine whether SCMS captured the transfer date of equipment to the government of Iraq or the Kurdistan Regional Government.
We verified that our selection and interpretation of these data fields were correct by reviewing DSCA-issued guidance on SCIP and consulting with DSCA officials. In addition, we probed the 1st TSC officials’ assertion that they had reported about 2,000 ITEF-funded equipment requisition transfer dates in SCIP between August 2016 and January 2017 by interviewing the contractor responsible for SCIP’s EFTS, in which these dates would have been reported, and by reviewing additional data the contractor provided. It was beyond the scope of this review to determine the extent to which the 13,674 ITEF-funded equipment requisitions had been correctly marked as complete for supply and services, as such an analysis would have required a reconciliation of SCIP computerized records against source documents and other supporting materials. However, we did determine that the 566 requisitions marked as complete for supply and services were lacking key information, which we reported. In the body of this report, we detail how the lack of complete key information means that these data cannot be used to maintain visibility and accountability of ITEF-funded equipment.
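The completeness check described above can be sketched in a few lines of code. This is a minimal illustration only, not the tool we used for the analysis: the dictionary keys mirror the four SCMS data fields named above, and the sample records are hypothetical.

```python
# Minimal sketch of a field-completeness check over requisition records.
# Field names mirror the four SCMS data fields discussed in the text;
# the sample records below are hypothetical and for illustration only.

DATE_FIELDS = [
    "Arrive Port of Embarkation",  # phase 1: arrival at last U.S. point of departure
    "Depart Port of Embarkation",  # phase 1: shipment from the United States
    "Arrive Country",              # phase 2: arrival in Kuwait or Iraq
    "Customer Receipt",            # phase 3: transfer to the recipient government
]

def completeness_by_field(requisitions):
    """Count how many requisitions captured each key transportation date."""
    counts = {field: 0 for field in DATE_FIELDS}
    for req in requisitions:
        for field in DATE_FIELDS:
            if req.get(field):  # a blank or missing value counts as not captured
                counts[field] += 1
    return counts

# Hypothetical sample: one requisition with only the phase 1 arrival date
# captured, and one with no dates captured at all.
sample = [
    {"Arrive Port of Embarkation": "2016-05-01",
     "Depart Port of Embarkation": "",
     "Arrive Country": "",
     "Customer Receipt": ""},
    {"Arrive Port of Embarkation": "",
     "Depart Port of Embarkation": "",
     "Arrive Country": "",
     "Customer Receipt": ""},
]
print(completeness_by_field(sample))
```

Run against the 566 requisition records exported from SCMS, a check of this kind yields the per-field counts reported in the body of this report (for example, 256 requisitions with an “Arrive Port of Embarkation” date and none with a “Customer Receipt” date).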
To determine the extent to which DOD accounts for ITEF-funded equipment transferred to the government of Iraq or the Kurdistan Regional Government, we reviewed 1st TSC transfer documentation. We requested all U.S. transfer and receipt forms that the 1st TSC used to document ITEF-funded equipment transfers to the government of Iraq or the Kurdistan Regional Government and were provided with 284 forms dated between March 2015 and April 2016. We reviewed these 284 forms to check whether they contained key information, such as signatures and unique case identifiers. When determining whether the forms contained transfer dates, we created a decision rule of counting only those dates that were legible because the purpose of this review was to assess DOD’s accountability over the equipment transferred. We were unable to determine whether we were provided with all the equipment transfer and receipt forms for this period because DOD does not maintain the information that would have allowed us to do so. Specifically, the data problems that we have noted in SCIP, in particular the problem of missing entries, prevented us from making that determination. We also reviewed the 1st TSC’s November 2015, April 2016, and November 2016 standard operating procedures for ensuring the accountability of ITEF-funded equipment as well as CJTF-OIR’s June 2016 standard operating procedures for the management of ITEF. In addition, we traveled to Kuwait and Iraq to interview DOD officials from the 1st TSC and CJTF-OIR to understand their roles, responsibilities, and processes for ensuring the accountability of ITEF-funded equipment. We conducted this performance audit from September 2016 to May 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Judy McCloskey (Assistant Director), Kira Self (Analyst in Charge), Ashley Alley, Martin De Alteriis, Lynn Cothern, Neil Doherty, Mattias Fenton, B. Patrick Hickey, and Jeff Isaacs made key contributions to this report. Iraq: Status of DOD Efforts to Train and Equip Iraq’s Security Forces. GAO-17-32C. Washington, D.C.: April 7, 2017. Combating Terrorism: U.S. Footprint Poses Challenges for the Advise and Assist Mission in Iraq. GAO-17-220C. Washington, D.C.: November 22, 2016. Iraq: State and DOD Need to Improve Documentation and Record Keeping for Vetting of Iraq’s Security Forces. GAO-16-658C. Washington, D.C.: September 30, 2016. Countering ISIS: DOD Should Develop Plans for Responding to Risks and for Using Stockpiled Equipment No Longer Intended for Syria Train and Equip Program. GAO-16-670C. Washington, D.C.: September 9, 2016. Defense Logistics: DOD Has Addressed Most Reporting Requirements and Continues to Refine Its Asset Visibility Strategy. GAO-16-88. Washington, D.C.: December 22, 2015. Yemen: DOD Should Improve Accuracy of Its Data on Congressional Clearance of Projects as it Reevaluates Counterterrorism Assistance. GAO-15-493. Washington, D.C.: April 28, 2015. High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015. Countering Overseas Threats: DOD and State Need to Address Gaps in Monitoring of Security Equipment Transferred to Lebanon. GAO-14-161. Washington, D.C.: February 26, 2014. High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 2013. Security Assistance: DOD’s Ongoing Reforms Address Some Challenges, but Additional Information is Needed to Further Enhance Program Management. GAO-13-84. Washington, D.C.: November 16, 2012. 
In 2014, Congress authorized the creation of ITEF to provide equipment and other assistance to Iraq's security forces, including the Kurdistan Regional Government forces, to counter the expansion of the Islamic State of Iraq and Syria. As of December 2016, DOD had disbursed about $2 billion of the $2.3 billion Congress appropriated for ITEF in fiscal years 2015 and 2016 to purchase, for example, personal protective equipment, weapons, and vehicles for these forces. DOD's web-based SCIP provides U.S. government personnel and others with transportation information on DOD equipment that is imported from other DOD data systems or reported by SCIP users. GAO was asked to review DOD's accountability of ITEF-funded equipment. This report assesses the extent to which DOD maintains visibility and accountability of ITEF-funded equipment from acquisition through transfer to the government of Iraq or the Kurdistan Regional Government. GAO analyzed DOD guidance, procedures, SCIP data, and transfer documentation and interviewed officials from DOD agencies with a role in the ITEF equipping process in the United States, Kuwait, and Iraq. The Department of Defense (DOD) maintains limited visibility and accountability over equipment funded by the Iraq Train and Equip Fund (ITEF). Specifically, DOD is not ensuring that the Security Cooperation Information Portal (SCIP) is consistently capturing key transportation dates of ITEF-funded equipment. DOD guidance states that DOD components should use SCIP to identify the status and track the transportation of all building partner capacity materiel, such as ITEF-funded equipment. DOD also issued an order in October 2016 requiring DOD components to ensure that equipment transfer dates are recorded in SCIP. The process for providing the equipment to Iraq's security forces generally falls into three phases: (1) acquisition and shipment, (2) staging in Kuwait and Iraq, and (3) transfer to the government of Iraq or the Kurdistan Regional Government.
However, for the 566 ITEF-funded requisitions marked as complete in SCIP's management reporting system as of February 2017, GAO found that the system captured one of two key transportation dates for 256 of the requisitions in phase 1, and none of the transportation dates for these requisitions in phase 2 or phase 3 (see figure). DOD officials attributed the lack of key transportation dates in SCIP's management reporting system to potential interoperability and data reporting issues in all three equipping phases. Interoperability issues. DOD officials said that SCIP's management reporting system may not be importing transportation data correctly from other DOD data systems or from another shipment tracking system feature in SCIP. Data reporting issues. DOD officials said they are not reporting the arrival dates of equipment to Kuwait or Iraq because they rely on other DOD data systems and are not required to do so. DOD officials said they have had difficulty ensuring that SCIP has captured equipment transfer dates. In addition, DOD cannot fully account for ITEF-funded equipment transfers because of missing or incomplete transfer documentation. Without timely and accurate transit information, DOD cannot ensure that the equipment has reached its intended destination, nor can program managers conduct effective oversight of ITEF-funded equipment. GAO is making four recommendations, including that DOD identify the root causes of why ITEF-funded equipment transportation dates are not being captured in SCIP and develop an action plan to address these issues. DOD generally concurred with GAO's recommendations and stated that it would develop a plan.
The Homeland Security Act of 2002 provides the basis for DHS responsibilities in the protection of the nation's critical infrastructure. The act assigns DHS responsibility for developing a comprehensive national plan for securing critical infrastructure and for recommending the measures necessary to protect the key resources and critical infrastructure of the United States in coordination with other agencies and in cooperation with state and local government agencies and authorities, the private sector, and other entities. Other legislation enacted over the last decade has produced major changes in the nation's approach to maritime security. Much of the federal framework for port security is contained in the Maritime Transportation Security Act of 2002 (MTSA). The MTSA establishes requirements for various layers of maritime security, including requiring a national security plan, area security plans, and facility and vessel security plans. DHS has placed some responsibility for this and other MTSA requirements with the Coast Guard. In October 2006, the Security and Accountability for Every Port Act of 2006 (SAFE Port Act) further refined the nation's port security framework, creating and codifying certain port security programs and initiatives. For example, the SAFE Port Act required the development of protocols for resumption of trade following a transportation security incident, as well as Salvage Response Plans. DHS emphasizes the importance of resilience through key documents like the NIPP, QHSR, and directives. As the lead federal agency for the Marine Transportation System, the Coast Guard is responsible for facilitating the recovery of the system following a significant transportation disruption and working with maritime stakeholders in the resumption of trade.
The Coast Guard is also the Sector-Specific Agency (SSA) for the Maritime subsector of the Transportation sector and coordinates the preparedness activities among the sector's partners to prevent, protect against, respond to, and recover from all hazards that could have a debilitating effect on homeland security, public health and safety, or economic well-being. IP is responsible for working with public and private sector critical infrastructure partners and leads the coordinated national effort to mitigate risk to the nation's critical infrastructure. IP also has the overall responsibility for coordinating implementation of the NIPP across 18 critical infrastructure sectors; overseeing the development of 18 Sector-Specific Plans; providing training and planning guidance to SSAs and owners and operators on protective measures to assist in enhancing the security of critical infrastructure within their control; and helping state, local, tribal, territorial, and private sector partners develop the capabilities to mitigate vulnerabilities and identifiable risks to their assets. IP's Protective Security Coordination Division provides programs and initiatives to enhance critical infrastructure protection and resilience and reduce risk associated with all-hazards incidents. To carry out these responsibilities, IP has deployed PSAs in 50 states and Puerto Rico, with deployment locations based on population density and major concentrations of critical infrastructure. One PSA duty is to coordinate and conduct voluntary assessment services to assist critical infrastructure owners and operators in reviewing and strengthening their security posture.
Specifically, PSAs coordinate and carry out various IP protective programs such as the Enhanced Critical Infrastructure Protection (ECIP) initiative, which is a voluntary program focused on forming or maintaining partnerships between DHS and critical infrastructure owners and operators of high-priority assets and systems, as well as other assets of significant value. The PSAs also coordinate and participate in Site Assistance Visit vulnerability assessments to identify security gaps and provide options for consideration to mitigate these identified gaps. These assessments are on-site, asset-specific, nonregulatory assessments conducted at the request of asset owners and operators. Port operations involve a complicated system of systems, which operates across multiple sectors. The port area consists of many assets that are interdependent with other sectors, such as power and water, to continue normal operations. For example, container terminals have large energy needs to operate the cranes that load and unload cargo. In most cases, backup generators cannot produce enough power to keep these cranes operational, so reliable energy production and transportation are vital to maintaining normal port operations. Similarly, refinery, chemical plant, cruise line, ferry, and other port operations also have high energy and water needs. In addition, many port operations rely heavily upon trucking and rail transportation to move personnel and cargo in and out of the port area. Furthermore, the availability of a functional labor force and information technology support—which may be located within or outside of a port area—is important for port stakeholders’ operations. Similarly, many businesses and communities rely on the port for their normal operations. Energy, food, and product shipments are vital to port operations, port stakeholders, and the broader community. 
Interruptions in the supply chain often have secondary and tertiary impacts that may not be immediately obvious to businesses and communities. Figure 1 illustrates some of the key stakeholders within a port and the importance of their interactions. Understanding the interdependencies among various port area stakeholders and other critical partners outside the port area is necessary to ensure and enhance a port's resilience. National high-level documents currently promote resilience as a key national goal. Specifically, two key White House documents emphasize resilience on a national level—Presidential Policy Directive 8 (PPD-8) and the National Strategy for Global Supply Chain Security. PPD-8 defines resilience as the ability to adapt to changing conditions and withstand and rapidly recover from disruption due to emergencies. The National Strategy for Global Supply Chain Security endorses building a layered defense, addressing threats early, and fostering a resilient system that can absorb and recover rapidly from unanticipated disruptions. Key federal entities, including DHS, are currently working to develop frameworks or other strategies for implementing the goals and objectives of these documents, which should provide greater insights into how they plan to strengthen national resilience. Since 2009, DHS has also emphasized the concept of resilience through two high-level documents—the NIPP and QHSR. The NIPP identifies resilience as a national objective for critical infrastructure protection and defines resilience as the ability to resist, absorb, recover from, or successfully adapt to adversity or a change in conditions. The QHSR identifies ensuring resilience to disaster as one of the nation's five homeland security missions. The QHSR describes resilience as fostering individual, community, and system robustness, adaptability, and capacity for rapid recovery.
According to DHS, resilience is one of the foundational elements for a comprehensive approach to homeland security; thus, its missions and programs designed to enhance national resilience span the department. Accordingly, DHS is currently developing a policy to bring a cohesive understanding of resilience to its components and establish resilience objectives. DHS took steps to foster departmentwide resilience initiatives by creating two internal entities—the Resilience Integration Team (RIT) and the Office of Resilience Policy (ORP). In April 2010, DHS formed RIT to develop new initiatives that support the overarching resilience mission set forth in the QHSR. To date, RIT has been the key DHS-wide working group charged with developing and disseminating resilience concepts. According to agency officials, RIT brings together subject matter experts from all components whose missions affect resilience in some manner for monthly meetings. DHS formed ORP in March 2012 to coordinate and promulgate resilience strategies throughout the department. In 2010, RIT officials surveyed components about how their activities addressed resilience in an attempt to gauge components' understanding of resilience, as discussed in the QHSR. According to RIT officials, component responses showed that component resilience actions were very diverse and represented stovepiped efforts that were still "works in progress." ORP officials told us that these differing approaches to implementing and identifying resilience efforts were part of the reason they saw a need for one DHS resilience policy. Specifically, ORP saw a need to establish a policy that provides component agencies with a single, consistent, departmentwide understanding of resilience; clarifies and consolidates the concepts from the four high-level guiding documents discussed above; and helps components understand how their activities address DHS's proposed resilience objectives.
The policy is currently in draft status, and ORP officials hope to have an approved policy in place later this year. Although DHS is developing a policy to establish a departmentwide resilience framework, DHS officials stated that they currently have no plans to develop an implementation strategy for DHS’s resilience policy. An implementation strategy that defines goals, objectives, and activities could help ensure that the policy is adopted consistently and in a timely manner by components, and that all components share common priorities and objectives. Additionally, an implementation strategy with specific milestones could help hold ORP and DHS components accountable for taking actions to address resilience objectives identified in the new policy in a timely manner. ORP officials acknowledged that an implementation strategy could be beneficial because it could provide concrete steps for employing DHS’s new resilience policy and harmonizing component efforts. In previous work, we identified key characteristics that should be included in a strategy, as discussed below. Goals, subordinate objectives, activities, and performance measures set clear desired results and priorities, specific milestones, and outcome-related performance measures while giving implementing parties flexibility to pursue and achieve those results within a reasonable time frame. Organizational roles, responsibilities, and mechanisms for coordinating their efforts identify the relevant departments, components, or offices and, where appropriate, the different sectors, such as state, local, private, or international sectors. The strategy would also clarify implementing organizations’ relationships in terms of leading, supporting, and partnering. Resources, investments, and risk management identify, among other things, the sources and types of resources and investments associated with the strategy, and where those resources and investments should be targeted. 
As DHS implements its resilience policy, an implementation strategy with these characteristics could provide ORP with a clear and more complete picture of how DHS components are implementing this policy, as well as how the various programs and activities are helping to enhance critical infrastructure resilience in their areas of responsibility. For example, establishing desired results and priorities, such as departmentwide resilience objectives, could help components better understand and communicate how their actions and strategies fulfill those policy objectives. It could also help ORP maintain awareness of various component actions and how these actions align with the policy while also helping components identify which actions are most critical to addressing these objectives. Additionally, milestones could help to ensure that ORP is receiving timely input from components regarding their actions to address resilience objectives, and help ORP and components determine whether adjustments to the policy are needed. Furthermore, as part of the strategy, developing performance measures, such as the number of components that have reported back on resilience efforts, would help provide ORP with more complete information for gauging the level of component acceptance of the policy and understanding of how components’ actions address resilience objectives. Moreover, identifying relevant government entities and implementing organizations could provide components with clear expectations for collaborating with other partners inside and outside of DHS, and reporting this collaboration back to ORP. This step could also clearly define departmental components responsible for promoting resilience by identifying critical stakeholders and subject matter experts within and outside of DHS. 
Moreover, clarifying relationships among components, other government entities, and private partners could foster a greater understanding of their dependence on one another and provide valuable perspective for ORP. Finally, identifying the types of resources and investments needed, and where they should be targeted, could help provide guidance to the implementing components to manage resources and lead them to consider where resources should be invested now and in the future based on balancing risk reductions and costs. ORP officials stated that they have focused initial efforts on developing the resilience policy, and had not given consideration to developing an implementation strategy for this policy. Going forward, we believe that focusing efforts on developing an implementation strategy that includes the elements we identified could benefit DHS components' efforts to enhance resilience. The Coast Guard works with asset owners and operators to assess and enhance various aspects of port critical infrastructure resilience—such as security protection, port recovery, and risk analysis efforts, as described in table 1. In general, officials from the seven Coast Guard sectors we interviewed and various industries at the three ports we visited cited the efforts depicted in table 1 as helpful in addressing or raising awareness of resilience-related issues (e.g., port security and recovery). Their views on the value of some of these key efforts are summarized below. Area Maritime Security Committees (AMSCs). Coast Guard officials we met with at each of the seven sectors stated that they maintain working relationships with port stakeholders via the AMSCs and other groups, which provide a forum for regular communication among port stakeholders on issues related to security and recovery—key elements of resilience. At the three ports we visited, industry stakeholders also cited the importance of the AMSCs in raising awareness of security or resilience issues.
In addition, our prior work has illustrated the importance of AMSCs in facilitating information sharing at the port level. One example of Coast Guard efforts to promote resilience at the local level through the AMSC is occurring at Sector Delaware Bay. Coast Guard officials there reported working with members of the local maritime exchange to develop a guide to business continuity planning—an important element in enhancing resilience. According to sector officials, the guide was developed to assist smaller businesses in the port area that lacked the capability or funds to develop a business continuity plan in-house. Delaware Bay officials reported that they have shared this template with other Coast Guard sectors as well. Port security exercises. Officials from six of the seven sectors and industry officials at the three ports we visited cited the importance of addressing recovery and resilience planning issues through various training exercises, whether sponsored by the Coast Guard or other entities. For example, officials in one Coast Guard sector spoke about the importance of a training exercise focused on waterway recovery in getting intermodal stakeholders (such as container terminal operators) to think beyond impacts on their own facilities and consider the resilience of the port area as a whole (e.g., how the port would meet the needs of partners dependent on its shipping services). Port Security Grant Program (PSGP). Officials at five of the seven Coast Guard sectors—as well as industry stakeholders at the three ports we visited—cited the PSGP as an important means of addressing risk management and resilience issues in port areas. For example, one river pilots' association reported that it used PSGP funds to expand the use of a radar system for tracking vessels and provided access to the information to the Coast Guard, police, and other authorities. Thus, this system could both increase portwide awareness and aid in recovery efforts following an incident.
In addition, officials at four Coast Guard sectors, as well as industry stakeholders, pointed to the PRMP as helpful in identifying security gaps and priorities to be addressed. Maritime Security Risk Analysis Model (MSRAM). Coast Guard officials have stated that, as part of the evolution of MSRAM, the Coast Guard is taking preliminary steps to make the model more helpful in assessing resilience. Specifically, the agency is considering ways to use MSRAM data and other tools to help mitigate the criticality or risk levels of key critical infrastructure while also improving its estimates of secondary economic impacts of an event. According to MSRAM program officials, these efforts are in very early stages. While not focused specifically on ports, IP assists critical infrastructure owners and operators of individual assets throughout the nation in understanding their own level of resilience through voluntary assessments and surveys. IP also conducts assessments of regional resilience in some areas of the country. As discussed earlier, IP employs voluntary assessments and security surveys aimed at helping these owners and operators identify and potentially address vulnerabilities, among other things. In addition, IP has two key efforts designed to help enhance resilience—its Resilience Index/Assessment Methodology and Regional Resiliency Assessment Program (RRAP), described below. Resilience Index/Assessment Methodology. IP has developed a Resilience Index for its vulnerability assessments and security surveys. This index is intended to gauge the level of resilience at critical infrastructure, guide prioritization of resources for improving critical infrastructure, and also provide information to owners/operators about their facility's standing relative to those of similar sector assets and how they may increase resilience.
IP is also in the process of developing a new Resilience Assessment Methodology to improve DHS's ability to assess asset-level resilience, inform regional resilience efforts, and measure progress in enhancing resilience. RRAP. These assessments examine vulnerabilities to help improve resilience and allow for an analysis of infrastructure "clusters" and systems in various regions. This program, which uses vulnerability assessments and surveys, along with other tools, has included ports as a transportation hub element of a larger regional analysis, but has not yet been applied to focus solely on a port. The RRAP evaluates critical infrastructure on a regional level to examine vulnerabilities, threats, and potential consequences from an all-hazards perspective to identify dependencies, interdependencies, cascading effects, resilience characteristics, and gaps. For example, an RRAP review could involve compiling information from reviews of critical infrastructure assets—such as electricity providers and transport companies—to form an overall assessment of a key transportation and energy corridor within a state. The RRAP assessments are conducted by DHS officials (including PSAs in collaboration with SSAs); other federal officials; state, local, territorial, and tribal officials; the private sector; and one or more resilience subject matter experts, depending upon the sectors and assets selected. The results of the RRAP are to be used to enhance the overall security posture of the assets, surrounding communities, and the geographic region covered by the project. According to DHS officials, the results of specific asset-level assessments conducted as part of the RRAP are made available to asset owners and operators and other partners (as appropriate), but the final analysis and report are delivered to the state where the RRAP occurred.
Further, according to DHS, while it continues to perform surveys and assessments at individual assets, prioritizing efforts to focus on regional assessments allows DHS to continue to meet evolving threats and challenges. IP officials also informed us that through the RRAPs, the focus of its vulnerability assessment efforts has evolved over the years from a single-facility assessment to an approach that integrates the results of multiple single-facility assessments to inform a regional analysis of resilience and security through the study of dependencies and interdependencies between and among asset operators. IP officials stated that the Coast Guard participates in RRAPs that include a maritime component. The officials have also informed us that the results of Coast Guard reports and assessments are included in the Resiliency Assessment (the RRAP final report) for RRAPs that include a maritime component, and the information is appropriately derived to alleviate any information-sharing concerns. IP also reports that it has done some ECIPs/Site Assistance Visits at facilities associated with ports (e.g., refineries, storage facilities, and marine terminals). In addition, officials we spoke with from four Coast Guard sectors and PSAs representing five areas report maintaining relationships with one another through the AMSCs or other venues to facilitate information sharing. While the Coast Guard and IP have collaborated on some regional resilience assessments, there may be opportunities for further collaboration and use of existing tools to conduct portwide resilience assessment efforts. For example, IP and the Coast Guard could leverage some of the expertise and tools discussed above—such as the RRAP approach—to develop assessments of the overall resilience of one or more specific port areas. Currently, many of the Coast Guard's formal security assessments (i.e., facility security plan reviews and MSRAM) are focused on asset-level security.
For example, our prior work on MSRAM demonstrates that this tool assesses security risks to individual assets, not regions or systems of assets. In addition, the facility security plan reviews are not voluntary, but are conducted to fulfill regulatory requirements. In contrast, IP's RRAP allows for a broader, more systemic analysis of resilience, and industry provides information to IP on a voluntary basis. IP officials stated that IP has not conducted any RRAPs focused exclusively on ports, and does not intend to, because of the Coast Guard's role as lead agency for ensuring port safety, recovery planning, and security, and because IP has limited resources for conducting additional RRAPs. However, IP has conducted RRAPs of regional corridors that have a nexus to a port or waterside critical infrastructure assets. For example, one recent RRAP review focused on a regional transportation and energy corridor and discussed the critical importance of a local port in providing fuel, medicine, and other "life-sustaining" goods throughout the state. The report found, among other things, that the port had no emergency power-generating capability; thus, a disruption to the power grid supporting port operations could seriously affect distribution of these life-sustaining goods to state residents. The report recommended that the port work to establish an agreement with another local entity to secure emergency power supplies. This work illustrates the potential vulnerabilities—and mitigation steps—that could be identified through a port-focused resilience review. In addition, the National Infrastructure Advisory Council (NIAC) supports further use of RRAPs, reporting that the RRAP is viewed in the field as a "model of collaboration" in understanding regional and community resilience, and recommending that its use be expanded "as quickly as feasible." ORP officials have also stated that having Coast Guard and IP leverage resources and collaborate on systematic portwide resilience assessments could be beneficial.
In addition, during the course of our review, we learned of a state-led, ongoing effort to assess portwide resilience at one port area that could prove to be an example of beneficial collaboration that enhances the understanding of port resilience. The New Jersey Office of Homeland Security and Preparedness is leading an effort to develop a computer-based decision support tool that could model the impacts of various disruptions on all critical infrastructure owners and operators within the New York/New Jersey port area. The project team—in collaboration with federal, state, local, and private stakeholders—is examining data from critical facilities and prior assessments to develop decision-making tools to model various scenarios. In addition, according to involved officials, the model is designed to be expandable and transferable to other ports. Project officials stated that cooperation by critical industry stakeholders has been a key factor in the project's development so far. These officials stated that they hope to develop three key tools: (1) a decision support tool that identifies port area vulnerabilities; (2) a port recovery and resumption-of-trade plan that helps to develop strategic issues to be addressed; and (3) a compendium of specific recommendations in the area of resilience, some aimed at specific facilities, some requiring portwide cooperation to address. Various stakeholder groups have noted that in addition to the development of tools to enhance resilience, collaboration among partners is also key because of the expertise that each party can contribute to a better understanding of resilience. For example, NIAC and the State, Local, Tribal, and Territorial Government Coordinating Council have reported on a general lack of understanding by state and local community partners of the nature of interdependencies among infrastructure sectors and across communities.
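The kind of interdependency modeling such decision support efforts pursue can be illustrated with a small sketch. The example below is purely illustrative, not the New Jersey tool itself: the asset names and dependency edges are hypothetical, and it assumes a worst case in which an asset fails whenever any asset it depends on fails (i.e., no backup capacity).

```python
# Illustrative sketch: propagating a disruption through a hypothetical
# port-area dependency graph to surface cascading impacts.
from collections import deque

# dependencies[x] lists the assets that x needs in order to operate
# (all names and edges here are invented for illustration)
dependencies = {
    "container_terminal": ["power_grid", "trucking", "labor_force"],
    "cranes": ["power_grid"],
    "refinery": ["power_grid", "water_supply"],
    "trucking": ["fuel_distribution"],
    "fuel_distribution": ["refinery"],
    "power_grid": [],
    "water_supply": ["power_grid"],
    "labor_force": [],
}

def cascade(disrupted_asset, dependencies):
    """Return the set of assets rendered inoperable, directly or
    indirectly, when disrupted_asset fails (worst-case assumption:
    an asset fails if any of its dependencies fails)."""
    # Invert the graph: for each asset, who depends on it
    dependents = {a: [] for a in dependencies}
    for asset, deps in dependencies.items():
        for d in deps:
            dependents[d].append(asset)
    # Breadth-first propagation of the failure
    impacted = {disrupted_asset}
    queue = deque([disrupted_asset])
    while queue:
        current = queue.popleft()
        for downstream in dependents[current]:
            if downstream not in impacted:
                impacted.add(downstream)
                queue.append(downstream)
    return impacted

impacted = cascade("power_grid", dependencies)
```

In this toy graph, a power grid failure flags every asset except the labor force as impacted, including trucking, which is reached only indirectly through the refinery and fuel distribution; such second- and third-order effects are exactly the "secondary and tertiary impacts" the report notes are not immediately obvious to businesses and communities.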
Both organizations recommended that IP take a lead role in developing tools and techniques that could help community partners at the state and local levels identify and assess infrastructure interdependencies. We have reported in the past on how collaborating agencies can better identify and address needs by leveraging one another's resources to obtain additional benefits that would not be available if they were working separately. Standards for Internal Control in the Federal Government also states that program management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. Thus, a collaborative effort between the Coast Guard and IP to assess portwide resilience—leveraging tools and assessment approaches developed by either component, which could include MSRAM and the RRAP—could yield benefits. Specifically, the Coast Guard's assessments of port/maritime assets coupled with IP's assessments of other critical infrastructure with a port nexus could lead to a better understanding of the interdependencies critical to keeping a port operational. DHS officials have stated that any collaborative efforts to assess portwide resilience must take into account the difference between the Coast Guard's regulatory and IP's voluntary missions. For example, certain information gathered by IP from industry through voluntary assessments, surveys, or programs such as RRAP cannot be shared with the Coast Guard (or other federal entities) for regulatory purposes, though it can be shared for conducting other types of analyses, such as port security reviews. In structuring any such collaboration, DHS would have to protect such information. DHS's support for enhancing resilience is already evident in IP's voluntary assessments, as well as DHS's involvement in and endorsement of the New York/New Jersey port area project.
Identifying opportunities to leverage tools and resources to collaboratively conduct portwide resilience assessments could enhance stakeholders’ understanding of interdependencies with other port partners, and help to focus scarce resources to enhance resilience for the port area. This understanding is important to maintaining port operations, thus minimizing the potential adverse economic impact on the U.S. economy in the event of a disruption in port operations. DHS has taken initial steps to emphasize the concept of resilience among its components by developing a resilience policy. This has been an important step and is appropriately intended to provide component agencies with a single, consistent, departmentwide understanding of resilience. Developing an implementation strategy for this new policy is the next key step that could help strengthen DHS’s resilience efforts. For example, an implementation strategy that identifies goals and objectives could help DHS components to identify, among other things, the actions that are most critical to addressing DHS’s policy objectives. Similarly, an implementation strategy that identifies responsible entities and their roles, as well as specific milestones and performance measures, could provide components with clear expectations for collaborating with other partners, and enhance DHS’s awareness of components’ understanding and implementation of the policy. This collective information, in turn, would allow DHS to better assess the progress being made by its components in addressing DHS resilience objectives. At the port level, U.S. ports, waterways, and vessels are part of a major economic engine, and a significant disruption to this system could have a widespread impact on the U.S. economy, as well as global shipping, international trade, and the global economy. 
Coast Guard and IP actions have addressed some aspects of critical infrastructure resilience, but the Coast Guard and IP could take additional action to enhance their collaboration and use existing tools and resources to promote portwide resilience. For example, IP and the Coast Guard could leverage existing expertise and tools—such as IP’s RRAP approach—to develop assessments of the overall resilience of one or more port areas. Having relevant agencies collaborate and leverage one another’s resources to conduct joint portwide resilience assessments could further all stakeholders’ understanding of interdependencies with other port partners, and better direct scarce resources to enhance port resilience. To better ensure consistent implementation of and accountability for DHS’s resilience policy, we recommend that the Secretary of Homeland Security direct the Assistant Secretary for Policy to develop an implementation strategy for this new policy that identifies the following characteristics and others that may be deemed appropriate: steps needed to achieve results, by developing priorities, milestones, and performance measures; responsible entities, their roles compared with those of others, and mechanisms needed for successful coordination; and sources and types of resources and investments associated with the strategy, and where those resources and investments should be targeted. To allow for more efficient efforts to assess portwide resilience, the Secretary of Homeland Security should direct the Assistant Secretary of Infrastructure Protection and the Commandant of the Coast Guard to look for opportunities to collaborate to leverage existing tools and resources to conduct assessments of portwide resilience. 
In developing this approach, DHS should consider the use of data gathered through IP’s voluntary assessments of port area critical infrastructure or regional RRAP assessments—taking into consideration the need to protect information collected voluntarily—as well as Coast Guard data gathered through its MSRAM assessments, and other tools used by the Coast Guard. We provided a draft of this report to the Secretary of Homeland Security for review and comment. In its written comments reprinted in appendix I, DHS concurred with both of our recommendations. With regard to our first recommendation, that DHS develop an implementation plan for its forthcoming resilience policy, DHS stated that while its RIT has worked to draft a resilience policy including findings and policy statements from key strategic documents such as the QHSR, the department has yet to commence developing an implementation strategy. DHS also noted that it has undertaken a range of activities that support resilience and that further avenues—such as an implementation strategy—are under consideration. Developing an implementation strategy for its resilience policy that addresses the steps needed to achieve results; identifies entities responsible for implementing the policy, their roles, and coordination mechanisms; and determines the resources and investments associated with the strategy would address the intent of our recommendation. With regard to our second recommendation, that DHS seek opportunities for IP and the Coast Guard to collaborate in assessing portwide resilience, DHS stated that the two components would work with ORP in defining their roles in contributing to port resilience. DHS also stated that the RIT would create a subcommittee this fiscal year to provide a forum for discussing the harmonization of resilience activities and programs across DHS. 
These proposed actions appear to be positive steps in enhancing IP and Coast Guard collaboration that would address the intent of this recommendation. DHS provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Homeland Security, applicable congressional committees, and other interested parties. This report is also available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-9610 or caldwells@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Stephen L. Caldwell, (202) 512-9610 or CaldwellS@gao.gov. In addition to the contact named above, Dawn Hoff, Assistant Director, and Adam Couvillion, Analyst-in-Charge, managed this assignment. Adam Anguiano, Michele Fejfar, Eric Hauswirth, Tracey King, and Jessica Orr made significant contributions to the work. Maritime Security: Progress and Challenges 10 Years after the Maritime Transportation Security Act. GAO-12-1009T. Washington, D.C.: September 11, 2012. Critical Infrastructure Protection: DHS Could Better Manage Security Surveys and Vulnerability Assessments. GAO-12-378. Washington, D.C.: May 31, 2012. Maritime Security: Coast Guard Efforts to Address Port Recovery and Salvage Response. GAO-12-494R. Washington, D.C.: April 6, 2012. Coast Guard: Security Risk Model Meets DHS Criteria, but More Training Could Enhance Its Use for Managing Programs and Operations. GAO-12-14. Washington, D.C.: November 17, 2011. Port Security Grant Program: Risk Model, Grant Management, and Effectiveness Measures Could Be Strengthened. GAO-12-47. Washington, D.C.: November 17, 2011. Critical Infrastructure Protection: DHS Has Taken Action Designed to Identify and Address Overlaps and Gaps in Critical Infrastructure Security Activities. GAO-11-537R. 
Washington, D.C.: May 19, 2011. Maritime Security: DHS Progress and Challenges in Key Areas of Port Security. GAO-10-940T. Washington, D.C.: July 21, 2010. Critical Infrastructure Protection: DHS Efforts to Assess and Promote Resiliency Are Evolving but Program Management Could Be Strengthened. GAO-10-772. Washington, D.C.: September 23, 2010. Critical Infrastructure Protection: Update to National Infrastructure Protection Plan Includes Increased Emphasis on Risk Management and Resilience. GAO-10-296. Washington, D.C.: March 5, 2010. Maritime Security: The SAFE Port Act: Status and Implementation One Year Later. GAO-08-126T. Washington, D.C.: October 30, 2007. Port Risk Management: Additional Federal Guidance Would Aid Ports in Disaster Planning and Recovery. GAO-07-412. Washington, D.C.: March 28, 2007. Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005. Maritime Security: New Structures Have Improved Information Sharing, but Security Clearance Processing Requires Further Attention. GAO-05-394. Washington, D.C.: April 15, 2005. | U.S. ports are part of an economic engine handling more than $700 billion in merchandise annually, and a disruption to port operations could have a widespread impact on the global economy. DHS has broad responsibility for protection and resilience of critical infrastructure. Within DHS, the Coast Guard is responsible for the maritime environment, including port safety and security, and IP works to enhance critical infrastructure resilience. Recognizing the importance of the continuity of operations in critical infrastructure sectors, DHS has taken initial steps to emphasize the concept of resilience. GAO was asked to review port resilience efforts.
This report addresses the extent to which (1) DHS has provided a road map or plan for guiding resilience efforts, and (2) the Coast Guard and IP are working with port stakeholders and each other to enhance port resilience. To address these objectives, GAO analyzed key legislation and DHS documents and guidance. GAO conducted site visits to three ports, selected based on geography, industries, and potential threats; GAO also interviewed DHS officials and industry stakeholders. Information from site visits cannot be generalized to all ports, but provides insights. The Department of Homeland Security (DHS) is developing a resilience policy, but an implementation strategy is a key next step that could help strengthen DHS resilience efforts. DHS defines resilience as the ability to resist, absorb, recover from, or adapt to adversity, and some high-level documents currently promote resilience as a key national goal. Specifically, two key White House documents emphasize resilience on a national level--the 2011 Presidential Policy Directive 8 and the 2012 National Strategy for Global Supply Chain Security. Since 2009, DHS has emphasized the concept of resilience and is currently in the process of developing a resilience policy, the initial steps of which have included creating two internal entities--the Resilience Integration Team and the Office of Resilience Policy (ORP). According to ORP officials, they saw a need to establish a policy that provides component agencies with a single, consistent, departmentwide understanding of resilience that clarifies and consolidates resilience concepts from high-level guiding documents, and helps components understand how their activities address DHS's proposed resilience objectives. ORP officials hope to have an approved policy in place later this year. However, DHS officials stated that currently there are no plans to develop an implementation strategy for this policy. 
An implementation strategy that defines goals, objectives, and activities; identifies resource needs; and lays out milestones is a key step that could help ensure that DHS components adopt the policy consistently and in a timely manner. For example, an implementation strategy with goals and objectives could provide ORP with a more complete picture of how DHS components are implementing this policy. The Coast Guard and the Office of Infrastructure Protection (IP) work with stakeholders to address some aspects of critical infrastructure resilience, but they could take additional collaborative actions to promote portwide resilience. The Coast Guard is port focused and works with owners and operators of assets, such as vessels and port facilities, to assess and enhance various aspects of critical infrastructure resilience in ports--such as security protection, port recovery, and risk analysis efforts. In contrast, IP, through its Regional Resiliency Assessment Program (RRAP), conducts assessments with a broader regional focus, but is not port specific. An RRAP assessment examines vulnerability to help improve resilience and allows for an analysis of infrastructure "clusters" and systems in various regions--for example, a regional transportation and energy corridor. The Coast Guard and IP have collaborated on some RRAP assessments, but there may be opportunities for further collaboration to conduct port-focused resilience assessments. For example, IP and the Coast Guard could collaborate to leverage existing expertise and tools--such as the RRAP approach--to develop assessments of the overall resilience of specific port areas. Having relevant agencies collaborate and leverage one another's resources to conduct joint portwide resilience assessments could further all stakeholders' understanding of interdependencies with other port partners, and help determine where to focus scarce resources to enhance resilience for port areas.
GAO recommends that DHS develop an implementation strategy for its resilience policy and that the Coast Guard and IP identify opportunities to collaborate to leverage existing tools and resources to assess port resilience. DHS concurred with GAO's recommendations. |
During the storage and distribution of the billions of pounds of food consumed annually in the United States, some food is damaged or contaminated because of mishandling, accidents (e.g., fires, explosions, or truck and train accidents), or natural and man-made disasters (e.g., earthquakes, hurricanes, floods, or riots). Food that is adulterated or contaminated is generally destroyed. However, if the food is determined to be safe, it may be salvaged and “reconditioned” for consumption. Both FDA and the U.S. Department of Agriculture (USDA) are responsible for ensuring that all food shipped or received in interstate commerce is safe for consumption. FDA enters into contracts or initiates cooperative agreements with state authorities to inspect food manufacturers and warehouses, including operations to salvage food. According to FDA officials, state and local authorities are the most effective regulatory bodies for monitoring such operations because (1) FDA has no authority to place an embargo on hazardous food; (2) the states have intensive regulatory coverage of food warehouses and retail establishments, where most food salvaging operations occur; and (3) FDA has concentrated its resources on issues that pose a higher risk to public health, such as monitoring the blood supply and the safety of medical devices. USDA directly monitors meat and poultry salvaging operations using its own inspectors or designates states to perform inspections when they have inspection programs that impose requirements at least equal to those of federal law. When a major disaster occurs, states may contact FDA and/or USDA field offices for assistance and advice. However, FDA’s operational procedures state that in unusual circumstances, such as those involving the interstate movement of merchandise or areas in which state or local political ramifications are anticipated, FDA may assume the primary role in overseeing salvaging operations.
On December 28, 1991, a major disaster occurred when a fire began in a storage cave of approximately 100 acres owned by Americold Services Corporation in Kansas City, Kansas. This man-made limestone cave is the largest underground food storage facility in the world, with freezers, coolers, and dry storage areas accessible by truck and rail. Figure 1 shows the layout of the Americold cave, including the location of the fire. When the fire began, about 245 million pounds of food was stored in the cave. Of that amount, about 159 million pounds was owned by about 110 private food companies; USDA owned the remaining 86 million pounds. The products stored in the cave included dry milk, cheese, butter, fruit, nuts, and other dry goods, as well as canned and frozen meats, vegetables, and fruits. The fire started in an area of the cave containing grocery items, including cleaning compounds, pesticides, paper goods, and cooking oil. The fire reached temperatures approaching 2,000 degrees Fahrenheit and, despite continuous fire-fighting efforts, burned for about 2 months. (See fig. 2.) The fire was confined to one section of the cave, but smoke flowed throughout the cave, exposing food to smoke residue for a prolonged period. According to FDA, this event was unique in that no other fire had involved such a large quantity of food that was exposed to smoke for such a long time. Following the fire, the Kansas Department of Health and Environment (KDHE) met with FDA and other federal, state, and local agencies to determine a course of action for protecting the public health and supervising the salvaging operations. It was decided that KDHE should take the lead in overseeing the salvaging, with assistance from FDA’s district office in Kansas City. Such an arrangement is typical in routine salvaging operations.
According to FDA’s records, contaminants found in the air and on surfaces in the cave included toluene, benzene, and phenol—substances cited by the Environmental Protection Agency as being carcinogenic and causing genetic changes and mutations. Because of the potential risk to public health from these contaminants, KDHE, with advice from FDA, placed an embargo on all of the stored food. The embargo was to continue until the owners of the food presented KDHE with evidence, based on laboratory analysis, that the food was suitable for consumption. In many instances, ownership of the food transferred to insurance companies and, ultimately, to food salvagers. The insurers and salvagers were eager to begin salvaging operations and, according to KDHE officials, placed pressure on KDHE to release the food. The salvaging operations began almost immediately and continued for over 2 years. Table 1 summarizes the final disposition of the food stored in the cave. Over 143 million pounds of food was sent to landfills to be destroyed, and about 102 million pounds was released for reconditioning and consumption. Most of the 102 million pounds of food salvaged from the fire was released to the public with little apparent controversy. However, in December 1993, about 2 years after the fire began, a series of articles in the Kansas City Star raised questions about the release of food to a Minnesota food salvager. About 3.7 million pounds of food was shipped to this salvager, and all but about 100,000 pounds was eventually sold to the public. Appendix I provides a chronology of the key events in the release of the food to this salvager. Our review of food salvaging activities following the fire—particularly those involving the shipment of food to Minnesota—found two problems from which lessons can be learned to improve future salvaging operations. 
First, FDA did not adequately share information with KDHE about past problems it had experienced with a food owner’s consultant and his laboratories. This consultant’s laboratory test results were used to demonstrate to KDHE the safety of food later released to the public. Second, FDA did not communicate its guidance on food sampling to the KDHE officials responsible for overseeing the salvaging operations. FDA relies on such guidance internally to ensure the integrity of analytical data from private laboratories. Both of these problems suggest the need for FDA to be more proactive in helping states manage food salvaging following major disasters. KDHE allowed several million pounds of food salvaged from the Americold fire to be sent to Minnesota on the basis of laboratory results submitted by a consultant to one of the food owners. KDHE officials subsequently learned from FDA that this consultant and his laboratories had been under investigation by FDA and that two of his laboratories were on FDA’s “nonacceptance” list. However, FDA did not provide this information in a timely manner either to its Kansas City District Office or to the KDHE investigators overseeing the salvaging of the food. In April 1992, KDHE asked FDA’s Kansas City District Office for advice on the consultant’s plans for sampling and testing food that had been stored in the Americold cave. The consultant had been hired by a food owner to sample and test the food for chemical and smoke residues. FDA’s district office raised several concerns about the consultant’s plans. However, it provided no information to KDHE about the past performance of the consultant or his laboratories. This information was known within FDA but was not shared with the FDA investigator advising KDHE. FDA’s Division of Field Science in Washington, D.C., maintains and periodically distributes to FDA district offices a “nonacceptance” list of some private laboratories. 
According to FDA officials, the list provides information about private laboratories that at least one FDA district office has found to be unacceptable for performing certain or all analytical tests. FDA’s district offices may use this information in deciding whether to accept or reject analyses from a particular laboratory. Much of this information is based on enforcement activities in FDA’s program for monitoring imported food. FDA’s information indicated that two of the consultant’s laboratories were unacceptable for performing any analyses. The investigator from FDA’s district office said that he was unaware that such a list existed until June 13, 1992, when he learned of it from a visiting FDA scientist. A month later, he advised KDHE not to accept test results from the consultant’s laboratories. However, the consultant informed KDHE that the analyses were being performed by another laboratory that KDHE, on the basis of discussions with the Minnesota Department of Agriculture, had determined to be reputable. This laboratory was not affiliated with the consultant. On the basis of this information and subsequent laboratory results indicating that the tested food was not contaminated, KDHE allowed the food to be shipped, under embargo, to a Minnesota salvager. KDHE officials later learned from FDA that the consultant himself was the subject of an ongoing FDA investigation concerning the falsification of laboratory data. They said that if they had known this earlier, they would not have allowed the food to be shipped to Minnesota. After the food shipments to Minnesota began, the Minnesota Department of Agriculture asked FDA to test a truckload of cheese. Minnesota state food inspectors were concerned because the containers were covered with dust and smelled of smoke. FDA’s test results showed that some hazardous chemicals, including toluene, were present in the cheese. 
However, according to an FDA official, the levels of chemicals found did not pose a health hazard. The remaining food held by the salvager was retested by a private laboratory, judged to be safe for consumption, and eventually sold to the public. Officials from KDHE and the Minnesota Department of Agriculture told us that no illnesses have been attributed to this food. FDA has published guidance on food sampling to ensure the credibility, accuracy, and reliability of analytical data from private laboratories. This guidance, which primarily concerns FDA’s regulation of imported foods, was provided to KDHE’s state laboratory but not to the KDHE officials managing the food salvaging operations. The food sampling processes KDHE used in the salvaging operations following the fire lacked some important controls, thereby creating the risk that unsafe food might be released to the public. For example, food owners selected food samples without a KDHE official or other disinterested third party present. In addition, the consultant discussed earlier maintained control over food samples that were to be tested for chemical residues. Although it has no legislative regulatory authority over private laboratories, FDA has internal guidance to help ensure that laboratories performing analyses of FDA-regulated commodities submit scientifically sound data. In March 1992, FDA provided Kansas with its Laboratory Procedures Manual, which spells out recommended sampling controls that FDA uses in monitoring imported foods. Among other things, the guidance recommends that scientific data supplied by private laboratories be obtained by using sound methods of sampling and analysis and that sampling be performed by a disinterested, objective third party. The KDHE officials responsible for overseeing the food salvaging operations said, however, that they were not aware of this guidance because it had been provided only to KDHE’s state laboratory. 
They also noted that the FDA officials assisting them had not brought this guidance to their attention. They said that if they had been aware of the guidance, they would have required all food owners to hire a disinterested third party to perform food sampling and ensured that the chain of custody over food samples was secured. In discussing FDA’s participation in overseeing the salvaging activities following the Americold fire, FDA officials said they viewed their role as limited to that of a consultant. According to one FDA official, FDA’s role was limited to providing information to KDHE when requested, and FDA was not to anticipate what issues needed to be addressed. KDHE had to make decisions about the release of potentially contaminated food under stressful conditions, including pressure from food owners to expeditiously release the food for salvaging. KDHE relied on FDA, which has considerable experience in dealing with food safety issues, for advice and guidance. However, although the Americold fire was a major disaster with potentially serious consequences resulting from the release of improperly tested food, FDA continued to view its role as that of a consultant—primarily responding to specific requests from KDHE for advice. Such an interpretation may be appropriate for routine salvaging activities; however, this was not a routine operation. Over the years, FDA has developed considerable nationwide experience and expertise in food safety. We believe that in future disasters of this magnitude, in which so much is at stake and improper decisions can adversely affect food safety, FDA should proactively draw upon this expertise and provide stronger leadership in working with states to maintain the safety of the food supply. We recommend that FDA more actively assist states in managing food salvaging operations following major disasters.
At a minimum, FDA should ensure that (1) the information it has about private food testing laboratories and key personnel is communicated to state officials responsible for monitoring food salvaging operations after a major disaster and (2) these state officials are made aware of FDA’s guidance for maintaining the integrity of the food sampling process. In commenting on a draft of this report, FDA disagreed with our conclusions and recommendations. FDA described the assistance it provided KDHE and said it had worked very closely with KDHE officials to ensure that the public health was protected and that unsafe food did not reach consumers. FDA stated that following a series of meetings, it was agreed that KDHE was the agency most suited to take the lead in the day-to-day supervision of the salvaging operations and that FDA’s Kansas City District Office would support KDHE in any way required. Overall, FDA said it believed its actions in assisting KDHE were correct and appropriate. With regard to our first recommendation, FDA stated that it would be inappropriate to routinely distribute its “nonacceptance” list of private laboratories to states, noting that (1) FDA does not have a regulatory mechanism for declaring a laboratory or analyst unacceptable, (2) the list could be misconstrued and used inappropriately, and (3) more aggressive distribution of the list could jeopardize FDA’s ability to maintain and internally disseminate information about the laboratories’ performance. With regard to our second recommendation—ensuring that appropriate state officials are made aware of FDA’s guidance on food sampling—FDA said it had provided KDHE with this guidance. FDA maintained that it is the state agency’s responsibility to ensure that individual employees receive copies of pertinent FDA documents. 
We recognize that FDA supported KDHE in dealing with the salvaging operations subsequent to the Americold fire and have added information to the report to more fully describe the nature of that assistance. However, we continue to believe that lessons learned from the Americold experience can make FDA’s support more effective in future disasters—the overall lesson being that FDA needs to provide stronger, more proactive leadership in assisting states in the aftermath of major disasters. Our report notes that KDHE took the lead in overseeing salvaging operations, with FDA’s Kansas City District Office acting in a consultant’s role—primarily responding to requests from KDHE for assistance—and that such an arrangement was typical in routine salvaging operations. However, the Americold fire and the subsequent salvaging operations were not routine. As FDA itself noted, “this event was unique in that no other fire has involved such a large quantity of food that was exposed to smoke for such a prolonged period of time.” It may be appropriate, in routine circumstances, for FDA to wait until states seek advisory information from it. However, in major disasters, we believe that FDA needs to draw upon its nationwide experience and expertise in food safety and more proactively provide relevant information to state officials responsible for dealing with such an event. Regarding our recommendation that FDA share with states information about private laboratories and key personnel, we recognize that FDA’s “nonacceptance” list is not intended to be a means of certifying a laboratory or declaring it unacceptable and that FDA believes it has no regulatory authority to do so. Furthermore, we understand FDA’s concern that aggressive dissemination of the list could result in inappropriate use of the information on it. 
Nevertheless, as discussed in our report, the list may contain information of great relevance to state officials making critical decisions affecting the safety of the food supply. To balance the risk of further disseminating FDA’s list with that of withholding potentially important information on it, we have worded our recommendation to say that following major disasters, FDA should communicate information it has about private food testing laboratories and key personnel to state officials responsible for monitoring food salvaging operations. Thus, we are not recommending that the list itself be disseminated, but rather information on the list as well as any other relevant information about the performance of laboratories and key personnel. The form in which FDA wishes to convey this information, as well as any caveats attached to it, is left to FDA’s discretion. Under these circumscribed conditions, we believe that FDA can maintain adequate control over the information to ensure that it is not inappropriately used. With regard to our second recommendation concerning communicating FDA’s guidance on food sampling to appropriate state officials, FDA explained that it had provided its Laboratory Procedures Manual, containing guidance on food sampling controls, to KDHE’s state laboratory, which was not directly involved in food salvaging following the Americold fire. The KDHE officials who were overseeing the salvaging operations were unaware of this guidance, and FDA did not bring it to their attention. We believe that FDA officials assisting states in major disasters should take the initiative to ensure that state officials who are managing the food salvaging operations be made aware of key FDA guidance, such as that pertaining to the food sampling process. Appendix II contains the complete text of FDA’s comments, along with our responses. 
To obtain information on the food salvaging that occurred after the Americold fire and to identify the lessons learned, we interviewed FDA officials in Washington, D.C., Kansas, and Minnesota; USDA officials in Washington, D.C., and Kansas; and state health officials in Kansas and Minnesota. In addition, we interviewed a food salvager located in Minnesota. We reviewed FDA, USDA, and state records on the Americold fire at the locations listed above. We also reviewed laws and regulations applicable to food salvaging. We conducted our review from June 1994 through January 1995 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after the date of this letter. At that time, we will provide copies to the appropriate agency heads and interested congressional committees. We will also make copies available to others upon request. Please call me at (202) 512-5138 if you or your staff have any questions. Major contributors to this report are listed in appendix III. The owner of 3.7 million pounds of food hired a consultant to sample and test the food to determine if it could be salvaged. The Minnesota Department of Agriculture agreed to accept food shipped under Kansas’s embargo to a Minnesota salvager. FDA’s Kansas City District Office notified KDHE that the consultant’s laboratories were on FDA’s “nonacceptance” list and advised KDHE not to accept their results. KDHE agreed to accept laboratory results from the consultant after he told them that another laboratory had performed the analyses. KDHE began allowing food shipments to a Minnesota salvager under KDHE’s embargo after the laboratory results showed that the food was safe for human consumption. 
KDHE recommended that Minnesota’s Department of Agriculture perform organoleptic (sight, smell, taste) evaluations when the food arrived and agreed to lift the embargo upon the Minnesota Department of Agriculture’s recommendation. The Minnesota Department of Agriculture placed a voluntary hold on a cheese shipment and asked FDA to test the cheese. However, the salvager sold the cheese before the laboratory results arrived. FDA’s laboratory results showed that the cheese contained small amounts of chemicals, including toluene. FDA determined that the chemical levels were not sufficient to warrant action to seize the food. The Minnesota Department of Agriculture required the Minnesota food salvager to retest all the food from Kansas still in storage. The retested food was judged safe for human consumption. No illnesses have been attributed to the food shipped to the Minnesota salvager. The following are GAO’s comments on the Food and Drug Administration’s letter dated January 12, 1995. 1. FDA said the primary purpose of the “nonacceptance” list is to assist the agency’s district offices in reviewing analyses submitted to demonstrate whether products offered for import meet FDA’s requirements. FDA stated that many district offices have little involvement in decisions about imported products and therefore have little reason to become familiar with the list. We believe that individuals located in district offices, regardless of whether they are responsible for domestic or imported commodities, have reason to become familiar with the list, particularly when advising state agencies that may be using these same laboratories and analysts. Furthermore, FDA’s guidance was updated in June 1994 so that laboratories and analysts who have submitted unacceptable analyses for both domestic and imported commodities are included on the list. Therefore, we made no changes to the report. 2.
FDA’s Kansas City District Office advised KDHE, on July 13, 1992, not to accept results from the consultant’s laboratories but did not provide information about the consultant’s past performance. KDHE subsequently learned that the consultant had been under investigation for submitting false testing data to FDA. We have changed the chronology to show the date that KDHE was notified about the consultant’s laboratories. 3. Food was shipped to the Minnesota salvager on the basis of laboratory results presented to KDHE, not the Minnesota Department of Agriculture, as stated in FDA’s comments. 4. Our report recognizes that no illnesses have been attributed to consuming food from the cave fire. However, we have no evidence to support FDA’s claim that no dangerous products were consumed, nor have we been provided with test results showing that residue levels did not exceed levels of the same chemicals found in similar food that had not been exposed to the fire. FDA officials told us that they performed laboratory analysis on only two samples of food and did not perform the sampling and testing required by FDA’s own procedures to ensure that the entire lot of food was safe for consumption. The food was sold by the salvager before the tests were completed. 5. GAO visited another FDA office to determine whether food salvaging had occurred following the 1993 Midwest flood. FDA noted that GAO found no deficiencies in FDA’s activities, which, it said, were generally similar to those following the Americold fire. We visited an FDA office in the area affected by the flood and were informed that no salvaging requiring the use of private food testing laboratories was performed. Therefore, this event was not similar to the Americold fire. We did not revise the report. 6. 
FDA stated that our draft report implied that FDA’s Kansas City District Office did not impress upon KDHE that the consultant was not acceptable and notes that both FDA’s district office and KDHE had ample reason early on to question the consultant’s capability. We continue to believe that FDA did not adequately share information about the consultant’s past performance. While the district office raised questions about the consultant’s sampling and testing plan, it provided no information to KDHE reflecting its concerns about the consultant’s past performance. This information was available elsewhere within FDA, but was not shared with the district office officials who were advising KDHE. In fact, KDHE officials later learned that the consultant was the subject of an ongoing FDA investigation. They said that had they known this earlier, they would not have allowed food to be shipped to the Minnesota salvager. We did not revise the report. 7. FDA contends that KDHE officials are familiar with proper techniques for collecting and safeguarding samples. KDHE officials agreed that this is true for samples collected by their own food inspectors. However, they said that they rarely use private laboratories in their routine food inspection activities and that FDA has much more experience in dealing with private laboratories. We have recommended in our report that following major disasters, FDA ensure that state officials responsible for overseeing food salvaging operations are made aware of FDA’s guidance for maintaining the integrity of the food sampling process. 8. Our report acknowledges that FDA’s guidance on third-party sampling is a recommendation, not a requirement. However, KDHE officials said that had they known of FDA’s guidance, they would have required all food owners to hire a disinterested third party to perform food sampling and ensure that the chain of custody over food samples was secured. 9. 
We have added this sentence to the background section of our report.

10. We agreed with this comment and removed the word “health.”

11. We agreed with this comment and have revised the report.

12. According to FDA’s Investigations Operations Manual, subchapter 940, paragraph 942, “Except in unusual circumstances, FDA responsibilities are to assist the state and local health agencies in removing, destroying or reconditioning affected merchandise. In situations involving interstate movement of merchandise; large interstate firms; areas in which state or local political ramifications are anticipated; or when state or local health officials so request; FDA may assume the primary role in the operation.” We included this statement to show that in major disasters, FDA may take on a stronger leadership role if it chooses to do so. We do not say, nor do we mean to imply, that KDHE was in any way influenced by political ramifications.

13. We agreed with this comment and have revised the report.

14. We agreed with this comment and have revised the report.

15. We agreed with this comment and have revised the report.

16. We believe that the Americold fire—an event that FDA described as “unique in that no other fire has involved such a large quantity of food that was exposed to smoke for such a prolonged period of time” and that resulted in the destruction of over 143 million pounds of food—can appropriately be described as a major disaster. Similarly, we do not question the fact that FDA supported KDHE. However, we believe that its support could have been more effective had it provided stronger, more proactive leadership.

17. We agreed with this comment and have revised the report.

Alan R. Kasdan, Assistant General Counsel
Pursuant to a congressional request, GAO provided information on the events surrounding a fire at a food storage warehouse in Kansas, focusing on the: (1) disposition of food salvaged from the facility; and (2) lessons learned from the incident that could be used to improve regulation of the food salvaging industry. GAO found that: (1) over half of the affected food was destroyed and the remaining 102 million pounds of food was released to the public after Kansas determined its salvageability; (2) about 3.7 million pounds of food was shipped to a salvager on the basis of laboratory results furnished by a consultant who was under investigation by the Food and Drug Administration (FDA); (3) although no illnesses were attributed to the food salvaged from the Kansas fire, potential public health risks were increased by shortcomings in FDA regulation of salvaged food; (4) FDA did not share important information with Kansas regarding its past problems with the consultant and his laboratories; and (5) FDA did not provide Kansas with guidance on food sampling controls that would have been useful in its oversight of the salvaging.
Agencies use vehicles in many ways, as vehicles support agency efforts to achieve various mission needs. These needs can be diverse, as demonstrated by the vehicle uses of the five agencies we selected for review: ferrying clients, conveying repair equipment, hauling explosive materials, and transporting employees, among others (see table 1). Agencies may own or lease the vehicles in their fleets and are responsible for managing their vehicles’ utilization in a manner that allows them to fulfill their missions and meet various federal requirements. For example, agencies determine the number and type of vehicles they need to own or lease and when a vehicle is no longer needed to achieve the agency’s mission. Statutes, executive orders, and policy initiatives direct federal agencies to, among other things, collect and analyze data on costs and eliminate non-essential vehicles from their fleets. For example, every year agencies provide an update on their progress in achieving the inventory goals determined by their Vehicle Allocation Methodology (VAM), such as the type and number of vehicles in their fleets. These updates are reviewed by GSA’s Office of Government-wide Policy (OGP), which provides feedback on agencies’ submissions. Federal provisions on vehicle justifications and determining what makes a vehicle “utilized” are detailed in the Federal Property Management Regulations (FPMR). Specifically, the FPMR describe how agencies can define utilization criteria for the vehicles that they use. According to GSA’s OGP, the only requirement in the utilization portion of the regulations is for agencies to justify every full-time vehicle in their respective fleets, though the regulations do not specify how these justifications should be conducted. The FPMR recommend—but do not require—an annual mileage minimum of 12,000 miles for passenger vehicles and 10,000 miles for light trucks.
However, according to GSA officials, mileage is not the only appropriate indicator of utilization for some vehicles’ missions. For example, GSA officials stated that it would be inappropriate to set a mileage expectation for an emergency responder vehicle or a vehicle that supports national security requirements because those vehicles are only needed in specific circumstances and may not accrue many miles. Thus, the FPMR state that the aforementioned mileage guidelines “may be employed by an agency… other utilization factors, such as days used, agency mission, and the relative costs of alternatives to a full time vehicle assignment, may be considered as justification where miles traveled guidelines are not met.” Therefore, according to GSA officials, agencies are allowed to define their own utilization criteria, which may include adopting the miles-traveled guidelines from the FPMR, using mileage minimums above or below the FPMR guidelines, or employing other metrics. According to GSA officials, agencies may choose to define their selected utilization criteria in their internal policies, and vehicles meeting those criteria would be considered justified under the regulations. However, if a vehicle does not meet the utilization criteria specifically described in agency policy, the FPMR permit agencies to individually justify a vehicle using criteria the agency finds appropriate for that specific vehicle. The regulations do not specify the frequency with which the justifications (either as determined by agency policy or individually determined) must be conducted, updated, or reviewed. Agencies decide what vehicles are needed to meet their missions at any given point in time. While GSA provides guidance, the ultimate decision-making power lies with the agency leasing the vehicle. Federal agencies can use GSA Fleet to acquire leased vehicles. According to GSA, under this arrangement an agency informs GSA Fleet what kind of vehicle is necessary for its mission.
GSA Fleet fulfills the agency’s request by either purchasing a new vehicle (owned by GSA but leased to the agency) or providing a vehicle from GSA’s existing inventory (owned by GSA and previously leased to another agency). GSA Fleet’s primary mission is to provide the “best value” to its customers and the American people. GSA Fleet’s leasing rates are designed to recover all costs of its leasing program, but the exact cost of a lease depends on the type of vehicle and the number of miles traveled during the lease period, among other factors. For example, a conventionally fueled subcompact sedan has a 2015 fixed rate of $153 per month and a mileage rate of $0.13 per mile traveled. GSA Fleet’s fixed rate is designed to cover fixed costs such as GSA Fleet staff and vehicle depreciation, whereas the mileage rate is designed to cover variable costs such as fuel and maintenance. Agencies are responsible for any costs associated with damage or excessive wear and tear over the course of the lease—typically 3-7 years for a passenger vehicle. We previously reported that, according to GSA officials and fleet managers from military and civilian fleets, GSA Fleet’s vehicle lease rates are typically lower than the commercial sector’s and provide a more economical choice for federal agencies. GSA Fleet collects data on leased vehicles to assist with billing as well as help agencies manage their leased-vehicle fleets. GSA Fleet’s Fleet Management System (FMS) contains most of these data. The portal used by agencies to access the data in GSA’s FMS is called Drive-thru. Drive-thru offers a suite of applications, including tools to analyze crash data and report mileage. As Drive-thru is the primary portal through which customers can access GSA’s leasing data, some customers refer to the underlying database as Drive-thru as well.
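The two-part rate structure described above, a fixed monthly fee plus a per-mile fee, can be illustrated with a short calculation. The $153 and $0.13 figures are the report's 2015 example for a conventionally fueled subcompact sedan; the function itself is only a sketch, not GSA's billing logic.

```python
def monthly_lease_cost(fixed_rate: float, mileage_rate: float, miles_driven: int) -> float:
    """GSA Fleet-style two-part charge: the fixed rate covers costs such
    as staff and vehicle depreciation; the mileage rate covers variable
    costs such as fuel and maintenance."""
    return fixed_rate + mileage_rate * miles_driven

# 2015 example rates for a conventionally fueled subcompact sedan:
# $153 per month fixed plus $0.13 per mile. Driving 1,000 miles in a
# month would cost $153 + $130 = $283.
cost = monthly_lease_cost(153.00, 0.13, 1_000)
```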
While Drive-thru is the name of the exterior-facing access portal rather than the database itself, we will refer to the database as Drive-thru for the purposes of this report to reflect the language commonly used by GSA’s leasing customers. Drive-thru stores hundreds of data elements on each vehicle, including manufacturer-provided information such as make, model, and fuel efficiency; agency-reported data such as monthly mileage; and data obtained through fleet cards (charge cards) such as quantity and type of fuel purchased. Agencies can import information from Drive-thru into their own internal fleet management systems and, according to multiple agency officials, generally rely on GSA Fleet to ensure Drive-thru’s accuracy, as identifying and correcting erroneous data can be time consuming and difficult. However, agencies can change the data they receive from Drive-thru after data enter an agency’s internal fleet management system but before they are externally reported. GSA’s OGP co-manages and co-funds a web-based reporting tool—the Federal Automotive Statistical Tool (FAST)—with the Department of Energy (DOE). FAST gathers data from federal agencies about their owned and leased vehicles to satisfy a variety of federal-reporting requirements, including the annual Federal Fleet Report. According to the Office of Management and Budget (OMB), it is the leasing agencies, not GSA or DOE, which are responsible for the accuracy of the data agencies report to FAST. As a result, while GSA’s OGP helps compile the information from FAST that populates the Federal Fleet Report, the accuracy of the Federal Fleet Report is dependent on the accuracy of the data that agencies report to FAST. The Federal Fleet Report provides an overview of federal motor vehicle data, such as number of vehicles and related costs. 
A comparison of the reports from fiscal years 2012 through 2014 shows that the overall quantity of leased vehicles varies slightly from year to year, but the costs have consistently decreased. For example, in fiscal year 2013, federal agencies leased 183,989 vehicles at a cost of approximately $1.06 billion. In fiscal year 2014, federal agencies leased slightly more vehicles— 186,214—but the costs dropped to $1.03 billion, as shown in table 2. GSA officials explained that the cost reduction is attributable in part to agencies’ decisions to lease smaller, less expensive vehicles. Although GSA collects and reports information on leased vehicles, GSA does not have responsibility for tracking how agencies use vehicles or identifying underutilized vehicles. Nevertheless, some of the services that GSA Fleet provides are related to utilization. For example, to help streamline customers’ vehicle leasing experiences, in 2014 GSA employed approximately 330 liaisons called Fleet Service Representatives (FSR). FSRs are expected to answer local customers’ questions about vehicle acquisition, provide assistance when vehicles need services, and help customers understand the various leasing terms and products offered by GSA Fleet. According to GSA Fleet, FSRs should discuss utilization with leasing customers at least annually as part of other business discussions. We found the data we reviewed in Drive-thru to be generally reliable as GSA has taken steps to ensure that the data are reasonable, although a few data elements have indications that those data could be more accurate. While GSA is not responsible for the accuracy of data in FAST, it has taken appropriate steps to ensure the data are reasonable. GSA is responsible for ensuring that the information that it is providing to customers in Drive-thru is reliable (i.e., both reasonable and accurate). 
It is important that data in Drive-thru are reliable because reports that are generated via Drive-thru represent a service that GSA is directly providing to customers to help them manage their fleets. Agencies also use Drive-thru when fulfilling federal fleet reporting requirements. For example, agencies can download a report about their leased vehicles from Drive-thru. The report can then be directly uploaded into FAST to meet annual reporting requirements on the leased fleet’s size and costs. Incorrect data in Drive-thru can therefore hinder agencies’ abilities to manage their leased fleets or could compromise the integrity of federal reports. A basic test of reliability is whether the data are reasonable. Using the guidance provided in three key sources, we developed an analytical framework for measuring the “reasonableness” of data, as there is currently no universally accepted standard for such a measurement. Each of these key sources discusses three topics, which we use as our standard for reasonableness of data: (1) electronic safeguards, such as error messages for out-of-range or inconsistent entries; (2) a review of data samples to ensure that key fields are non-duplicative and sensible; and (3) clear guidance to ensure consistent user interpretation of data entry rules. Based on the data we reviewed, we found that GSA has taken appropriate steps to ensure the selected Drive-thru data are reasonable. Specifically, GSA uses electronic safeguards when data are entered into Drive-thru. For example, error messages appear if a user enters an odometer reading such as 12345, 99999, 00000, or 654321, or a reading that differs by 9,999 miles or more from the previous month’s entry. Similarly, GSA uses a validation program to catch vehicle identification number (VIN) entry errors. VIN barcodes are scanned into GSA’s system unless they must be manually entered due to barcode damage.
For both scans and manual entries, software validates that the entered VIN meets the check digit calculation. In addition, GSA verifies some data during reconciliations and other post-entry checks. For example, customer mileage entries are routinely monitored by GSA’s Loss Prevention Team (LPT) for abnormal inputs. If entries for a specific vehicle are consistently nonsensical, the LPT reviews the activity for signs of fraud and, if likely fraudulent, forwards the case to the appropriate Inspector General’s office for investigation. For entries that are consistently nonsensical but are not likely fraudulent, the LPT notifies the designated FSR for follow-up with the customer. The FSR is then tasked with emphasizing to the customer the importance of entering valid odometer readings in the future. Lastly, GSA reported that it provides guidance on how to enter vehicle-related information into Drive-thru to the people who are responsible for entering different types of data. Generally, information about the vehicle itself is the responsibility of GSA or its agents (such as contractors—known as “marshallers”—who enter manufacturer-provided data at the time GSA receives the vehicle). GSA provides a handbook to marshallers that explains how the marshallers should use the software that collects information and transmits it to GSA’s system. Similarly, GSA provides a Drive-thru guide to customers that explains how customers should enter certain types of information into Drive-thru; however, GSA does not provide instructions regarding how customers should inform GSA if their contact information will change. The lack of such guidance may have been a contributing factor in the inaccuracies we found in the customer contact data, as discussed in the next section on indications of accuracy in Drive-thru data; however, according to GSA officials, planned changes to GSA’s customer ID protocols will remove the need for such guidance in the future.
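The two electronic safeguards described above can be sketched as follows. The suspect odometer values and the 9,999-mile window are taken from the report; the VIN validation assumes that GSA's "check digit calculation" is the standard North American check-digit algorithm, which the report does not specify.

```python
# Sketch of Drive-thru's entry safeguards as described in the report.
# Suspect odometer values and the 9,999-mile window come from the report;
# the VIN check-digit algorithm is the standard North American one and is
# our assumption about what GSA's validation program implements.

SUSPECT_READINGS = {"12345", "99999", "00000", "654321"}
WARNING_DELTA = 9_999  # mileage difference that triggers a warning

def odometer_entry_warning(entry: str, previous_reading: int) -> bool:
    """Return True if the entry should produce an error message."""
    if entry in SUSPECT_READINGS:
        return True
    return abs(int(entry) - previous_reading) >= WARNING_DELTA

# Character-to-value transliteration and position weights for the
# standard VIN check-digit calculation (I, O, and Q never appear in VINs).
_VIN_VALUES = {c: v for c, v in zip("ABCDEFGH", range(1, 9))}
_VIN_VALUES.update(zip("JKLMN", range(1, 6)))
_VIN_VALUES.update({"P": 7, "R": 9})
_VIN_VALUES.update(zip("STUVWXYZ", range(2, 10)))
_VIN_VALUES.update({str(d): d for d in range(10)})
_VIN_WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def vin_check_digit_ok(vin: str) -> bool:
    """Validate the check digit in position 9 of a 17-character VIN."""
    if len(vin) != 17 or any(c not in _VIN_VALUES for c in vin):
        return False
    total = sum(_VIN_VALUES[c] * w for c, w in zip(vin, _VIN_WEIGHTS))
    remainder = total % 11
    expected = "X" if remainder == 10 else str(remainder)
    return vin[8] == expected
```

A check of this kind catches single-character typos in a manually entered VIN, which is why it is useful when barcode damage forces manual entry.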
A second test of data reliability is accuracy; however, we tested for indications of accuracy in the data, as verifying the data accuracy itself would have required extensive examination of individual vehicles, which was beyond the scope of this review. We performed tests on a selection of nearly two dozen Drive-thru data elements from May 2015 for selected vehicles and determined that there are numerous indications of accuracy associated with the data we reviewed. For example: Almost 100 percent of 9 vehicle inventory fields, including make, manufacturer name, fuel type, VIN, and model year, have no missing data; one vehicle was missing the manufacturer name. Three entries indicated the presence of a luxury manufacturer entry (all for Audi), an error rate of less than one hundredth of one percent. Only 0.07 percent of records for sedan fuel tank sizes exceeded 20 gallons; although sedan fuel tank sizes vary and can change from year to year, few midsize sedans have 20-gallon tanks, so fuel tanks larger than 20 gallons might indicate a data error. Despite the overall indications that the selected Drive-thru data are accurate, there are three areas where we found indications that the data may be less accurate than the other information we studied: fuel type coding, odometer entries, and customer contact data. According to federal internal controls standards, data collection applications—including electronic safeguards such as logic and edit checks—should ensure that all inputs are correct in order to facilitate accountability and effective stewardship of government resources. First, we found that while most fuel-type-coding data appear to be accurate, gas stations coded pumps incorrectly in at least some cases from January through April 2015, and possibly in as many as 46 percent of cases. For example, drivers of vehicles with E-85 fuel types were reported to have purchased compressed natural gas or biodiesel.
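One way to screen for pump miscodings of this kind is to flag transactions whose charged fuel type is impossible for the vehicle. The sketch below is illustrative only: the compatibility table and fuel labels are our own assumptions, not GSA's fuel codes.

```python
# Illustrative fuel-compatibility table; the categories and labels are
# our own, not GSA's. Flex-fuel vehicles legitimately match two codes,
# which is why ambiguous purchases cannot be classified as miscodings.
COMPATIBLE_FUELS = {
    "gasoline": {"gasoline"},
    "diesel": {"diesel", "biodiesel"},
    "flex_fuel": {"gasoline", "E85"},
}

def is_definite_miscoding(vehicle_fuel_type: str, fuel_charged: str) -> bool:
    """Return True when the charged fuel cannot match the vehicle."""
    return fuel_charged not in COMPATIBLE_FUELS.get(vehicle_fuel_type, set())
```

Under this screen, an E-85 flex-fuel vehicle charged for compressed natural gas would be flagged as a definite miscoding, while a gasoline charge on the same vehicle is ambiguous and would pass, which is why the report could only bound, not pin down, the miscoding rate.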
We were not able to determine the precise number of instances where fuel had been miscoded because some vehicles use more than one type of fuel. For example, “flex fuel” vehicles can operate on either regular gasoline or an alternative fuel known as E-85, which is a blend of gasoline and ethanol. Given the data available, we were not able to determine which fuel the user actually selected and were thus unable to determine which purchases were coded incorrectly by the gas station. The high end of the error range (46 percent) would mean that every uncertainty was resolved as a fuel-pump-coding error by the gas station, an outcome that GSA officials said was extremely improbable. These officials noted that they believed the actual error rate was substantially lower. However, GSA officials agreed that pump miscodings compromise data accuracy and noted that GSA has worked with fueling station owners and relevant associations to reduce fuel pump miscodings. Nevertheless, GSA officials stated that their ability to effect change is highly limited, as the miscodings occur at the point of sale and there is no incentive for the fueling stations to correct the miscodings. In addition to fuel type miscodings, we found that 3 percent of monthly odometer entries in May 2015 were lower than the previous month’s odometer reading. An odometer reading that decreases from one month to the next indicates that there was an error at some point in time—either the previous month’s entry was too high, or the current month’s entry is too low. Monthly odometer readings are supplied by agencies as part of the billing process, and odometer errors result in temporary billing errors as agencies pay additional fees based on mileage. GSA officials stated that they cannot be certain of a vehicle’s odometer reading until the vehicle is returned to them at the end of the leasing period and that they typically depend on the leasing agency to correctly report the odometer readings.
According to GSA officials, as part of the monthly odometer-data collection process GSA’s system warns users that they may have entered incorrect data if the reported odometer reading is 9,999 miles greater than or less than the previous month’s odometer reading. Users would then be able to correct the data before submitting them to GSA. GSA officials stated that they chose the 9,999-mile warning point because they did not want the system to generate cautionary messages to customers when there was a valid reason for the mileage difference. The officials explained that there are legitimate reasons why the previous month’s odometer reading might be higher than the current month’s reading. For example, if the agency relied on GSA to estimate mileage in the previous month and the estimate was too high, the agency’s correction in the current month could result in a lower odometer reading. GSA officials said that they did not want the system to incorrectly flag these instances, and that they have no plans to evaluate the current safeguard. However, using such a large mileage difference to trigger a warning means that GSA may be unlikely to catch the majority of errors. We found 52 cases where the mileage difference was 9,999 miles or greater, but more than 4,800 cases where the previous month’s odometer reading exceeded the current month’s reading. We also found that the average monthly odometer difference for our selected vehicle data is 564 miles per month, with 95 percent of vehicles driving less than 2,482 miles per month, as shown in table 3. Although the resulting billing errors can be resolved the following month and the overall error rate is low, resolutions take time and resources for both GSA and the customer. Evaluating the current warning and adjusting it accordingly could help improve the accuracy of the data and therefore help reduce these costs, and GSA officials stated that changing the existing safeguard would not be costly.
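To illustrate why the 9,999-mile window catches so few of the decreasing-odometer errors, the sketch below counts, over a series of monthly readings, how many month-to-month pairs trip the existing warning versus how many show a simple decrease. The sample readings are ours, not GSA data.

```python
def count_flags(readings, warning_delta=9_999):
    """Count month-to-month pairs flagged by the existing 9,999-mile
    warning versus pairs where the odometer simply decreased."""
    big_jumps = decreases = 0
    for prev, curr in zip(readings, readings[1:]):
        if abs(curr - prev) >= warning_delta:
            big_jumps += 1
        if curr < prev:
            decreases += 1
    return big_jumps, decreases

# Illustrative readings: the 150-mile decrease (10,550 -> 10,400) is
# clearly an error yet escapes the 9,999-mile window; only the
# 11,100-mile jump (10,400 -> 21,500) trips the existing warning.
big_jumps, decreases = count_flags([10_000, 10_550, 10_400, 21_500, 22_000])
```

A second check that flags any decrease, alongside the existing large-jump warning, would catch the roughly 4,800 decreasing entries the report describes while still permitting legitimate downward corrections to be confirmed by the customer.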
Further, GSA’s edit check for odometer readings is not consistent with federal internal control standards that call for agencies to pursue data accuracy when possible and cost-effective. Lastly, we found that customer contact data, such as the name and e-mail of the individual whom GSA should contact for vehicle-related services, are not always correct. As mentioned previously, GSA’s customer-leasing guide does not provide guidance regarding how customers should proceed if the vehicle’s point of contact will change. In addition, according to GSA officials, the customer ID number—which is how customers sign in to Drive-thru—is associated with the customer’s fleet, not the customer points of contact themselves. As a result, customer contact data are updated manually by FSRs after FSRs detect a problem, such as a returned e-mail after the previous point of contact leaves the agency. Several FSRs stated that the manual updates are time-consuming. Moreover, one FSR we interviewed stated that the current process relies on the initiative of FSRs to ensure accuracy. Without accurate customer contact data, it is more difficult for FSRs to communicate with agencies about vehicles, including whether certain vehicles are still needed. Two FSRs stated that turnover in customer agency fleet management is high. Such turnover exacerbates the difficulty associated with maintaining the accuracy of these data. According to GSA officials, planned changes to Drive-thru in 2016 will resolve this issue, as customer IDs will no longer be assigned to a fleet. Rather, each customer will have an individual user account, profile, and password. In addition, the customer ID will be the individual customer’s e-mail address instead of a number, a step that GSA officials anticipate will resolve the difficulties associated with updating the user contact information.
GSA is not responsible for the accuracy of data reported to FAST, a data collection system that GSA co-manages with DOE. Rather, OMB’s Circular A-11 provides that agencies are responsible for reviewing and correcting fleet data prior to submitting them through FAST. However, GSA’s OGP has a role in ensuring the reasonableness of FAST data as a partner in the FAST management team. In this role, GSA focuses on data relevant to fleet management, such as overall inventory, cost, and utilization metrics. We found that GSA’s OGP has taken appropriate steps to ensure the fleet management data reported to FAST are reasonable. Specifically, (1) GSA is aware of the electronic safeguards built into FAST for fleet management data; (2) GSA examines some of the data after they are submitted by agencies and flags entries for correction; and (3) GSA provides guidance to agencies on how to properly enter information into FAST. According to GSA, it shares responsibility with DOE for implementing and managing electronic safeguards for FAST. GSA and DOE collaborate to implement logic checks, which both parties use to determine the reasonableness of the data. We also found that GSA has a process for reviewing data after they are entered by an agency. If, for example, a significant increase in a specific type of fuel use is not matched by a similar increase in inventory, mileage, or cost, then GSA flags the data for verification with the agency. While it is not known how often GSA finds entries that it recommends for agency review, GSA reported that during both the 2013 and 2014 FAST reporting cycles, a few agencies experienced difficulties that required GSA to help resolve data issues (for example, re-opening FAST after the close of the data call).
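A post-submission check of the kind described above, in which a jump in reported fuel use must be corroborated by inventory, mileage, or cost, might look like the following sketch. The 20 percent threshold and the field names are our own illustrative assumptions, not GSA's actual logic checks.

```python
def pct_change(old: float, new: float) -> float:
    """Fractional year-over-year change; treat growth from zero as infinite."""
    return (new - old) / old if old else float("inf")

def flag_for_agency_review(prev_year: dict, curr_year: dict, threshold: float = 0.20) -> bool:
    """Flag a FAST submission when a significant fuel-use increase is not
    matched by a similar increase in inventory, mileage, or cost.
    Both dicts hold 'fuel_use', 'inventory', 'mileage', and 'cost'.
    The 20 percent threshold is an illustrative assumption."""
    if pct_change(prev_year["fuel_use"], curr_year["fuel_use"]) < threshold:
        return False  # no significant fuel increase to explain
    # A significant fuel increase needs at least one corroborating
    # increase in a related metric; otherwise flag it for the agency.
    return not any(
        pct_change(prev_year[k], curr_year[k]) >= threshold
        for k in ("inventory", "mileage", "cost")
    )
```

For instance, a 30 percent jump in fuel use with flat inventory, mileage, and cost would be flagged for verification with the agency, whereas the same jump alongside a comparable inventory increase would pass.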
Lastly, we found that GSA provides guidance to agencies on how to properly enter information into FAST in a variety of formats, including (1) written instructions to users, (2) written instructions to administrators, (3) presentations at quarterly meetings, (4) one-on-one sessions with individual agencies upon request, (5) online demonstrations, and (6) official guidance in the form of Federal Management Regulation (FMR) bulletins. GSA has a limited role in identifying and reducing underutilized leased vehicles, as agencies are responsible for managing their vehicle fleets. GSA is not responsible for monitoring agencies’ vehicle utilization policies. Rather, according to GSA officials, GSA focuses on providing guidance and advice to federal agencies on utilization by (1) developing written guidance and reviewing agencies’ Vehicle Allocation Methodology (VAM) update submissions and (2) holding conversations with federal agencies’ fleet managers about vehicle utilization at least annually. GSA’s OGP provides written guidance in the form of bulletins to federal agencies to implement legislation, executive orders, and other directives, but agencies are not legally required to follow this guidance. For example, in May 2011, a Presidential Memo (implementing a 2009 Executive Order) required GSA to develop and distribute VAM guidance to federal agencies for determining their optimum fleet inventory. In response, GSA provided such guidance to agencies in August 2011. Specifically, the guidance directed agencies to survey the utilization of vehicles each year, but agencies were not required to follow the guidance and some agencies chose to continue using their existing processes even though those processes differed from the GSA guidance. For example, some agencies’ fleet managers (including those from NASA, according to NASA officials, and those from the U.S.
Navy, according to GSA officials) believed that the processes they already had in place fulfilled the intention of the guidance. In addition to providing written guidance, GSA has voluntarily reviewed utilization information covered in agencies’ VAM update submissions and has sometimes made broad recommendations to agencies based on those reviews. For example, in the 2014 VAM review, GSA recommended that all executive federal agencies establish and document specific vehicle utilization criteria for motor vehicle justification, that the criteria be reviewed at least annually, and that action be taken when underutilized vehicles are identified. GSA officials told us that another aspect of the agency’s role in identifying and reducing underutilized leased vehicles is to provide advice to federal agencies’ fleet managers at least annually through conversations about utilization. According to GSA officials, this advisory role is intended to help the federal government save money by providing agencies with support needed to make wise business decisions. In addition, GSA officials explained that during conversations with fleet managers, FSRs might discuss the agency’s overall fleet size, vehicle replacement options, or may suggest that a larger vehicle is no longer needed when a smaller one will suffice. For example, one NASA fleet manager told us that his FSR coordinated the exchange of two larger vehicles in his fleet for two smaller vehicles for the purposes of downsizing and reducing fuel consumption. To improve our understanding of these utilization conversations and to examine their usefulness, we sent a non-generalizable survey to 68 fleet managers for our five selected federal fleets. While the responses are not representative of either the experiences among our five selected agencies or the federal fleet as a whole, they do provide insight into activities that are otherwise undocumented. 
Fifty-one fleet managers responded, with the majority of them (41) reporting either having decision-making authority or collaborating with their supervisor to make decisions about vehicle acquisition and disposal. Of the 41 respondents with a role in the vehicle acquisition and disposal decision-making process, 27 responded that their FSR has communicated with them about leased-vehicle utilization based on mileage. The majority of those decision-makers—25 of the 27—said that these communications were moderately to extremely useful in helping them to manage their leased-vehicle utilization based on mileage. However, 18 of the 51 overall respondents (including 14 of the 41 respondents with an acquisition and disposal decision-making role) said that they had never discussed utilization based on mileage with their FSR. GSA’s management told us that it believes these conversations are occurring, but may not include the word “utilization,” a situation that could explain, in part, why some of our survey respondents reported never having discussed utilization with their FSR. According to GSA officials, the expectation is inherent to the role of the FSR and is made clear to them through training. However, we found indications that not all FSRs are discussing utilization with agency fleet managers. GSA’s management does not have a mechanism to help ensure that these conversations are occurring as expected. As a result, GSA may not be able to identify opportunities for FSRs to better assist agencies in identifying and managing their underutilized leased vehicles. Establishing such a mechanism would be consistent with federal internal control standards, which state that agencies should have reasonable assurance that employees are carrying out their duties and that feedback is provided in the event that expectations are not met.
While GSA generally focuses on providing guidance and advice, it has regulatory authority to repossess federal agencies' leased vehicles in some instances, including cases where agencies cannot produce justification for a vehicle. Specifically, the FPMR state that if GSA requests justification for a vehicle, agencies must provide it. If the agency does not provide justification for that leased vehicle, GSA may withdraw the vehicle from further agency use. GSA officials told us that GSA does not exercise this authority because reviewing these justifications would impose a significant cost and time burden on GSA. Some of the agencies we reviewed could not determine if vehicles met utilization criteria, could not provide justifications for vehicles, or kept vehicles that they had determined were not needed. In total, we identified shortcomings in agency processes that affected leased vehicles with an annual cost of approximately $8.7 million. While the FPMR provide general mileage guidelines that can be used as criteria for vehicle utilization (12,000 miles per year for passenger vehicles and 10,000 miles per year for light trucks), they also authorize agencies to develop their own criteria to determine vehicle utilization where miles-traveled guidelines are not appropriate. GSA officials stated that most vehicles will not meet these guidelines and that agencies are expected to adopt criteria that reflect their vehicles' missions. The agencies in our review used a wide variety of utilization criteria, as shown in table 4. One of the five agencies, BIA, uses the FPMR mileage guidelines as its criteria. Three other agencies (Air Force, NPS, and VHA) use the FPMR mileage guidelines for some (but not all) vehicles. NASA does not use the FPMR guidelines as criteria; it uses miles-traveled criteria that are lower than the FPMR guidelines. Analyzing the appropriateness of each agency's utilization criteria was beyond the scope of this report.
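As an illustration of how mileage-based criteria can be applied, the following sketch (hypothetical data; not GSA or agency code) flags vehicles that fall short of the FPMR miles-traveled guidelines and would therefore require justification in another manner:

```python
# Illustrative sketch: applying the FPMR miles-traveled guidelines --
# 12,000 miles/year for passenger vehicles and 10,000 miles/year for
# light trucks -- to flag vehicles for further review. Fleet data are
# hypothetical.
FPMR_GUIDELINES = {
    "passenger": 12_000,    # miles per year
    "light_truck": 10_000,  # miles per year
}

def meets_fpmr_guideline(vehicle_type: str, annual_miles: int) -> bool:
    """Return True if the vehicle meets the FPMR miles-traveled guideline.

    A vehicle that falls short is not automatically unneeded; under the
    FPMR it must instead be justified in another manner.
    """
    threshold = FPMR_GUIDELINES.get(vehicle_type)
    if threshold is None:
        raise ValueError(f"no FPMR guideline for vehicle type {vehicle_type!r}")
    return annual_miles >= threshold

fleet = [
    ("passenger", 14_250),
    ("passenger", 3_800),
    ("light_truck", 10_400),
]
flagged = [v for v in fleet if not meets_fpmr_guideline(*v)]
print(flagged)  # vehicles needing separate justification
```

As the report notes, agencies may substitute their own criteria where these guidelines are not appropriate, so a check like this is only the first step of a utilization review.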
According to GSA officials, all utilization criteria, including mileage criteria below FPMR guidelines, are allowed under the FPMR. While three of our five selected agencies use mileage criteria below FPMR guidelines for at least some vehicles, they are not the only agencies doing so. For example, in fiscal year 2013, the Inspector General (IG) for the Department of Energy (DOE) found that one DOE facility used 2,460 miles per year, an average of 205 miles per month, as its utilization criterion. Agencies provided a variety of explanations for the utilization criteria they selected:

- Air Force officials stated that their vehicles serve very diverse mission needs. To ensure they have the right vehicle for each mission need, they developed a software algorithm with over 2,600 criteria that are not all utilization-based; some criteria include the cost of alternatives and the criticality of a vehicle's contribution to the mission.
- According to BIA officials, the FPMR's miles-traveled guidelines are appropriate utilization criteria for their fleet because their vehicles typically travel long distances across remote areas to meet their mission.
- NPS officials stated that they used the FPMR's miles-traveled guidelines as criteria for leased vehicles because the criteria provide the right metrics to meet department needs.
- VHA uses the FPMR's miles-traveled guidelines as well as other miles-traveled metrics and days per month as utilization criteria, which an official said reflects the agency's mission of delivering health care. Vehicles need to meet only one criterion to be considered utilized.
- NASA uses miles-traveled utilization criteria that are lower than the FPMR miles-traveled guidelines. NASA policy requires each NASA center to set utilization criteria at 25 percent of the average miles traveled for each vehicle type at that center (see app. II for a list of NASA utilization measurements by center).
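NASA's center-level rule lends itself to a simple worked example. The sketch below (hypothetical mileage figures; not NASA code) computes a center's criterion as 25 percent of the average annual miles for one vehicle type:

```python
# Hypothetical sketch of NASA's policy: each center's utilization
# criterion is 25 percent of the average miles traveled for each
# vehicle type at that center. Mileage figures below are invented.
from statistics import mean

def center_criterion(annual_miles_by_vehicle):
    """Return the center's miles-traveled criterion for one vehicle type."""
    return 0.25 * mean(annual_miles_by_vehicle)

sedans = [9_000, 11_000, 4_000, 1_500, 13_500]  # annual miles per sedan
threshold = center_criterion(sedans)            # 0.25 * 7,800 = 1,950 miles
underutilized = [m for m in sedans if m < threshold]
print(threshold, underutilized)  # 1950.0 [1500]
```

Because the threshold is relative to the center's own average, it can sit well below the FPMR's 12,000-mile guideline, which is why criteria of this kind draw scrutiny in fleet reviews.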
NASA officials stated that they believe this approach is an acceptable business practice, which the agency has used for more than 20 years. We found that 71 percent of the vehicles we selected from the five agencies met these agency-defined criteria, as shown in table 4. For two agencies, NASA and VHA, we found that the agencies' processes for managing utilization data did not always facilitate the identification of underutilized leased vehicles, although both agencies have taken steps to rectify the identified issues. Specifically, we found:

- NASA did not apply its utilization criteria to 41 vehicles at its Armstrong Flight Research Center because, according to NASA officials, the center's transportation officer retired in 2013 and the replacement did not apply utilization criteria in fiscal year 2014. Without utilization criteria, the center could not determine which of its vehicles were utilized in fiscal year 2014. The agency paid approximately $137,000 for these vehicles in fiscal year 2014. According to NASA officials, the center's transportation officer conducted a utilization analysis for these vehicles in fiscal year 2015, and the center will continue to follow NASA policy in the future.
- VHA did not safeguard vehicle utilization data at one VHA medical center, where a new employee deleted vehicle utilization data from 2008-2014. This prevented the agency from determining whether 343 vehicles had met the utilization criteria in fiscal year 2014. The agency paid more than $1.1 million to GSA in fiscal year 2014 for these vehicles. A VHA official said the agency was previously unaware that vehicle utilization data from that medical center had been deleted from the Fleet Management Information System (FMIS) and that it has counseled the employee responsible regarding the error to ensure that the data are retained in the future.
If vehicles do not meet utilization criteria defined in agency policy, the FPMR provide that agencies must justify the vehicles in another manner. The FPMR do not specify how agencies should conduct these justifications or how the justifications should be documented. While the FPMR state that agencies may be required to provide written justification, the regulations do not require agencies to clearly document the justifications before a request for such documentation is made. Federal internal control standards state that all transactions and significant events need to be clearly documented and that the documentation should be readily available for examination. Four of the five agencies in our review could not readily provide justifications for vehicles that had not met utilization criteria defined in agency policy. Cumulatively, these agencies spent approximately $5.8 million in fiscal year 2014 on vehicles for which individual justifications could not be located in a timely manner, as shown in table 5 below. Without readily available documentation, the agencies could not determine whether they had justified these vehicles and whether any of these vehicles should be eliminated from agency fleets. Air Force officials could not readily provide the justifications for 413 vehicles that did not meet the utilization criteria in its software algorithm. The agency paid $1.5 million to GSA in fiscal year 2014 for these vehicles. According to officials, vehicles that do not meet the utilization criteria in the Air Force's algorithm are subject to the agency's justification process, the results of which are stored in the agency's FMIS. However, we found that the Air Force's FMIS does not include information on all agency vehicles. Agency officials said justifications for these 413 vehicles are not stored in the Air Force's FMIS and would be difficult to locate because these vehicles are used by the Air National Guard, which has its own justification process.
However, Air Force is administratively responsible for these vehicles, according to agency officials. BIA officials could not readily provide the justifications for 282 vehicles that did not meet utilization criteria. The agency paid $1.2 million to GSA in fiscal year 2014 for these vehicles. According to these officials, justifications are documented via e-mail, and it would be very challenging to search e-mail for these records as there was no universal format. Moreover, BIA officials said some of the justifications were reviewed by a fleet manager who left the agency, and they were unsure how to retrieve records from that individual’s e-mail account. Interior officials stated they will replace BIA’s e-mail process with a standardized form accessible through Interior’s FMIS in fiscal year 2016. NASA was able to provide the justifications for all of its vehicles where it applied utilization criteria and the criteria were not met. NASA policy requires NASA centers to use Vehicle Utilization Review Boards (VURB) to approve or deny justifications for vehicles that do not meet utilization criteria. All vehicles that are reviewed by VURBs have an individual justification form, and all VURBs submit a summary document of their reviews to headquarters officials. NPS officials could not readily provide justifications for 645 vehicles because those justifications were not stored within the agency’s FMIS. The agency paid $2.5 million to GSA in fiscal year 2014 for these vehicles. While NPS designed its justification forms to be stored within Interior’s FMIS, we found none of these forms had been uploaded to the system. In order for NPS officials to determine which of its vehicles had been justified, they would need to locate these 645 forms, which officials said were stored in field offices. Interior officials told us they were unsure why some of NPS’ forms were not stored in the agency’s FMIS but they plan to upload the forms to the system. 
VHA was unable to locate justifications for 181 vehicles for which it had data indicating that the vehicles had not met VHA's utilization criteria. The agency paid $0.6 million to GSA in fiscal year 2014 for these vehicles. According to VHA officials, justifications are stored with local fleet managers and are not readily accessible to headquarters officials. Agency officials said that the justification system was developed to assist local fleet managers and that, previously, it was not necessary for headquarters to access these records. The finding that four of the selected agencies' processes did not allow them to consistently determine which of their vehicles were justified is consistent with the findings of other organizations that have examined agency vehicle fleets. For example, in 2014 the Inspector General (IG) for the Department of Homeland Security (DHS) reported that DHS could not determine whether certain vehicles that did not meet the agency's utilization criteria were justified. The IG estimated DHS's cost to operate these vehicles in fiscal year 2012 was between $35.3 million and $48.6 million. As a result of our review, two of the selected agencies, BIA and NPS, have plans to modify their systems to provide accessible justification documentation. Without readily available justification documentation, agencies are limited in their ability to exercise oversight over vehicle retention decisions, including how many vehicles, if any, should be eliminated. Further, the FPMR do not specifically require that agencies document all of their justifications in writing or store the justifications in a readily accessible location. Federal internal control standards on record keeping and management call for the accurate and timely recording of transactions, such as justification decisions, and for that documentation to be readily available for examination.
We found that without such readily available documentation, four of the five selected agencies in our review could not determine whether they had justified some of their vehicles and whether any of those vehicles should be eliminated from agency fleets. According to GSA officials, the agency has not reviewed the FPMR to determine if the regulations should be amended to be more specific about vehicle justification documentation, and it has no plans to do so. As a result, GSA may be missing an opportunity to help ensure that agencies are appropriately justifying all vehicles in their fleets and determining if their leased-vehicle fleets contain vehicles that should be eliminated. In addition to the vehicles for which agencies could not locate justifications in a timely manner, three agencies kept vehicles that did not pass their justification processes. The FPMR do not require agencies to take any action for unjustified vehicles, which are vehicles that neither meet the agency's utilization criteria nor pass the justification process. However, federal internal control standards call for agencies to be accountable for stewardship of government resources. All five selected agencies have established approaches to address unjustified vehicles, which can include placing them into a shared pool, transferring them to a new mission, rotating them with higher-mileage vehicles, or eliminating them from the fleet. All five selected agencies took actions to reduce the number of vehicles that did not meet utilization criteria or pass the justification process; yet three agencies cumulatively retained more than 500 such vehicles, paying GSA $1.7 million for these vehicles in fiscal year 2014. See table 6. Specifically, we found that: NPS retained 109 vehicles that did not meet agency-defined utilization criteria and did not pass the agency's justification process. The agency paid GSA $0.4 million in fiscal year 2014 for these vehicles.
VHA retained 393 vehicles that did not meet agency-defined utilization criteria and did not pass the agency's justification process. The agency paid $1.3 million to GSA in fiscal year 2014 for these vehicles. VHA policy does not require justification for all vehicles that do not meet utilization criteria. As a result, these 393 vehicles were never subject to a justification process even though they did not meet utilization criteria. VA officials said that returning vehicles to GSA would not lead to cost savings because GSA will continue to charge the agency for a vehicle until a new lessee is found. GSA officials said that GSA continues to charge the leasing agency only in cases where a large number of vehicles are prematurely returned at once. VA officials stated that they do not believe this policy is applied consistently. NASA retained one vehicle that did not meet agency-defined utilization criteria in fiscal year 2014 and did not pass the agency's justification process. NASA officials explained that the vehicle was incrementally removed from service in fiscal year 2015 to ensure that mission requirements would not be negatively affected. NASA has since returned its unjustified vehicle to GSA. While these findings are not generalizable, they are consistent with several findings from agency inspectors general that have reported agencies keeping vehicles even though the vehicles did not meet the agency's utilization criteria or pass the agency's justification process. For example, in 2013 the DOE IG found that one DOE component retained 234 vehicles (21 percent of the component's fleet) even though the vehicles did not meet utilization criteria and users had not submitted justification for their retention. Similarly, in 2015 the DHS IG found that the Federal Protective Service had not properly justified administrative vehicles and spare law enforcement vehicles in its fleet, valued at more than $1 million in fiscal year 2014.
Federal internal control standards call for agencies to be accountable stewards of government resources. However, agency processes do not always require that every vehicle undergo a justification review or that vehicles be removed if they do not pass such a review. Agency processes that do not facilitate the removal of underutilized vehicles hinder agencies' abilities to maintain efficient vehicle fleets. Without processes to ensure that underutilized vehicles are consistently removed, agencies may be foregoing opportunities to reduce the costs associated with their fleets. The cost savings achieved by eliminating unjustified vehicles may be less than the cost paid to GSA because agencies may need to spend resources on alternative means to accomplish the work performed by these vehicles. For example, while an agency would save the monthly cost of leasing an eliminated vehicle, another vehicle in the agency's fleet may need to travel more miles if it performs functions previously performed by the eliminated vehicle, which may increase leasing costs for the remaining vehicle. Nonetheless, by not taking corrective action, agencies could be spending millions of dollars on vehicles that may not be needed. Given the approximately $1 billion spent annually on leased federal vehicles and the government-wide emphasis on good fleet management, it is critical for agencies to have reliable data and sound management practices. While GSA has taken a number of positive steps to assist agencies in managing their fleets, there are more actions it can take. For example, GSA's current 9,999-mile odometer-reading warning allows for large odometer discrepancies before warning users of a potential error, leading to potentially inaccurate odometer readings that can result in inaccurate billing and additional staff time for subsequent correction.
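To make the odometer safeguard concrete, the sketch below (an illustration, not GSA's Drive-thru code) treats the warning threshold as a parameter so the effect of a lower value can be compared:

```python
# Illustrative sketch of a monthly odometer-entry check. GSA's current
# safeguard warns only when an entry exceeds the previous reading by
# 9,999 miles; here the threshold is a parameter so that a lower value
# can be evaluated, as the report recommends.
def check_odometer_entry(previous: int, current: int, warn_threshold: int = 9_999):
    """Return a list of warnings for one month's odometer entry."""
    warnings = []
    if current < previous:
        warnings.append("odometer decreased -- likely data-entry error")
    elif current - previous > warn_threshold:
        warnings.append(
            f"monthly increase of {current - previous:,} miles exceeds "
            f"{warn_threshold:,}-mile threshold -- confirm before billing"
        )
    return warnings

# A transposed digit (12,345 entered as 21,345) produces a 9,445-mile
# jump that slips past the 9,999-mile threshold but is caught by a
# lower one, e.g. 5,000 miles.
print(check_odometer_entry(11_900, 21_345))                        # []
print(check_odometer_entry(11_900, 21_345, warn_threshold=5_000))  # one warning
```

The trade-off in choosing the threshold is between catching more data-entry errors and generating false warnings for genuinely high-mileage months.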
Evaluating the current warning and adjusting it accordingly could help improve data accuracy and thereby reduce these costs. Additionally, while customers report that utilization-related conversations with FSRs are helpful, GSA does not have a mechanism to know the extent to which these conversations are taking place as expected. As a result, GSA may be missing a potential opportunity to help agencies ensure that their leased fleets are optimally sized. Furthermore, while the FPMR provide some guidance to federal agencies on how to justify vehicle utilization, they do not require agencies to have clearly documented justifications available for examination or any mechanism for ensuring that these justifications occur. We found shortcomings in these areas for almost all of the agencies in our review. Additionally, findings from inspectors general have identified similar concerns at other agencies, indicating that a lack of readily available justifications may extend beyond the agencies covered by this review. GSA has not examined these regulations to determine whether they should be amended. As a result, GSA may be missing an opportunity to help ensure that agencies are appropriately justifying all vehicles in their fleets and determining if their leased-vehicle fleets contain vehicles that should be eliminated. In the absence of an FPMR requirement, federal internal control standards can help agencies use their authority to be responsible stewards of government resources. However, because some agencies' processes do not consistently facilitate the identification of underutilized vehicles, these agencies may not know which vehicles should be eliminated. Specifically, without readily accessible written justifications, agencies are limited in their ability to exercise oversight over key vehicle retention decisions for vehicles that cost millions of dollars annually.
Additionally, some agencies have not eliminated or reassigned vehicles that did not meet utilization criteria or pass a justification review. By not taking corrective action, agencies could be spending millions of dollars on vehicles that may not be needed. To help improve the accuracy of Drive-thru data to allow agencies to better manage their leased-vehicle fleet data, we recommend that the Administrator of GSA evaluate the 9,999-mile/month electronic safeguard for Drive-thru odometer readings to determine if a lower threshold could improve the accuracy of customer data and adjust this safeguard accordingly. To provide better assurance that Fleet Service Representatives (FSR) are having conversations with leasing customers about utilization in accordance with GSA expectations, we recommend that the Administrator of GSA develop a mechanism to help ensure that these conversations occur. To help strengthen the leased-vehicle justification processes across federal agencies, we recommend that the Administrator of GSA examine the FPMR to determine if these regulations should be amended to require that vehicle justifications be clearly documented and readily available, and adjust the regulations accordingly. To improve the justification process, we recommend that the Secretary of the Department of Defense direct the Secretary of the Air Force to modify the current process to ensure that each leased vehicle in the agency's fleet meets the agency's utilization criteria or has readily available justification documentation. To improve their justification process, we recommend that the Secretary of the Department of Veterans Affairs direct the Under Secretary for Health to modify the current process to ensure that each leased vehicle in the agency's fleet meets the agency's utilization criteria or has readily available justification documentation.
To facilitate the elimination of unnecessary vehicles, we recommend that the Secretary of the Department of the Interior direct the NPS Director to take corrective action to address each leased vehicle that has not met the agency's utilization criteria or passed the justification process. This corrective action could include (1) reassigning vehicles within the agency to ensure they are utilized or (2) returning vehicles to GSA. To facilitate the elimination of unnecessary vehicles, we recommend that the Secretary of the Department of Veterans Affairs direct the Under Secretary for Health to take corrective action to address each leased vehicle that has not met the agency's utilization criteria or passed the justification process. This corrective action could include (1) reassigning vehicles within the agency to ensure they are utilized or (2) returning vehicles to GSA. We provided a draft of this report to GSA; to the Departments of Defense, Interior, and Veterans Affairs; and to NASA for review and comment. GSA and the Departments of Defense, Interior, and Veterans Affairs provided written comments in which they concurred with our recommendations. These comments are reproduced in appendixes III-VI. NASA provided no comments. In written comments, GSA stated that it agreed with the three recommendations directed to it and is developing a comprehensive plan to address them. In written comments, the Department of Defense (DOD) concurred with the recommendation directed to it and stated that it would publish a policy memorandum in the second quarter of fiscal year 2016 directing DOD fleet managers to ensure that each leased vehicle in the agency's fleet meets agency utilization criteria or has readily available justification documentation. If implemented as planned, this action should meet the intent of the recommendation.
In written comments, Interior concurred with the recommendation for NPS to take corrective action to address each leased vehicle that has not met the agency's utilization criteria or passed the utilization justification process and specified the actions that NPS and BIA are implementing or planning to enhance their leased-vehicle programs. For example, Interior stated that NPS is implementing actions to ensure vehicle justifications reside in the Department's Financial and Business Management System and plans to review the current guidelines to establish reliable and consistent utilization metrics. In addition, Interior stated that NPS plans to develop processes to ensure justifications are on file and to rotate underutilized vehicles to locations that increase the efficiency and effectiveness of its fleet. If implemented as planned, these actions should meet the intent of the recommendation. Interior also stated that BIA is establishing an electronic document repository to ensure accessibility of fleet management documents, transitioning to standard fleet-utilization forms, and conducting a leased-vehicle miles-driven utilization analysis to determine an annual mileage minimum requirement. In written comments, VA concurred with the two recommendations directed to it and specified the actions it has taken or plans to take to address them. Related to the recommendation to modify its current process to ensure that each leased vehicle in the agency's fleet meets the agency's utilization criteria or has readily available justification documentation, VA stated in its letter that VHA agrees that GSA-leased vehicles should either be used frequently enough to meet the agency's utilization criteria or have readily available justification documentation.
VA stated that, subsequent to our review, VHA’s fleet program took action to ensure local fleet management programs correct deficient documentation on vehicles identified in our review that did not meet the agency’s utilization criteria. Specifically, VA stated that VHA’s fleet program requested Veterans Integrated Service Networks to solicit local fleets to justify any vehicles that had insufficient justifying documentation during our review. In addition, to help ensure that local fleet management programs are complying with current documentation requirements and to improve oversight of the programs, VA stated that the Office of Capital Asset Management Engineering and Support would issue written reminders to local fleet programs and monitor and audit utilization reports. VA included a target completion date of January 2017. If implemented as planned, these actions should meet the intent of the recommendation. Related to the recommendation to take corrective action to address each leased vehicle that has not met the agency’s utilization criteria or passed the justification process, VA concurred and stated that this corrective action could include reassigning vehicles within the agency to ensure they are utilized or returning the vehicles to GSA. VA stated that VHA would take corrective action and included a target completion date of January 2017. If implemented as planned, these actions should meet the intent of the recommendation. While VA agreed with our recommendations to address underutilized vehicles, it disagreed with our conclusion that 14 percent of VHA’s leased fleet is “unneeded, costing taxpayers an unnecessary $3 million.” Based on our analysis of VA data, our report found that VHA paid $3 million in fiscal year 2014 for leased vehicles that did not meet utilization criteria and did not have readily available justifications. These vehicles accounted for 14 percent of the selected vehicles in VHA’s leased fleet. 
We did not state that these vehicles were unneeded. We did state, however, that without justifications or corrective actions, agencies could be spending money on vehicles that may not be needed. As discussed above, VA described actions taken subsequent to our review to address some of the issues we identified and also reported in its written comments that the most recent data show that less than 1 percent of VHA's total current leased-vehicle fleet may not be fully utilized. This number reflects two differences from our calculation. First, in general comments on the draft report, VA stated that there are now 381 vehicles for which it cannot determine if the vehicle met utilization criteria, if the vehicle had a justification, or if VA is aware that the vehicle did not meet utilization criteria or have a justification. Based on our analysis, we found 917 such vehicles among VHA's selected leased-vehicle fleet in fiscal year 2014, a difference of 536 vehicles. As described in the report, we analyzed fiscal year 2014 data for the five selected agencies because it was the latest completed fiscal year at the time of our review. We agree that the actions taken subsequent to our review, as well as VHA's planned actions, should address the issues we identified and should meet the intent of the recommendations. However, we have neither reviewed the documentation nor verified the data on which VA's new percentage is based. Second, VA's new figure is a percentage of VHA's entire leased-vehicle fleet, not of the selected leased vehicles that were part of our review. For the five agencies in our review, all of our percentages were calculated as a percentage of the number of leased vehicles selected for review, not of the agency's entire leased-vehicle fleet. As discussed in more detail in the report, we did this to consistently exclude vehicles such as tactical or law-enforcement vehicles. Thus, we continue to believe that our conclusion is valid.
GSA, Interior, and VA also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to interested congressional committees, the Administrators of GSA and NASA, and the Secretaries of the Departments of Defense, Interior, and Veterans Affairs. In addition, this report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-2834 or rectanusl@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. We conducted a review of the utilization of GSA's leased vehicles. This report assesses: (1) the extent to which GSA data on leased vehicles are reliable, (2) GSA's role in identifying and reducing underutilized leased vehicles, and (3) the extent to which the assessment processes used by selected federal agencies facilitate the identification and removal of underutilized leased vehicles, and any cost savings that could be achieved by reducing underutilized vehicles. To determine the extent to which GSA's data for leased vehicles are reliable, we examined the reasonableness of data contained in GSA's internal fleet management database (Drive-thru) and the Federal Automotive Statistical Tool (FAST), a web-based reporting tool co-sponsored by GSA and the Department of Energy. For the purposes of this review, reliability is defined by two key components: reasonableness and indications of accuracy. We also tested a selection of Drive-thru data (reflecting approximately 162,000 vehicles) for indications of accuracy. GSA is responsible for the reasonableness of data in Drive-thru and FAST. We used three key sources to develop a standard for reasonableness, as there is currently no single federal criterion for measuring reasonableness.
The three key sources were (1) prior GAO work that provided guidance on how to assess the reliability of data; (2) OMB's Circular A-123, which defines management's responsibility for internal controls in the federal government; and (3) GAO's Green Book, which provides standards for internal control in the federal government. The key practices in the standard of measurement that we developed for reasonableness are: electronic safeguards, such as error messages for out-of-range or inconsistent entries; the extent to which GSA reviews data samples to ensure that key data fields are non-duplicative and sensible; and the clarity of the guidance that GSA provided to ensure consistent user interpretation of data entry rules. As agencies, not GSA, are responsible for the accuracy of data in FAST, we examined only Drive-thru for indications of accuracy. We focused on approximately two dozen data elements contained in the Fuel Use Report and the Inventory Report, as these related most closely to costs associated with utilization and federal fleet reporting. To this end, we requested data from GSA for all GSA-leased vehicles that were continuously leased by the same agency from January 1, 2015, through May 21, 2015. We requested continuously leased vehicles because we anticipated making month-to-month data comparisons. However, this historical comparison was not feasible because GSA does not store some historical data in its Fleet Management Information System database, which provides information to Drive-thru. Therefore, the inventory data pulled from GSA's database were a "snapshot" of the federal fleet as of May 21, 2015, although the fuel data reflected the months of January through April 2015. Once the data were obtained, we conducted a variety of logic checks to locate any anomalies that might provide insight into the extent to which GSA ensures the accuracy of Drive-thru data.
For example, one of the logic checks we performed on these data involved counting the vehicles for which at least one purchased fuel type over a 4-month period failed to match the vehicle's fuel type (accounting for vehicles that could potentially use more than one fuel type). This logic check was performed to determine how often, if at all, fuel was erroneously coded at the fuel pump. For objectives 2 and 3, we judgmentally selected five federal vehicle fleets from five federal agencies: the U.S. Air Force (Air Force); the U.S. Department of the Interior's National Park Service (NPS) and Bureau of Indian Affairs (BIA); the National Aeronautics and Space Administration (NASA); and the U.S. Department of Veterans Affairs' Veterans Health Administration (VHA). We made our selection based on the following criteria: (1) varying fleet sizes, but none smaller than 1,000 vehicles; (2) a combination of military and civilian fleets; (3) a combination of fleets with mileage-based utilization levels above and below federal mileage-based utilization guidelines; (4) fleets that had not been audited by an organization other than GAO within the last 3 years; and (5) other considerations, such as use of telematics and adoption of utilization criteria other than the mileage guidelines in GSA regulations. We selected these fleets, which according to GSA ranged in size from 1,574 to 13,954 vehicles in 2014, to broadly discuss the experiences and practices across a cross section of the federal fleet. These results are not generalizable to their overarching agencies or other federal agencies. To determine GSA's role in identifying and reducing underutilized leased vehicles, we reviewed and analyzed relevant federal laws, regulations, executive orders, and GSA guidance to federal agencies for preparing Vehicle Allocation Methodology (VAM) submissions. We described GSA's role based on the responsibilities delineated in those documents.
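The fuel-type logic check described above can be sketched in a few lines. The record layout and fuel values below are hypothetical, since GSA's actual Drive-thru extract and its field names are not public; the sketch only illustrates the comparison being made:

```python
# Hypothetical sketch of the fuel-type logic check: flag a vehicle when any
# purchased fuel type over the period is not among the fuel types the
# vehicle can use (so multi-fuel vehicles are not falsely flagged).
def flag_fuel_mismatches(records):
    """records: dicts with 'vin', 'vehicle_fuel_types' (set of allowed
    fuels), and 'purchased_fuel_types' (set of fuels bought Jan-Apr)."""
    flagged = []
    for rec in records:
        # Set difference: any purchase outside the allowed set is a mismatch,
        # which may indicate fuel erroneously coded at the pump.
        if rec["purchased_fuel_types"] - rec["vehicle_fuel_types"]:
            flagged.append(rec["vin"])
    return flagged

sample = [
    {"vin": "A1", "vehicle_fuel_types": {"gasoline", "E85"},
     "purchased_fuel_types": {"E85"}},        # flex-fuel vehicle: no flag
    {"vin": "B2", "vehicle_fuel_types": {"diesel"},
     "purchased_fuel_types": {"gasoline"}},   # mismatch: likely miscoded
]
print(flag_fuel_mismatches(sample))  # → ['B2']
```

Counting the flagged VINs across the full extract would yield the kind of anomaly tally the logic checks were designed to produce.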
We also interviewed GSA officials, including Fleet Service Representatives (FSR), to better understand the role they play when working with federal agency fleet managers to identify underutilized leased vehicles. To corroborate information from GSA officials that FSRs speak with their agency fleet managers at least once a year to assist in identifying underutilized leased vehicles, and to determine any value that fleet managers assign to these conversations, we administered a non-generalizable, mixed-method questionnaire to 68 federal agency fleet managers. To ensure that our questions were meaningful and that we received accurate survey data, we pre-tested our survey with four representatives from four of our selected agencies. Using GSA's Drive-thru data, we selected fleet managers at our five selected federal agencies who were responsible for at least 20 GSA-leased vehicles. Through interviews with agency officials and FSRs, we learned that the contact information in Drive-thru was not sufficiently reliable for our purposes. Specifically, two of the four FSRs that we spoke with and officials from two selected agencies reported that Drive-thru does not contain reliable contact information for individuals who would have conversations with FSRs. These officials reported that some of the contacts in Drive-thru were actually end users, such as contractors. In other cases, the contact information was outdated. To address this, we requested that the selected federal agencies provide us with lists of current fleet managers within their agencies, and we matched those names to the list of fleet managers from the Drive-thru data. Agencies that were unable to provide independent lists of fleet managers verified which individuals from the Drive-thru data were in the fleet manager's role at their agency and would be the appropriate individuals with whom to discuss utilization.
This matching and verification process brought the survey selection pool to 114 fleet managers, yielding a reasonable number of contacts for BIA, NASA, and NPS, given their respective fleet sizes. However, our matching and verification process resulted in only four fleet managers for the Air Force and 80 for VHA. Since other fleet managers on the Air Force's list of current fleet managers met our survey pool parameters, we took a random sample of 16 fleet managers to add to the four we identified during the matching and verification process. Also, to avoid over-representing VHA, we randomly chose one fleet manager from each of the 19 Veterans Affairs regions. We sent the survey to a total of 69 fleet managers as follows: 12 at BIA; 12 at NPS; 6 at NASA; 20 at the Air Force; and 19 at VHA. However, during the survey period, the Air Force informed us that one of the selected fleet managers' roles no longer included responsibilities for GSA-leased vehicles. Therefore, the total number of selected fleet managers in the survey pool was 68. Fifty-one of the 68 fleet managers completed our survey, yielding a 75 percent response rate. As noted in our report, findings from this survey effort are not generalizable.

To determine the extent to which the assessment processes used by selected federal agencies facilitate the identification and removal of underutilized leased vehicles, we reviewed and analyzed pertinent federal laws and regulations; GSA guidance that described the VAM process; and the internal policies and procedures, such as fleet handbooks, that the selected federal agencies use to identify underutilized vehicles in the five fleets. We also interviewed officials from GSA and the five federal agencies about the agencies' responsibilities in identifying underutilized leased vehicles. We then compared these processes to federal internal control standards related to record keeping and management, as well as stewardship of government resources, as described in the 1999 Green Book.
To calculate the costs of the vehicles involved in these processes, we conducted a multi-step analytical process. First, we asked GSA to provide data on passenger vehicles and light trucks that were continuously leased from GSA during fiscal year 2014 (i.e., from October 1, 2013, through September 30, 2014, inclusive) for the five selected federal fleets. Table 7 shows how we defined passenger vehicles and light trucks for the purposes of this review. We focused on vehicles that GSA leased on a continuous basis (i.e., to a single agency) for at least fiscal year 2014 so that the agencies were fully accountable for the selected vehicles' mileage over the entire fiscal year. We scoped our work to include light trucks and passenger vehicles because they comprise the majority of GSA's continuously leased fleet, at 65 percent and 27 percent, respectively. We also asked GSA to exclude tactical, law-enforcement, and emergency-responder vehicles from the selected vehicle population, as well as vehicles located outside of the continental United States. We made these exclusions because, according to GSA officials, some agencies did not want law enforcement data, for example, released outside of GSA because it could be considered sensitive. In addition, we needed to develop a manageable selected population given the time and resources needed to investigate each vehicle. After receiving the data from GSA, we conducted various analytical tests to develop a dataset that was free from detectable errors. For example, we examined data on current and previous monthly odometer readings. We then determined which vehicles in the dataset had a current monthly odometer reading that was lower than the previous month's odometer reading. This allowed us to identify vehicles that likely had errors associated with their end-of-fiscal-year mileage and to remove them from the population of analysis.
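The odometer screen described above amounts to flagging any vehicle whose current reading fell below the prior month's. A minimal sketch, using invented readings and illustrative field names, might look like:

```python
# Hypothetical sketch of the odometer data-cleaning test: a current monthly
# reading lower than the previous month's indicates a likely recording error.
def flag_odometer_errors(vehicles):
    """vehicles: dicts with 'vin', 'prev_odometer', 'curr_odometer'.
    Return VINs of vehicles to remove from the analysis population."""
    return [v["vin"] for v in vehicles
            if v["curr_odometer"] < v["prev_odometer"]]

sample = [
    {"vin": "X1", "prev_odometer": 41_200, "curr_odometer": 42_050},  # plausible
    {"vin": "Y2", "prev_odometer": 38_900, "curr_odometer": 12_400},  # rollback
]
print(flag_odometer_errors(sample))  # → ['Y2']
```

Vehicles flagged this way would be dropped before any mileage-based comparisons, since their fiscal-year mileage cannot be trusted.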
We also analyzed over 15,500 fiscal year 2014 vehicle records from the five agencies that we reviewed. In total, selected vehicles from these agencies accounted for about 8 percent of the federally leased fleet, although the findings associated with this selection are not generalizable. Next, we determined which selected passenger vehicles and light trucks at each agency did not meet the miles-traveled guidelines in the Federal Property Management Regulations in fiscal year 2014 (12,000 miles and 10,000 miles, respectively). We then sent a list of the selected vehicles that had not met the miles-traveled guidelines to each agency and requested that they group the vehicles into one of the categories described below and depicted in figure 1:

- Group 2: No longer leased by the agency as of May 21, 2015;
- Group 4: Met a mileage-based utilization criteria defined by the agency;
- Group 5: Met a non-mileage-based utilization criteria defined by the agency;
- Group 6: Had a written justification in lieu of meeting the utilization criteria that the agency defined;
- Group 8: Was repurposed, given additional tasks, or reassigned within the agency during fiscal year 2015; and
- Group 9: Was retained beyond May 21, 2015, despite not meeting agency-defined utilization criteria, possessing a written justification for retention, or being given other tasks.

We also asked agencies to identify vehicles that they could not categorize and the reasons why—such as vehicles lacking readily auditable documentation, including information on whether the vehicle met the agency-defined utilization criteria in fiscal year 2014 (Group 3) and written justification for retaining vehicles that did not meet the agency-defined utilization criteria (Group 7). As these two groups—and vehicles in Group 9—stem from insufficient agency processes to identify and remove leased vehicles, we focused on determining the costs associated with the vehicles in these groups.
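The guideline screen that produced the lists sent to the agencies can be sketched as follows. The thresholds are the miles-traveled guidelines cited above (12,000 miles for passenger vehicles and 10,000 miles for light trucks); the vehicle records themselves are invented:

```python
# Miles-traveled guidelines from the Federal Property Management
# Regulations, as described in the text.
GUIDELINES = {"passenger": 12_000, "light_truck": 10_000}

def below_guideline(vehicles):
    """Return vehicles whose fiscal year 2014 mileage fell short of the
    guideline for their vehicle type (hypothetical record layout)."""
    return [v for v in vehicles
            if v["fy2014_miles"] < GUIDELINES[v["type"]]]

fleet = [
    {"vin": "P1", "type": "passenger", "fy2014_miles": 14_300},   # meets guideline
    {"vin": "P2", "type": "passenger", "fy2014_miles": 6_800},    # short of 12,000
    {"vin": "T1", "type": "light_truck", "fy2014_miles": 9_100},  # short of 10,000
]
print([v["vin"] for v in below_guideline(fleet)])  # → ['P2', 'T1']
```

Only the vehicles returned by such a screen would be sent to the agencies for categorization into the groups listed above.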
Agencies were responsible for categorizing each of the vehicles that GAO provided to them. We provided the agencies with each vehicle's license plate number, VIN, make, model, and other identifying information to assist in this process. We did not verify whether agencies categorized vehicles correctly, as some of the information necessary for these categorizations was contained within agency systems and records (for example, whether the vehicle met an agency-defined criteria or whether the vehicle was repurposed). However, to evaluate the overall reliability of agencies' vehicle justifications, we selected a small sample of vehicles from each agency and then requested written justifications from each of the agencies that reported having written justifications for those vehicles. We removed vehicles from the selected population if agencies reported that the vehicle should have been excluded from the review (for example, vehicles that agencies reported were law enforcement vehicles but that were not labeled as such in GSA's system). We also removed vehicles if the VIN that the agency provided did not match the VIN from the original information that GSA provided, as well as vehicles that agencies categorized in more than one group, among other data-cleaning efforts. We determined the cost paid to GSA for each vehicle in each of the nine groups using data from GSA. For each vehicle, we summed the following:

- the vehicle's fiscal year 2014 mileage rate multiplied by the total number of miles the vehicle traveled in fiscal year 2014;
- per-mile costs for additional equipment multiplied by the total number of miles the vehicle traveled in fiscal year 2014;
- the fixed monthly rate for additional equipment multiplied by 12 (for the 12 months of the fiscal year); and
- any flat monthly rate charges multiplied by 12 (for the 12 months of the fiscal year).

These costs represent the amount an agency paid to GSA for each vehicle in fiscal year 2014.
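The four-component sum described above reduces to a simple formula. The rates and mileage below are invented for illustration; GSA's actual billing records are not public:

```python
# Hypothetical sketch of the per-vehicle cost-to-GSA calculation described
# in the text: mileage charges plus per-mile and monthly equipment charges
# plus flat monthly charges over the 12-month fiscal year.
def annual_cost_to_gsa(v):
    return (v["mileage_rate"] * v["fy_miles"]            # mileage rate x miles
            + v["equip_per_mile_rate"] * v["fy_miles"]   # equipment per-mile x miles
            + v["equip_monthly_rate"] * 12               # equipment monthly x 12
            + v["flat_monthly_rate"] * 12)               # flat monthly x 12

vehicle = {"mileage_rate": 0.21, "fy_miles": 8_500,
           "equip_per_mile_rate": 0.02, "equip_monthly_rate": 5.00,
           "flat_monthly_rate": 180.00}
# 0.21*8500 + 0.02*8500 + 5*12 + 180*12 = 1785 + 170 + 60 + 2160
print(round(annual_cost_to_gsa(vehicle), 2))  # → 4175.0
```

Summing this quantity over the vehicles in each group yields the group-level annual costs paid to GSA reported in the review.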
However, these costs do not include other costs incurred by the leasing agency, such as the salaries of its fleet managers or the costs to garage the vehicles. Also, we did not have information on the opportunity costs of alternatives to replacing these leased vehicles. For example, if a vehicle is removed from an agency's fleet and another vehicle is used more frequently as a result, the agency would still pay for miles traveled or trips made by the other mode of transportation. Therefore, the costs associated with the groups are annual costs paid to GSA, and an undetermined percentage of these costs would reflect actual cost savings if vehicles were removed.

We conducted this performance audit from February 2015 to January 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

According to NASA policy, each NASA center should conduct an annual review of fleet utilization during the third quarter of each fiscal year. The review first identifies vehicles that fail to meet the minimum utilization goals, also called the "utilization target point." The target point is calculated by multiplying the average usage by 25 percent (0.25) for each vehicle type, such as sedans/station wagons, ambulances, intercity buses, and trucks with a gross vehicle weight of less than 12,500 pounds. In fiscal year 2014, sedans and trucks less than 12,500 pounds were required to meet the mileage target points shown in table 9 at their respective centers. According to NASA policy, individual vehicles within each vehicle type whose usage falls below the utilization target point will be added to the "utilization target list."
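The target-point calculation just described can be sketched as follows; the vehicle identifiers and mileage figures are invented, and the sketch assumes "average usage" means the average mileage within each vehicle type, as the text suggests:

```python
# Hypothetical sketch of NASA's utilization target point: 25 percent of a
# vehicle type's average usage; vehicles below it go on the target list.
def utilization_target_list(vehicles_by_type):
    """vehicles_by_type: dict mapping a vehicle type to a list of
    {'id', 'miles'} records. Returns IDs of below-target vehicles."""
    target_list = []
    for vtype, vehicles in vehicles_by_type.items():
        avg_usage = sum(v["miles"] for v in vehicles) / len(vehicles)
        target_point = avg_usage * 0.25  # 25 percent of average usage
        target_list += [v["id"] for v in vehicles if v["miles"] < target_point]
    return target_list

fleet = {"sedan": [{"id": "S1", "miles": 12_000},
                   {"id": "S2", "miles": 9_000},
                   {"id": "S3", "miles": 1_500}]}  # avg 7,500 → target 1,875
print(utilization_target_list(fleet))  # → ['S3']
```

Vehicles returned by such a calculation would then require the per-vehicle justification forms described in the policy.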
Programs, missions, or departments with vehicles on the target list are required to submit a new justification form for each individual vehicle on the list for center review and retention approval. These justifications are then evaluated during the annual review process, with possible outcomes including reassignment within the center, exchanging the vehicle for a different type of vehicle that better suits the mission, or returning the vehicle to GSA.

In addition to the contact named above, John W. Shumann (Assistant Director), Melissa Bodeau, Jennifer Clayborne, Monika Jansen, Davis Judson, Terence Lam, Malika Rice, Jerome Sandau, Alison Snyder, Michelle Weathers, Crystal Wesco, and Elizabeth Wood made key contributions to this report.
This guidance can be written (such as bulletins) or advice from GSA's fleet service representatives (FSR) to agency fleet managers. FSRs assist agencies with leasing issues, and GSA expects its FSRs to communicate with fleet managers about vehicle utilization at least annually. However, 18 of 51 fleet managers GAO surveyed reported that they had never spoken to their FSR about vehicle utilization. GSA has no mechanism to ensure these discussions occur and therefore may miss opportunities to help agencies identify underutilized vehicles. While the selected agencies—the Air Force, the Bureau of Indian Affairs (BIA), the National Aeronautics and Space Administration (NASA), the National Park Service (NPS), and the Veterans Health Administration (VHA)—took steps to manage vehicle utilization, their processes did not always facilitate the identification and removal of underutilized vehicles. Certain selected agencies (1) could not determine whether all vehicles were utilized, (2) could not locate justifications for vehicles that did not meet utilization criteria, or (3) kept vehicles that did not undergo or pass a justification review. These agencies paid GSA about $8.7 million in fiscal year 2014 for leased vehicles that were retained but did not meet utilization criteria and did not have readily available justifications (see table). Of the selected agencies, NASA and VHA did not apply their utilization criteria to nearly 400 vehicles, representing about $1.2 million paid to GSA in fiscal year 2014. However, these agencies have taken steps to rectify the issue. The Air Force, BIA, NPS, and VHA could not readily locate justifications for over 1,500 leased vehicles that did not meet utilization criteria, representing about $5.8 million. BIA and NPS are planning action to ensure justifications are readily available in the future.
As of May 2015, NPS and VHA had retained more than 500 vehicles—costing $1.7 million in fiscal year 2014—that were not subjected to or did not pass agency justification processes. While costs paid to GSA may not equal cost savings associated with eliminating vehicles, without justifications and corrective actions, agencies could be spending millions of dollars on vehicles that may not be needed. GAO recommends, among other things, that GSA develop a mechanism to help ensure that FSRs speak with fleet managers about vehicle utilization, that the Air Force and VHA modify their processes for vehicle justifications, and that NPS and VHA take corrective action for vehicles that do not have readily accessible written justification or did not pass a justification review. Each agency concurred with the recommendations and discussed actions planned or underway to address them. |